
Sunday, 29 July 2012

Sysinternals Suite





The Sysinternals Troubleshooting Utilities have been rolled up into a single Suite of tools. This file contains the individual troubleshooting tools and help files. It does not contain non-troubleshooting tools like the BSOD Screen Saver or NotMyFault.
The Suite is a bundling of selected Sysinternals utilities.

N-Tier Architecture: The Business Rules Layer




The Business Rules Layer in an N-Tier architecture is the layer that contains the business logic and/or business rules of the application. Reserving a separate layer strictly for business logic is a major advantage, in that any changes to business rules can be made there without affecting the rest of the application.
Assuming that the interfaces between the layers stay the same, changes to the functionality or processing logic in the Business Rules Layer can be made readily without affecting the other layers. In the past, many client/server implementations failed precisely because changing the business rules or logic was such a difficult process.
The Business Rules Layer in an N-Tier system is where all the application's brainpower resides. It contains the data manipulation, business rules, and all the other important components your business needs. If you happen to be creating a search engine and need to weight or rate every matching item according to some custom criteria – say, the number of times a keyword was found in the result, or a quality rating – that logic belongs in this layer. The Business Logic Layer does not know anything about HTML, and it does not output HTML either.
Nor does the Business Logic Layer care about things like SQL or ADO. It should not contain any code for accessing the database; such tasks belong to the layers above and below it.
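To make the separation concrete, here is a minimal sketch of the search-result weighting example above. The class names and the weights (SearchResult, ResultRanker, 0.7/0.3) are illustrative assumptions, not from the original article; the point is simply that the class contains pure business logic, with no HTML and no SQL.

// Illustrative business-rules class: pure logic, no HTML, no SQL.
using System;

public class SearchResult
{
    public string Title;
    public int KeywordHits;    // times the keyword appeared in the result
    public int QualityRating;  // e.g. an editorial rating from 1 to 5
}

public class ResultRanker
{
    // The weighting rule lives here, in the business rules layer.
    // The weights are arbitrary placeholders.
    public double Score(SearchResult r)
    {
        return (r.KeywordHits * 0.7) + (r.QualityRating * 0.3);
    }
}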

Business Rules Layer in Three Tier Architectures

While N-Tier architecture typically refers to three or more levels, three-level architectures are by far the most common. The three levels consist of the user interface, the business rules layer, and the data. In classic ASP, the user interface level would consist of HTML tags alongside the VBScript used to construct the page layout, and the data level would be a database such as SQL Server.
The business rules layer was often badly misused, with detrimental results. In many Classic ASP web applications the business rules layer did not contain any business rules at all; it merely consisted of stored procedure calls wrapped inside Visual Basic pass-through components.
The only real reason Classic ASP developers even bothered with VB components was the performance advantage of leveraging compiled code. Clearly, the term “object oriented” received little more than lip service throughout the Classic ASP period.

The Business Rules Layer in ASP.NET

Let us take a look at how the business rules layer functions in the realm of ASP.NET. Here, every page is inherently an object, so it follows that every page is object-oriented code. That shift has not registered, however, with a lot of developers who previously worked in Classic ASP and are now making the transition to ASP.NET.
Many of these developers still think about the business rules layer of their applications in the old-fashioned manner. The vast majority of ASP.NET applications one sees today have increasingly thick user interface levels alongside incredibly thin business rules levels.
In this world, the old performance argument no longer applies: all ASP.NET code is compiled, so that advantage is gone. The only advantage that remains is that stored procedure calls are encapsulated.
The only thing missing now is the business rules. In order to effectively leverage object-oriented concepts in one's development, one must first change one's way of thinking about the business rules layer. Rather than treating it merely as a way of extracting data from the database, the business rules components should be built as a Software Development Kit, in a fashion similar to the .NET Framework itself. The object-oriented tools that .NET supplies, such as polymorphism and inheritance, make this process easy.
In a business's .NET object hierarchy, the name of the business is the top-level namespace, much like the System namespace in the .NET Framework. The rest of the namespace hierarchy consists of functional and application subsystems, and these should stay as close as possible to the standard namespaces implemented in the .NET Framework.
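As a sketch of what such a hierarchy might look like, here is a hypothetical layout for a company called Contoso (the company name and subsystem names are invented for illustration):

// Hypothetical namespace hierarchy mirroring the System.* conventions
// of the .NET Framework. All names below are placeholders.
namespace Contoso.Data
{
    // shared data-access base classes and helpers
    public abstract class DataComponent { }
}

namespace Contoso.Crm.Customers
{
    // an application subsystem: customer-related business components
    public class CustomerManager { }
}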
Even an impeccably organized .NET namespace system will not solve the problem if the business rules components do not contain the correct logic. There are a few vital things that all business rule layers should contain. For one thing, the components must always be self-validating.
When business logic components are built into an SDK, they are effectively disconnected from your web application and from any input validation it performs. Business rules components should therefore be the last line of defense, ensuring that only valid values make their way into the database.
Even where field constraints have been implemented in the database, it is still vital to validate input data, so that more informative custom exceptions can be thrown to the component's consumers.
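A minimal sketch of such a self-validating component, assuming a hypothetical Customer class with an Email property (the validation rule here is deliberately simplistic):

// Self-validating business component. Even if the database also enforces
// this constraint, validating here lets the component throw an informative
// exception to its consumers instead of a raw database error.
using System;

public class Customer
{
    private string _email;

    public string Email
    {
        get { return _email; }
        set
        {
            if (value == null || value.IndexOf('@') < 0)
                throw new ArgumentException(
                    "Email must contain an '@' character.", "Email");
            _email = value;
        }
    }
}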
In terms of database connectivity, each business rules component that communicates with a database should inherit from a base class that implements a property for the database connection string. Since components may be used by a number of different consumers, it is not good to bake connection string retrieval logic into them. It is far better to have the consumer provide the necessary data at run time.
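A sketch of such a base class follows; the names (DataBoundComponent, OrderManager) are invented for illustration:

// Base class for database-aware business components. The consumer
// supplies the connection string at run time; nothing is baked in.
public abstract class DataBoundComponent
{
    private string _connectionString;

    public string ConnectionString
    {
        get { return _connectionString; }
        set { _connectionString = value; }
    }
}

// A concrete component inherits the property and uses it whenever
// it needs to talk to the database.
public class OrderManager : DataBoundComponent
{
}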
The user interface level of any application should always be free of ADO.NET – and that includes ASP.NET web applications. While ADO.NET is quality technology for retrieving and manipulating data, it does not work well for representing the logical entities within an application.
Typed DataSets can be employed, but they are quite often a nightmare to manage. Instead, business rules components should distill ADO.NET objects into objects that represent the entities within your system. Say, for example, that you are retrieving a list of customers from your database.
Do not return a DataSet with customer data in it. Rather, create a typed collection of Customer objects and pass that back. While constructing the typed collection, you can also perform any logic that was not already performed by the stored procedure.
You can still data-bind your typed collection as long as it implements the IEnumerable interface, and you can take advantage of any custom properties, sub-objects, and methods implemented within the objects.
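Here is a sketch of that distillation, reusing the Customer class from the validation example above. The stored procedure name ("GetCustomers"), the column name, and the CustomerGateway class are assumptions for illustration; CollectionBase already implements IEnumerable, so the returned collection is bindable.

// Distilling ADO.NET results into a typed collection instead of a DataSet.
using System.Collections;
using System.Data;
using System.Data.SqlClient;

public class CustomerCollection : CollectionBase
{
    public void Add(Customer c) { List.Add(c); }
    public Customer this[int index]
    {
        get { return (Customer)List[index]; }
    }
}

public class CustomerGateway
{
    public CustomerCollection GetCustomers(string connectionString)
    {
        CustomerCollection customers = new CustomerCollection();
        using (SqlConnection conn = new SqlConnection(connectionString))
        using (SqlCommand cmd = new SqlCommand("GetCustomers", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            conn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    Customer c = new Customer();
                    c.Email = (string)reader["Email"];  // column name assumed
                    customers.Add(c);
                }
            }
        }
        return customers;
    }
}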

The Bottom Line

Ultimately, the code-behind class for each ASP.NET page should contain nothing more than the glue that binds the company's business rules SDK to the elements of the page, and the elements of the page to each other, through event wire-ups.
All the real logic that comprises the web application should be put into a framework of business rules components that can be reused by a variety of different consumers, including web services, mobile devices, Windows Forms applications, and more.
A move to ASP.NET entails a lot more than merely changing a platform or technology that has been used in the past. It also means changing one’s mindset about the way in which an application has been constructed, and where its pieces should go.

N-Tier Architecture Presentation Logic Layer




The Presentation Layer in an N-Tier structure is commonly referred to as the “client” layer. It consists of the parts that present data to an end user. Examples of components on the Presentation Layer include edit boxes, labels, text boxes, grids, buttons, and Windows or Web forms. The Presentation Layer can be either Windows based or Internet based.
What an Internet Based Presentation Layer Looks Like
Let us take a look at a correctly formatted Presentation Layer. This one consists of a Web server (IIS, in this instance), Web pages, and Web components.


All of this will be seen by the end user in a browser, such as Internet Explorer. Simple, right?
Now let's take a look at a more in-depth Presentation Logic Layer.
As the diagram above shows, the Presentation Logic Layer provides your user with an interface into your application. It consists of standard things you are probably already quite familiar with, including Windows forms and ASP documents. It relies on the results generated by the Business Tier to transform data into something that can be used, read, and understood by the end user.

Conclusion

The presentation layer is also sometimes referred to as the client layer. It consists of components that present data to the end user. These might include Windows or web buttons and forms, edit and text boxes, grids, labels, and more. In short, the presentation layer is a key component of any N-Tier system; without it, as the name implies, nothing is presented to the end user, no matter how well the system functions otherwise.

N-Tier Application Partitioning




Application partitioning is a vital process, as it provides the opportunity to clearly define an even distribution of an application's presentation, processing, and key data components. The components may be distributed over several different physical machines, or across many memory address spaces.
Application partitioning maximizes the inherent benefits of a multi-tiered computing model by distributing application processing across the full spectrum of the system's resources. For those who wish to achieve quality N-Tier distributed computing throughout their business, application partitioning is a necessary step, and how it is done can be critical to the outcome of the N-Tier application.
When it comes to executing an application partition successfully, the only limit on the number of tiers is the number of computers available. Since the processing load is, in theory, distributed across many different processors, the ultimate in N-Tier client/server scalability can be attained through application partitioning.
In the past, client/server architectures were riddled with problems and limitations: low reliability, overloading from an excessive number of clients, reduced network bandwidth, reduced performance, high maintenance needs, and low flexibility. Application partitioning is a successful way of overcoming all these limitations.
Application partitioning provides two essential benefits. First, it reduces the turnaround time needed to produce a data result. Second, it decreases the network traffic required to transfer data to the client.
The partitioning of applications fills a major void in large-scale client/server systems. It allows a flexible distribution of application logic, the end result being optimized performance.
Of course, a successful application partitioning project is largely contingent on quality tools that will be able to leverage such emerging technologies as object oriented systems as well as peer to peer communications. As a result, better performance, reliability, transparency, and flexibility are all provided for large scale enterprise wide computing systems – in short, a must for any successful Business operating in this day and age.

Application Development

The partitioning of applications provides a client/server application development team with the tools needed to support an N-Tier application architecture, as well as the capability of constructing a truly distributed application.
With application partitioning tools, the application can be designed on a single client PC. Afterwards, parts of it can be relocated to any server the network is able to access.
Developers should be able to view the application as a single logical program. They should not have to concern themselves with issues such as which components will be deployed on clients and which on servers, or whether the machines in question are Macintoshes or PCs.
Once the application has been built and tested, it can be partitioned. That should theoretically be as simple as dragging an application object onto a server icon. The application partitioning tools then generate and compile native 3GL code in the background on the target servers, which in turn perform the necessary processing.
The end result should be a client program plus one or more service programs.
Such programs are meant to be run on specific forms of hardware, and to also interface with designated software, such as operating systems, GUI, middleware, database management systems, or communications mechanisms.

N-Tier Application Partitioning Benefits

Application partitioning brings with it numerous benefits. Some of these benefits appeal to the more Business oriented consumer, while other benefits fall on the technical side of things.
In today's business environment, it is vitally important to take performance issues into consideration. Those who use first-generation client/server application development tools often find that simple applications, typically interpreted 4GL code running on PCs or workstations, provide adequate response times for a limited number of users. The problem arises when the number of users increases, or when more complex applications are needed. In those instances, performance (i.e. perceived user response times) is not likely to be adequate.
Deploying the application components involved in major business processes on a powerful UNIX server accessible to all client PCs does you a major favor by freeing up scarce resources on the PC clients for the presentation components.
At the same time, by placing the application components that perform the majority of the interactions with the RDBMS on the same machine as the RDBMS itself, one can seriously reduce both network traffic and contention for network resources.
Another benefit of application partitioning is an increase in application performance. It enables data processing logic to be closer to the data, while Business processing logic can be placed on to a faster application server.
In short, the primary benefits of application partitioning are:
  • An increase in the scalability of applications.
  • Support for numerous and diverse hardware and software configurations.
  • An increase in security, since sensitive or business-critical processes can be isolated.
  • A higher degree of maintainability, since components that tend to change often can be isolated and only one or a few copies of shared components need be kept.
  • Reuse of components and objects, enabling services to be shared within and among various applications.
  • Better support for the organization's overall structure, with business data and business processing logic deployed in close proximity to end users and/or owners.
  • Separation of the rules of the business from both data and presentation.
Moreover, services can be readily partitioned in order to allow for sharing – and that extends not merely to clients working within a particular application, but among clients working in several different (separate) applications.

Objects and Components

Taking such a modular approach to application design, coupled with defining and using well-defined component interfaces, allows vital pieces of business processing logic (i.e. the rules of the business) to be defined in Business Objects and subsequently reused among numerous different applications. This type of reuse enforces consistency across the business while also making maintenance easy: the reusable components can be employed in several places, but their definition and maintenance occur in just one place. Moreover, any single component of the application can be updated without interfering with any other components.
Another option is to alter the platform upon which several specific components are operating. For example, one can upgrade a server machine to a newer model that contains more processing power – and do that without having to alter the components that are running on other servers or clients.

N-Tier Applications

N-Tier applications are useful in that they readily implement distributed application design and architecture concepts. These applications also provide strategic benefits to enterprise-level solutions. It is true that two-tier client/server applications may seem deceptively simple at the outset – they are easy to implement and handy for rapid prototyping. Over time, however, these applications can be quite a pain to maintain and secure.
N-Tier applications typically come loaded with the following components:
  • Security. N-Tier applications provide logging and monitoring mechanisms, as well as appropriate authentication, ensuring the system is always secure.
  • Availability and scalability. N-Tier applications tend to be more reliable, and come with failover mechanisms such as failover clusters to ensure redundancy.
  • Manageability. N-Tier applications are designed with deployment, monitoring, and troubleshooting in mind, ensuring one has sufficient tools at one's disposal to handle any errors that occur, log them, and provide guidance towards correcting them.
  • Maintenance. Maintenance in N-Tier applications is easy, as the applications adopt coding and deployment standards, as well as data abstraction, modular application design, and frameworks that enable reliable maintenance strategies.
  • Data abstraction. N-Tier applications make it possible to adjust functionality easily without altering other applications.
As useful as they are, there are also situations when an N-Tier application might not be the ideal solution for one's business needs. Above all, keep in mind that building an N-Tier application takes a lot of time, experience, skill, commitment, and maturity – not to mention the high cost. If you are insufficiently prepared in any of these areas, building an N-Tier application may not be appropriate at this moment. A successful N-Tier application requires a favorable cost-benefit ratio from the outset.
First off, you should fully understand what an N-Tier application is, what it does, and how it functions. To put it in the simplest terms possible, N-Tier applications distribute a system's overall functionality into a number of layers, or “tiers”.
In a typical implementation, for example, you will most likely have at least some of the following layers, if not all: Presentation, Business Rules, Data Access, and Database. In some instances it is possible to split one or more of these layers into several sub-layers. Each layer can be developed separately from the others, as long as it communicates with the other layers and adheres to the standards set out in the specifications.
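One way to picture that contract between layers is a shared interface. The sketch below (ICustomerService and its stub implementation are invented names) shows how a layer can be developed, tested, or replaced independently as long as the interface is honored:

// Each layer depends only on an agreed contract, not on a concrete
// implementation in another layer.
public interface ICustomerService
{
    string GetCustomerName(int customerId);
}

// A stand-in implementation; a presentation layer coded against
// ICustomerService never needs to know whether the real implementation
// talks to SQL Server, a web service, or this in-memory test double.
public class CustomerServiceStub : ICustomerService
{
    public string GetCustomerName(int customerId)
    {
        return "Test Customer " + customerId;
    }
}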

N-Tier Application Manageability




While it is true that N-Tier applications tend to provide almost limitless scalability, the desire to change or add functionality can present challenges in more than one arena. Large-scale growth can make capacity planning quite hard. When applications have exhausted the available resources, some provision must be made to borrow resources in order to support unexpected workloads. This is where manageability becomes key.
Manageability entails the sharing of resources, simplicity, and centralized management. Complexity forces organizations to maintain competitive levels of service through a flexible architecture that allows reactive scalability, with a positive impact on both cost and service level. Such challenges have shown that traditional architectures cannot make efficient use of existing Information Technology infrastructures.

Service Quality

Every year, more and more users depend on the World Wide Web as a means of conducting business in both the personal and corporate sectors. Businesses must differentiate between classes of users while accounting for different forms of usage. They must also maximize service level provisioning while including performance, predictability, and service availability in these measures.
As a result, the platform infrastructure must be designed with predictable and differentiated qualities of service in mind. What is needed is an infrastructure that can support a service-based application approach: an N-Tier architecture that includes accounting and management, dynamic resource allocation, cluster support, infrastructure management, heterogeneous legacy integration, multi-platform Java technology, and a multilevel security model.

Scalability

As the World Wide Web continues to grow and more businesses than ever go global, the cost of providing quality customer service keeps rising, driven by rapid changes in company growth, the cost of management, the complexities of implementation, the pace of deployment, and more. Businesses that wish to survive in this fast-paced environment must provide high standards of service in their global operations in order to gain an advantage over the competition while fostering consumer loyalty.
These days, competition is only a mouse click away. Given all these changes in the business sector, the scalability, availability, and manageability of infrastructures become central factors in raising service levels. The ubiquity of the World Wide Web clearly demands greater agility and flexibility, and enterprise-wide information tools and infrastructures must constantly be improved in the new globally competitive business environment.
As the internet and corporate intranets continue to grow at a dizzying rate, Businesses have to position themselves for the agility and growth that is necessary to take on an increasing amount of users, more services, and a more challenging workload. Business requirements that change rapidly tend to force information systems to operate together with external and corporate resources in a reliable, interactive, secure fashion. At the same time, the flexibility to adapt to rapidly changing Business atmospheres has to be maintained.
The Information Technology infrastructure is now critical to competitiveness in the economic sector. Whereas this infrastructure once functioned as an internal form of support, nowadays it serves as the Business’s main profit vehicle and enables transactions to occur. Such demands tend to push the limits of information infrastructures as they currently exist. In order to remain competitive, Businesses are increasingly seeking solutions that manage to safeguard current investments in infrastructure while also deploying the necessary capabilities to provide a high degree of predictability, flexibility, and availability – all factors for success.
For front-end Web server implementations, the scale-out approach is a very good idea. It enables service requests to be handled by a pool of similarly configured servers, each of which provides the same services to all clients. A load-balancing appliance or router distributes incoming requests evenly across the server farm. A hot-standby load-balancing appliance, together with redundant ISP connections, ensures there are no single points of failure.
As middleware applications grow more sophisticated, so does the practicality and value of scaling out in to the central tier of the N-Tier model. Just as in the front end instance, Businesses are then able to add on computing power in increments utilizing pools of cost effective Intel based servers. Rather than continuously outgrowing and then having to replace single server solutions, Businesses can then add servers on as they need them in order to accommodate growth over time.
Those seeking a major example of scaling out at its finest need look no further than the popular search engine Google. As a matter of fact, Google takes its scaling out process to the logical extreme, hosting both its search engine and its database over several thousand cheap uni-processor Intel based servers. Each of Google’s Intel servers is configured with two resident disk drives. As a means of streamlining its operations in such a huge distributed atmosphere, Google decided to develop its own applications for such functions as new server builds, load balancing, and remote management.
E-Business has been increasing the complexity as well as the volume of Business data. As more and more applications are integrated in to the enterprise and the volume of users increases, the integrity of data has to be controlled across larger stores of data. While clustering might be a common feature on the database and back end layers of the N-Tier model, the utilization of redundant numbers of cheap servers is not a very practical option at this point in time. Rather, a more traditional scaling up approach should continue to be the main method of scaling database applications in the near future.

The Utilization of Resources

By improving resource utilization, one can drastically reduce the cost of providing increased levels of service. Software and hardware controls must be adopted to enable more than one application to run on a single machine. Consolidating servers is paramount if you want to increase the return on investment from under-utilized resources: whereas mainframes tend to run at eighty to ninety percent of their capacity, distributed systems tend to run at merely fifteen to twenty-five percent. Organizations are beginning to find that allocations must be adjusted in order to take full advantage of available resources.
What is vital is finding a method for running several different applications on a single server. Each application should be given a minimal level of service that is free from resource contention and security concerns. Exercising control should enable dynamic adjustment via management policies.

Availability and Predictability

Businesses in this day and age are driven by information, and the demands placed on IT infrastructures are higher than ever before. There is an increasing need to access and analyze corporate data in real time, analyze trends, update databases, and provide a high level of customer satisfaction – all around the clock. Computers can no longer just add capacity; they must also be reliable, available, and predictable in order to meet user and application requirements.
A data center has to be available, due to the increasingly unpredictable demands made by the World Wide Web. Competition nowadays is just a click away; thus, services have to be made available around the clock so that they are always accessible to both clients and customers. Disruption of service has to be minimized – especially during routine maintenance and system upgrades.
It is necessary for systems to be capable of being patched, repaired, and debugged online. Resources must be redirected directly, automatically, and dynamically to make sure that service levels are maintained. To ensure maximum effectiveness, businesses must learn to deliver capacity, availability, and predictability through a well-chosen infrastructure. They must also have a readily scalable and manageable operating system to work with.
Above all, businesses must take the three P's into consideration: people, product, and process. People and process generally account for eighty percent of a system's availability; only twenty percent originates from within the system itself. It is thus vital to keep product manageability in mind, as it helps reduce operator errors.
Manageability affects both the people and process aspects of a system's availability. To increase availability, disciplined processes and procedures must be maintained consistently. To have an impact on availability, infrastructure platforms must simplify the deployment, maintenance, and management segments of the operation.

Manageability

Scaling an IT infrastructure inevitably brings a great deal more complexity. This increase in complexity has the unfortunate effect of rendering the data center environment much less capable of coping with rapid changes in applications and in business demand for services. Indeed, the effort necessary to manage resources tends to grow at a much faster pace than the resources themselves.
Manageability thus has a great impact on both availability and scalability. To be as effective as possible, a business must improve its management efforts while further simplifying its data center architecture. Towards that end, businesses large and small must centralize, automate, and simplify as many processes as possible, while incorporating a management framework that improves platform architecture manageability.

What is N-Tier Architecture?



Introduction

This is the first in a series of articles exploring the world of n-tier architecture in terms of the Microsoft .NET platform and associated framework. 
The first of these is meant as an introduction to n-tier architecture and, as such, tries to explain the reasoning behind developing applications in this way, and how it can be achieved, without going into complex implementation details. That will come later. I suppose that by even mentioning n-tier in my opening sentence I've jumped the gun somewhat, so let me backtrack slightly and explain.
The first question to ask is if this is just a new fad or fashion. After all, we've been through several iterations of various architectures all of which have failed at some level. Well, maybe! Modern architectural development techniques evolve and are based on our latest failures. This is a good thing. It shows that we are learning from our mistakes. Sure we have had a few setbacks (like the thin client episode), but in general everything exists for a reason and has developed from decades of hit-and-miss projects. 
So how often have you implemented a new system and had a nagging doubt in your mind whether it would stand the test of time? - the main question being: "What if my system is spectacularly successful? Will it become a victim of its own success, or will I be lucky?" 
For the last few years, system architects have been touting splitting a system into tiers. Unfortunately, many companies have yet to embrace it, fearing that they will overcomplicate their systems, increase maintenance costs (after all, if a system is in several places, it must be more expensive to run) and push up salaries because they have to hire more qualified staff. 
Let me get something straight. In the first place you should never initiate a new systems development project without a good business case. This usually boils down to the fact that the system you implement will help your company make even more money. If you can't justify it in those terms, dump the project. Therefore, I can guarantee that n-tier systems will save you money in the short- to medium-term in hardware, software development and software maintenance costs. 
In the next few articles I will show you how. 

N-Tier Explained

For those who haven't read (or quite understood) the hundreds of other articles on multi-tier development, here is a quick reminder. It is perhaps useful to go through the various stages that we, as software developers, have been through in order to give us some perspective on what is actually possible today. 

Single Tier

When automation first hit business, it was in the form of a huge "Mainframe" computer. Here, a central computer served the whole business community and was accessed via dumb terminals. All processing took place on a single computer - and therefore in one place. All resources associated with the computer (tape and disk drives, printers etc.) were attached to this same computer. This is single tier (or 1-tier) computing. It is simple, efficient, uncomplicated, but terribly expensive to run. 
Figure 1 - Single Tier Architecture
Figure 1 shows a physical layout of a single tier environment. All users run their programs from a single machine. The ease with which deployment and even development occurs makes this model very attractive. The cost of the central machine makes this architecture prohibitive for most companies, especially as system costs and return on investment (ROI) are looked at carefully nowadays. 

Dual Tier Environments

In the 1980s, a revolution happened in the world of business computing: the Personal Computer (nearly unimaginable until then) hit the streets. The PC quickly became standard equipment on desks around the world. Personal software was also in demand and, with the advent of Windows 3.0, this demand became a roar.
In order to provide personal software which ran on personal computers, a model needed to be found in which the enterprise could still share data. This became known as the client/server model. The client (on the personal computer) would connect to a central computer (server) and request data. Even with limited network bandwidth, this avoided the need for expensive infrastructure, since only data would be transmitted, not the huge graphics necessary to draw a Windows application. Figure 2 shows the very model implemented in most organizations today. This model is also quite easy to implement: all you need is an RDBMS, such as MS-SQL Server 2000, running on Windows 2000 Server and a PC running TCP/IP. Your application connects to the database server and requests data, and the server just returns the data requested.
Figure 2 - Client Server Physical Model
There are, however, several problems with this model: 
  1. The connections are expensive - they take a long time to establish and require a lot of RAM on the server. Because connecting is slow, most applications connect when the client application launches and disconnect when it shuts down. Of course, if the application crashes, the connection is left open on the server and resources are lost (a mitigation sketch follows this list).
  2. Only a limited number of users can connect to a server before SQL Server spends more time managing connections than processing requests. Even if you are willing to increase server resources exponentially as your user base grows (get your corporate wallet out), there still comes a time when your server will choke. This can be solved in part by splitting the database in two or three and replicating the data, but that is definitely NOT recommended, as replication conflicts can occur: the more users you connect, the more errors you're likely to get.
  3. This method is very cost-ineffective. Many users only use their connection 2-3% of the time; the rest of the time it just sits there hogging memory resources. This particular problem is usually resolved artificially by using a TP monitor such as Tuxedo, which pools and manages the connections in order to provide client/server for the masses. TP monitors are quite expensive and sometimes require their own hardware platform to run efficiently.
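As a minimal sketch of the "connect late, disconnect early" discipline that mitigates problems 1 and 3 (the table and class names are invented, and ADO.NET is assumed): the using blocks guarantee that the connection is closed - returned to the pool - even if an exception is thrown, so no connection is left dangling on the server.

// Open the connection as late as possible and release it immediately;
// ADO.NET connection pooling makes this cheap.
using System.Data.SqlClient;

public class ProductLookup
{
    public int CountProducts(string connectionString)
    {
        using (SqlConnection conn = new SqlConnection(connectionString))
        using (SqlCommand cmd = new SqlCommand(
                   "SELECT COUNT(*) FROM Products", conn))  // table assumed
        {
            conn.Open();                      // connect as late as possible
            return (int)cmd.ExecuteScalar();  // connection closes on dispose
        }
    }
}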
Figure 3 puts the above into context and shows the logical system components - most of which are on the client. 
Figure 3 - Logical View of a 2-Tier Architecture

The Alternatives

With the advent of the Internet, many people jumped to the conclusion that the days of the mainframe were back. Client/server had obviously failed, personal computers had failed and, above all, Windows was on its way out. A host of "thin client" applications were developed, usually by overzealous IT managers hoping to wrest computing control back from the users. TCO - Total Cost of Ownership - was the watchword of the day, and everyone was consumed by downsizing the client. Thus 3-tier applications were born. These applications run the traditional client/server model, but from a web server.
Figure 4 - 3-Tier Thin Client Architecture
The client only displays the user interface and data, but has no part in producing the results. Figure 4 shows the physical representation of such architecture, whilst Figure 5 gives a logical view. 
This architecture presents one advantage over the former: a well-implemented web server can manage and pool database connections as well as run the applications. The disadvantage is that the web server is quickly overwhelmed by requests and must either be clustered or upgraded.
Figure 5 - Logical 3-Tier View
Did you also notice that the software model has not significantly changed from the 2-tier model? We have merely moved the 2-tier client processing onto the web server. Also, thin-client user interfaces are, by their very nature, not as rich as their Windows counterparts. Applications developed using this model therefore tend to be inferior to their Windows counterparts.
The key to really making an application scalable, as you may have guessed, is to split up the processing (in the red boxes) between different physical entities. The more we can split it up, the more scalable our application will be.
"Isn't this expensive?" I hear you cry. "I need a different server for each layer; the more layers, the more machines we will need to run." Well, this is true if you have thousands of users accessing your system continuously, but if you don't, you can run several layers on the same machine. Also, purchasing many lower-spec servers is more cost-effective than buying one high-spec server.
Let's explore the various layers we can create, starting from the logical model in Figure 5.

Overview of an N-Tier System

The Data Layer

The data layer can usually be split into two separate layers. The first consists of the set of stored procedures implemented directly within the database. These stored procedures run on the server and provide basic data only. Not only are they pre-compiled and pre-optimized, they can also be tested separately and, in the case of SQL Server 2000, run within the Query Analyzer to make sure there are no unnecessary table scans. Keep them as simple as possible and don't use cursors or transactions: cursors are slow because they process rows one by one instead of as a set, and transactions are better handled by the layer above, where ADO.NET gives us much more control over these things.
Figure 6 - N-Tier Logical Model
The next layer consists of a set of classes which call and handle the stored procedures. You will need one class per group of stored procedures, handling all Select, Insert, Update, and Delete operations on the database. Each class should follow OO design rules and be the result of a single abstraction - in other words, handle a single table or set of related tables. These classes handle all requests to or from the actual database and provide a shield for your application data. All requests must pass through this layer, and all concurrency issues can and must be handled here. In this way you can make sure that data integrity is maintained and that no other source can modify your data in any way.
If your database changes for any reason, you can easily modify your data layer to handle the changes without affecting any other layers. This considerably simplifies maintenance.
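A sketch of one such data-layer class, under the assumption of a hypothetical Orders table with an OrderInsert stored procedure (all names invented):

// One data-layer class per abstraction; every access to the Orders
// table funnels through this class.
using System.Data;
using System.Data.SqlClient;

public class OrderData
{
    private string _connectionString;

    public OrderData(string connectionString)
    {
        _connectionString = connectionString;
    }

    public void Insert(int customerId, decimal amount)
    {
        using (SqlConnection conn = new SqlConnection(_connectionString))
        using (SqlCommand cmd = new SqlCommand("OrderInsert", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.AddWithValue("@CustomerId", customerId);
            cmd.Parameters.AddWithValue("@Amount", amount);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}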

Business Rule Layer

This layer is implemented in order to encapsulate your business rules. If you have followed best practices, you will have created a set of documents which describe your business. In the best of cases, you will have a set of use-cases describing your business in precise detail. From this you will have been able to create a class association diagram which will help you create your business layer. 
Here we find the classes which implement your business functionality. They neither access data (except through the data layer) nor bother with the display or presentation of data to the user. All we are interested in at this point are the complexities of the business itself. By isolating this functionality, we can concentrate on the guts of our system without worrying about design, workflow, or database access and its related concurrency problems. If the business changes, only the business layer is affected, again considerably simplifying future maintenance and/or enhancements.
In more complex cases it is entirely possible to have several business layers, each refining the layer beneath, but that depends on the requirements of your system. 
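Continuing the sketch from the data layer above, a business-layer class might look like this (the rule itself is invented for illustration); note that it reaches the database only through the OrderData class:

// Business-layer class: enforces a rule, then delegates persistence
// to the data layer. No SQL and no presentation code here.
using System;

public class OrderRules
{
    private OrderData _data;

    public OrderRules(string connectionString)
    {
        _data = new OrderData(connectionString);
    }

    public void PlaceOrder(int customerId, decimal amount)
    {
        // example business rule: no zero or negative orders
        if (amount <= 0)
            throw new ArgumentException("Order amount must be positive.");
        _data.Insert(customerId, amount);
    }
}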

Workflow Layer

This is one of the optional layers and deals with data flowing to and from your system. It may or may not interact directly with the user interface, but it always deals with external data sources.
For instance, if you send or receive messages through a messaging queue, use a web service for extra information, or exchange information with another system, the code to handle this belongs in this layer. You may wish to wrap your whole application in XML so that the choice of presentation layer can be expanded; this, too, would be handled in the Workflow Layer.
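As a small sketch of that XML-wrapping idea (the Invoice type and XmlExporter class are invented; XmlSerializer from the .NET Framework does the actual work):

// Workflow-layer helper: serializes a business object to XML before
// handing it to an external system or an alternative presentation layer.
using System.IO;
using System.Xml.Serialization;

public class Invoice
{
    public int Number;
    public decimal Total;
}

public class XmlExporter
{
    public string ToXml(Invoice invoice)
    {
        XmlSerializer serializer = new XmlSerializer(typeof(Invoice));
        using (StringWriter writer = new StringWriter())
        {
            serializer.Serialize(writer, invoice);
            return writer.ToString();
        }
    }
}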

Presentation Layer

This layer handles everything to do with the presentation of your system. That does not just include your Windows or web forms (your user interface), but also all the classes which help you present your data.
Ideally, the event method implementations within your form classes will contain nothing but calls to your presentation layer classes. The web or Windows forms, used for visual representation only, interface seamlessly with the presentation layer classes, which handle all translation between the business layer/workflow layer and the forms themselves. This means that any changes on the visual front can be implemented easily and cheaply.
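A sketch of what such an event handler might reduce to in an ASP.NET code-behind (CustomerPresenter, the controls, and their wiring are assumptions; in a real page the controls would be declared in the .aspx markup):

// Code-behind whose event handler is pure glue: it passes control values
// to a presentation-layer class and displays the outcome.
using System;
using System.Web.UI;
using System.Web.UI.WebControls;

public class CustomerPresenter
{
    public string SaveCustomer(string name)
    {
        // would call into the business rules layer here
        return "Saved " + name;
    }
}

public class CustomerPage : Page
{
    // controls assumed to be wired up from the .aspx markup
    protected TextBox NameBox;
    protected Label StatusLabel;

    private CustomerPresenter _presenter = new CustomerPresenter();

    protected void SaveButton_Click(object sender, EventArgs e)
    {
        StatusLabel.Text = _presenter.SaveCustomer(NameBox.Text);
    }
}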

Bottom Line

Can you see the pattern? Each section (or layer) of your application is a standalone entity which can communicate with layers above and below it. Each layer is designed independently and protected from the others by creating extensible interfaces. All changes can therefore be encapsulated within the layer and, if not major, will not necessarily affect layers above and below it. 
So how have we managed for so long with the 2-tier client/server model? Well, we haven't really managed at all. We've shoe-horned applications into architectures instead of architecting solutions in order to provide perfect fit. Why? Because solutions involving any degree of distribution were difficult to implement cost-effectively - that is until now. 
Although the exact implementation can vary in terms of the .NET Framework in that you have the choice of using Web Services, Enterprise Serviced Components, and HTTP or TCP/IP Remoting, the fact remains that we now have all the tools necessary to implement the above. If you are using or thinking of using the .NET platform and framework, you would be well advised to architect in several tiers. 
In the next article I will show you how to do just that.

About the author

Karim Hyatt is an application development architect and consultant based in Luxembourg, with over 20 years' experience in the business.
He started developing Windows applications with a special release of Windows 2.0 and quickly moved on to the version 3 SDK. Having been through several iterations of learning new APIs and frameworks such as MFC and ATL, he decided to get on board with .NET in the early days of the Beta 1 release.