You have heard all the hype about the Internet, and so none of it will be repeated here. However, you should consider a few points. The Internet is a big network (alright—a really big network) and, as a result, the information and data that you can access over it can be quite remote. This should have an impact on the way you design your applications. For example, you might get away with locking data in a database while a user browses it in a small, local desktop application, but this strategy will not be feasible for an application accessed over the Internet. Resource use impacts scalability much more for the Internet than for local applications.
Network bandwidth itself is also a scarce resource that should be used sparingly. You might notice variations in the performance of your own local network according to the time of day (networks always seem to slow down on a Friday afternoon just when you are trying to get everything done before the weekend), the applications that users in your company are running, and many other factors. But, no matter how variable the performance of your own local network is, the Internet is far more unpredictable. You are dependent on any number of servers routing your requests from your Web browser to the site you are trying to access, and the replies can get passed back along an equally tortuous route. The network protocols and data presentation mechanisms that underpin the Internet reflect the fact that networks can be (and at times most certainly will be) unreliable and that an application running on a server can be accessed by a user running one of many different Web browsers on one of many different operating systems.
A user gaining access to an application over the Internet by using a Web browser uses the Hypertext Transfer Protocol (HTTP) to communicate with the application. Applications are usually hosted by some sort of Web server that reads HTTP requests and determines which application should be used to respond to the request. The term “application” in this sense is a very loose term—the Web server might invoke an executable program to perform an action, or it might process the request itself by using its own internal logic or other means. However the request is processed, the Web server will send a response to the client, again by using HTTP. The content of an HTTP response is usually presented as a Hypertext Markup Language (HTML) page; this is the language that most browsers understand and know how to display.
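To make this concrete, a single exchange between a browser and a Web server might look like the following. This is a simplified sketch: the host name and page name are invented, and real requests and responses carry many additional headers.

```http
GET /products/list.aspx HTTP/1.1
Host: www.example.com
Accept: text/html

HTTP/1.1 200 OK
Content-Type: text/html

<html>
  <body>...product listing rendered as HTML...</body>
</html>
```

The browser sends the request at the top; the Web server hands it to the appropriate application and returns the HTML response underneath, which the browser then displays.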
HTTP is a stateless protocol. This means that each request (and each response) is a stand-alone packet of data; the server retains no memory of earlier exchanges. A typical exchange between a client and an application running on a Web server might involve several requests. For example, the user can display a page, enter data, click some buttons, and expect the display to change as a result, allowing the user to enter more data, and so on. Each request sent by the client to the server is separate from any other request sent by this client, and from requests sent by any other clients using the same server (and maybe running the same application) simultaneously. The problem is that a client request often requires some sort of context, or state.
For example, consider the following common scenario. A Web application allows the user to browse goods for sale. The user might want to buy several items, placing each one in a virtual shopping cart. A useful feature of such an application is the ability to display the current contents of the shopping cart.
Where should the contents of the shopping cart (the client's state) be held? If this information is held on the Web server, the Web server must be able to piece together the different HTTP requests and determine which requests come from one client and which come from others. This is feasible, but it requires additional processing to reconcile client requests against state information and, of course, some sort of database to persist that state information between client requests. A complication with this technique is that the Web server has no guarantee that, once the state information has been preserved, the client will ever submit another request that uses or removes it. If the Web server saved every bit of state information for every client that used it, it could need a very big database indeed!
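The usual way to reconcile separate requests on the server is to issue each client an identifier and key the stored state by that identifier. The following is a minimal, language-neutral sketch in Python; the function names are invented for illustration, and ASP.NET provides equivalent plumbing for you in the form of session state.

```python
import uuid

# Server-side store mapping each client's session ID to its shopping cart.
# In a real application this would live in a database or a state service
# so that it survives server restarts.
sessions = {}

def begin_session():
    """Issue a new session ID; the server returns it to the client."""
    session_id = str(uuid.uuid4())
    sessions[session_id] = []          # an empty shopping cart
    return session_id

def add_to_cart(session_id, item):
    """Handle a later, otherwise stand-alone request from the same client."""
    sessions[session_id].append(item)

def cart_contents(session_id):
    """Report the cart for this client without disturbing the stored state."""
    return list(sessions[session_id])
```

Each subsequent HTTP request carries the session ID back to the server (in a cookie or embedded in the URL), which is what lets the server tie a series of stand-alone requests together into one client's session.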
An alternative is to store state information on the client machine. The cookie mechanism was developed to allow Web servers to cache information in cookies (small files) on the client computer. One disadvantage of this approach is that the browser transmits the data in the cookie back over the Web as part of every HTTP request it sends to the site, so that the Web server can access it; the application must therefore keep cookies small. Perhaps the most significant drawback of cookies is that users can disable them and prevent the Web browser from storing them on their computers, which will cause any attempt to save state information to fail.
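In practice, the server asks the browser to store a cookie by including a Set-Cookie header in a response, and the browser then returns the cookie with every subsequent request to the same site. Roughly, and with an invented cookie name and value:

```http
HTTP/1.1 200 OK
Set-Cookie: CartID=8f3a90; Path=/

GET /cart/view.aspx HTTP/1.1
Host: www.example.com
Cookie: CartID=8f3a90
```

Note that the cookie travels with every matching request whether the application needs it or not, which is why keeping cookies small matters.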
From the discussion in the previous section, you can see that a framework for building and running Web applications has a number of items that it should address. It must do the following:
Support standard HTTP
Manage client state efficiently
Provide tools allowing for the easy development of Web applications
Generate applications that can be accessed from any browser that supports HTML
Be responsive and scalable
Microsoft originally developed the Active Server Pages model in response to many of these issues. Active Server Pages allowed developers to embed application code in HTML pages. A Web server such as Internet Information Services (IIS) could execute the application code and use it to generate an HTML response. However, Active Server Pages did have its problems: you had to write a lot of application code to do relatively simple things, such as display a page of data from a database; mixing application code and HTML caused readability and maintenance issues; and performance was not always what it could be, because the application code embedded in a page had to be interpreted every time the page was requested, even if it was the same code each time.
With the advent of the .NET platform, Microsoft updated the Active Server Pages framework and created ASP.NET. The main features of ASP.NET include the following:
A rationalized program model using Web forms that contain presentation logic and code files that separate out the business logic. You can write code in any of the supported .NET languages, including C#. ASP.NET Web forms are compiled and cached on the Web server to improve performance.
Server controls that support server-side events but are rendered as HTML, allowing them to operate correctly in any HTML-compliant browser. Microsoft has also extended many of the standard HTML controls, allowing you to manipulate them in your code.
Powerful Data controls for displaying, editing, and maintaining data from a database.
Options for caching client state using cookies on the client's computer, in a special service (the ASP.NET State service) on the Web server, or in a Microsoft SQL Server database. You can read and update this cached state easily in your code.
In the latest release of the .NET Framework supplied with Visual Studio 2005, Microsoft has further enhanced ASP.NET. A large number of improvements have been made to optimize throughput and Web site maintainability. Microsoft has also added the following features:
Enhanced page design and layout using Master Pages, Themes, and Web Parts. You can use Master Pages to quickly provide a common layout for all Web pages in an application. Themes help you implement a consistent look and feel across the Web site, ensuring that all controls appear in the same way if required. Web Parts enable you to create modular Web pages that users can customize to their own requirements. You will use Themes later in this chapter. Master Pages and Web Parts are outside the scope of this book, however.
New data source controls for binding data to Web pages. These new controls allow you to build applications that can display and edit data quickly and easily. The data source controls can operate with a variety of data sources, such as Microsoft SQL Server, Microsoft Access, XML files, Web services, and business objects that can return data sets. Using the data source controls provides you with a consistent mechanism for working with data, independent from the source of that data. You will make use of the data source controls in Chapter 27, “Securing a Web Site and Accessing Data with Web Forms.”
New and updated controls. For displaying and editing data, Microsoft now provides the GridView, DetailsView, and FormView controls. You can use the TreeView control to display hierarchical data, and you can use the SiteMapPath and Menu controls to assist in user navigation through your Web application. You will use the GridView control in Chapter 27.
Enhanced security features with built-in support for authenticating and authorizing users. You can easily grant permissions to users to allow them to access your Web application, validate users when they attempt to log in, and query user information so you know who is accessing your Web site. You can use the Login control to prompt users for their credentials and validate them, and the PasswordRecovery control to help users recover or reset their passwords. You will use these security controls in Chapter 27.
Improved Web site configuration and management using the ASP.NET Web Site Administration Tool. This tool provides wizards for configuring and securing ASP.NET Web applications. You will use the ASP.NET Web Site Administration Tool in Chapter 27.
In the remainder of this chapter, you will learn more about the structure of an ASP.NET application.