Chapter 5. Presentation Tier Scalability
Many developers believe design patterns
and scalability do not go hand in hand. They argue that patterns add
layers to an application, so the server must perform more operations
and use more memory to handle each request. The extra operations slow
down response times, and the added memory use means each server can
support fewer clients. By itself, this is a fair assessment,
and if no two requests were alike, it might be the end of the story.
In an enterprise application, however, many clients need to access
similar data. On a site that publishes stock quotes, for example, the
server may respond to thousands of requests a minute for the same
stock. If the price of the stock changes every five minutes, it would
be massively inefficient to contact the stock market for each
request. Even in an online bank, where every user wants to view
personal data, resources such as database connections do not need to
be recreated for every request.
Often, we can sacrifice some speed up front for better performance in
the average case. While the first request for a particular stock
quote or the first connection to a particular database might require
a lot of work, subsequent requests will be much faster. It is fair to
say the system's scalability will increase: we can
support more requests in the same amount of time.
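This trade-off can be sketched with a minimal time-limited cache. The names here (`QuoteCache`, the TTL parameter, the `Supplier`-based fetch) are illustrative assumptions, not the chapter's actual classes: the first request for a key pays the cost of fetching the value, and later requests within the time-to-live reuse it.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Hypothetical sketch: a cache whose entries expire after a fixed
// time-to-live, so stale data (e.g. an old stock price) is refetched.
public class QuoteCache {
    private static class Entry {
        final Object value;
        final long expiresAt;
        Entry(Object value, long expiresAt) {
            this.value = value;
            this.expiresAt = expiresAt;
        }
    }

    private final Map<String, Entry> entries = new ConcurrentHashMap<>();
    private final long ttlMillis;

    public QuoteCache(long ttlMillis) {
        this.ttlMillis = ttlMillis;
    }

    public Object get(String key, Supplier<Object> fetch) {
        long now = System.currentTimeMillis();
        Entry e = entries.get(key);
        if (e == null || e.expiresAt <= now) {
            // Cache miss or stale entry: perform the expensive fetch once.
            e = new Entry(fetch.get(), now + ttlMillis);
            entries.put(key, e);
        }
        return e.value;
    }
}
```

A second request for the same key within the TTL never invokes the supplier, which is where the scalability gain comes from.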
In this chapter, we look at three patterns that increase the
scalability of the presentation tier using variations of this
concept:
- Asynchronous Page: Shows how to cache data, such as stock prices, and use it to generate dynamic pages.
- Caching Filter: Can be used to cache entire dynamic pages after they are generated.
- Resource Pool: Describes how to create a "pool" of large or expensive objects that can be loaned out as needed, saving instantiation costs.
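The last idea can be sketched with a minimal generic pool. This is an illustrative assumption, not the pattern's full treatment: expensive objects are created once up front, loaned out to callers, and returned for reuse rather than rebuilt per request.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.function.Supplier;

// Hypothetical sketch of a fixed-size resource pool. Real pools
// (e.g. for database connections) also handle validation and timeouts.
public class ResourcePool<T> {
    private final BlockingQueue<T> available;

    public ResourcePool(int size, Supplier<T> factory) {
        available = new LinkedBlockingQueue<>();
        for (int i = 0; i < size; i++) {
            available.add(factory.get()); // pay the creation cost once, up front
        }
    }

    // Block until a resource is free, then loan it out.
    public T acquire() throws InterruptedException {
        return available.take();
    }

    // Callers must return the resource when they are done with it.
    public void release(T resource) {
        available.add(resource);
    }
}
```

Because `acquire` blocks when the pool is empty, the pool size also acts as a simple throttle on concurrent use of the underlying resource.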