Developing the Design Criteria
After you've gained a reasonable understanding of the tangible and intangible goals of the project, you can begin developing the design criteria. Of course, things are never quite that neatly sequential, and in practice you will have been accumulating criteria during the process of goal definition. But for the sake of discussion, let's assume that you've got a list of project goals and now you need to prepare a list of criteria by which the success or failure of the system will be judged.
Design criteria are closely related to project goals. If the project goals tell you where you're going, the design criteria tell you whether you've arrived. All of the design criteria for a system should directly support one or more of the system's goals. If you find yourself with important criteria that don't seem to match up with a goal, it's almost certainly an indication that your list of goals is incomplete.
Matching up each criterion to the goal it supports is not strictly necessary, but it's a useful exercise, even for experienced analysts. One of the greatest dangers with any project is that you don't know what you don't know. This is particularly the case if you're an outside consultant who might not have much (or any) experience with the activities of the organization. Mismatched goals and criteria are a good indication that you haven't yet understood everything you need to understand.
Design criteria generally take one of three forms: directly measurable criteria, environmental criteria, and general design strategies.
The specific categories aren't dreadfully important, but the ones I've listed are a useful indication of how well you understand the system. Most criteria should fall into the first or second category. If you only have a list of design strategies, I'd be willing to bet that you don't have a clear enough understanding of the problem you're setting out to solve.
Whatever forms the criteria take, when you have met them, stop. Your project is finished. Go have a party. This injunction isn't quite as trivial as it might sound. Let me give you a common example. You're optimizing a particular piece of code. To meet the design criterion, the function must calculate a certain value in less than 10 seconds. You've got it down to 9, but you're sure that if you try this other approach you just thought of you'll cut that time in half. Don't. Or if you must, do it on your own time. You must not continue working after all the project criteria have been met, or the project will never be finished.
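The "stop when the criterion is met" rule can be made concrete by expressing the criterion as a simple pass/fail check rather than an open-ended target. This is a minimal sketch; `calculate_value` and the 10-second limit are hypothetical stand-ins for whatever function and threshold your own criterion names.

```python
import time

# Hypothetical stand-in for the expensive calculation being optimized.
def calculate_value():
    time.sleep(0.1)  # simulates the real work
    return 42

# The design criterion, stated as a hard limit rather than "as fast as possible".
TIME_LIMIT_SECONDS = 10.0

start = time.perf_counter()
result = calculate_value()
elapsed = time.perf_counter() - start

# Once this check passes, the criterion is met -- further tuning is off the clock.
criterion_met = elapsed < TIME_LIMIT_SECONDS
print(f"elapsed={elapsed:.2f}s, criterion met: {criterion_met}")
```

Framing the criterion as a boolean check makes the finish line unambiguous: either the test passes and you're done, or it fails and you keep working.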
The only possible exception to this rule is with research and development (R&D) projects, but then R&D projects have different goals and hence different criteria. Rather than "complete the calculation in less than 10 seconds," the design criterion for an R&D project is more likely to be "determine the optimum method for calculating whatever." Since you can never be certain that any given solution is optimum, you can never meet the criterion; so you can explore forever or until you run out of money, whichever comes first.
It's important not to accidentally commit yourself to particular designs or architectures when you're developing design criteria. You might be certain that you're going to use Microsoft Transaction Server to support system scalability, but that's an architectural decision, not a design criterion. The design criterion is that the system must be scalable to support x users.
When in doubt, remember that a design criterion is used to determine whether a project has been successfully completed. In this case, ask yourself, "If we're using Microsoft Transaction Server, is our system finished?" Maybe, but the fact that you're using Microsoft Transaction Server isn't going to tell you that. However, the answer to "If we're scalable to x users, are we done?" is yes, provided, of course, that you've met the other design criteria as well.
Directly Measurable Criteria
I've already discussed the importance of identifying objectively measurable goals. If you've been successful in this, many of your design criteria will follow as a matter of course. If the goal is to reduce processing time by 50 percent and the current processing time is 10 minutes, then the design criterion is, obviously, "Enable processing to be completed in 5 minutes or less."
It can sometimes be difficult to distinguish measurable goals from directly measurable criteria. I don't know that the issue is terribly important. The specification police will not arrest you for listing something as both a goal and a criterion. They might give you a warning, however, if you fail to support a measurable goal with one or more design criteria.
Be careful not to micromanage your design criteria. It might be the case that for a specific process to be completed in 1 minute, a given query must execute in less than 10 seconds. But that's really an implementation issue, and you don't yet know enough about the project to make implementation decisions.
Environmental Criteria
The majority of environmental constraints represent the existing operating environment and any legacy systems with which you must interact. It's relatively rare to have a clean slate on which to specify a system. In most cases, your clients will have a hardware and software environment in which the new system will be expected to operate.
The other main source of environmental criteria is specific to database systems: the volume of data to be handled. One of my early disgraces as an independent consultant was preparing a quote for a sales-tracking system for a regional branch of a computer hardware wholesaler. After discussing their requirements, I presented them with a quote for a little system to be written in Microsoft Access 2.0, only to be told that they wanted to track sales for the entire company, with some 500 regional branches and tens of millions of dollars in sales, obviously far beyond the capabilities of Access. I had assumed they only needed to track that branch's sales. Oops. (Needless to say, I didn't get the job.)
In looking at data volume, you need to examine two issues. One is the sheer volume of data, and the other is its growth pattern. A library might have millions of volumes but add only a few records a day. An ordering system might add hundreds of records daily but archive the records after the sales are complete, so the absolute number of records is never more than a few thousand. Obviously, these two patterns require different design strategies.
Supporting data volume is one situation for which you're justified in over-designing the system. As a general rule, I'd say plan for at least 10 percent more capacity than the largest figure provided by your client, and round up. For smaller systems, I'd increase that to between 20 percent and 25 percent extra capacity.
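The headroom rule above is straightforward arithmetic, and it can help to see it written out. This sketch applies the 10 percent figure for large systems and the top of the 20 to 25 percent range for small ones; the threshold separating "small" from "large" systems is my own assumption for illustration, not a figure from the text.

```python
import math

def planned_capacity(client_estimate, small_system_threshold=10_000):
    """Add headroom to the largest volume figure the client provides.

    Small systems get 25% extra capacity, larger ones 10%, and the
    result is rounded up to the next whole unit. The threshold
    separating small from large systems is an assumed value.
    """
    pct = 25 if client_estimate < small_system_threshold else 10
    # Integer percentages avoid floating-point surprises when rounding up.
    return math.ceil(client_estimate * (100 + pct) / 100)

print(planned_capacity(5_000))    # small system: 25% headroom -> 6250
print(planned_capacity(100_000))  # large system: 10% headroom -> 110000
```

The point of capturing the rule as a function is that the headroom figure becomes an explicit, reviewable part of the criterion rather than a number improvised during implementation.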
Absolute data volume is less of an issue for larger-scale architectures. A well-designed client/server system can support 100,000 records as easily as it can support 10,000. But a LAN-based Access system originally designed to support a few thousand records using Jet will probably not scale well to a few million, no matter how well it is designed.
The other primary source of environmental criteria is the number of users the system needs to support. Most systems have more than one category of user, and you'll need to define the requirements for each. For example, the order-processing system will have users entering orders, obviously. It will probably have a second group of users inquiring about the status of orders and perhaps updating the data, and a third group producing reports from the entire database. Each of these groups needs different support from the system, so each should be specified in separate design criteria.
You must also distinguish between users who are connected to the system and those who are actually using the system. The Jet database engine, for example, has a limit of 255 users connected to the database at any one time. This means that 255 people can have the database open simultaneously. It doesn't mean that 255 people can update the database simultaneously.
General Design Strategies
Some project goals don't translate easily to simple numeric measurements. A goal such as "improve data entry accuracy," for example, is extremely difficult to quantify, since this is a situation in which the cost of determining how many errors are made will probably exceed the benefit of having a number by which to measure your success.
You shouldn't ignore these kinds of goals; you can state them in terms of design strategies rather than measurable criteria. In this case, the design criterion might be "improve data entry accuracy by allowing users to select from lists wherever feasible" or "reduce the incidence of credit exceptions by implementing appropriate credit checks prior to accepting the invoice."
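The "select from lists wherever feasible" strategy usually comes down to validating entries against a known set of values. This is a minimal sketch of that idea; the region names and the `validate_region` function are hypothetical, and in a real system the valid values would come from a lookup table in the database rather than a hard-coded set.

```python
# Hypothetical list of valid values; a real system would draw these
# from a lookup table rather than hard-coding them.
VALID_REGIONS = {"North", "South", "East", "West"}

def validate_region(entry):
    """Accept only values drawn from the known list, supporting the
    'select from lists wherever feasible' strategy."""
    if entry not in VALID_REGIONS:
        raise ValueError(f"{entry!r} is not a recognized region")
    return entry

print(validate_region("North"))  # accepted
```

Whether the list appears as a combo box, a picklist, or server-side validation is an implementation decision deferred to later stages; the criterion only commits you to constraining free-form entry where it's feasible to do so.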
Just as with the measurable design criteria, you should not be too specific here and accidentally make implementation decisions. You're not designing the system; you're only establishing criteria by which its success will be judged. The previous examples talk about doing things "wherever feasible" and performing "appropriate checks"; the specifics are deferred to later stages in design when the system requirements are better understood.
But you should also avoid "motherhood" statements. "The system must be user friendly" sounds admirable; after all, nobody wants to work with a user-antagonistic system. But it's not a useful criterion. Determining whether a system conforms to the criterion "The system will comply with the Windows Interface Guidelines for Software Design" is possible. Whether a system is user friendly, however, is too often a matter of debate. You want your design criteria to reduce contention, not increase it.