Determining the System Goals
Defining (or discovering) the goal and scope of a system ought to be a simple process. Sometimes, if you're really lucky, it is. Sometimes a workable definition of the project's goal and scope is part of your initial brief. More often, defining them is a complicated process combining formal analysis techniques, design trade-offs, and more than a little diplomacy.
The goal of a development project is usually the most important factor in determining both the system's scope and the design criteria by which it will be evaluated. The goal is, after all, the reason the system is being implemented at all. Obviously, you won't be able to make informed decisions regarding any other aspect of the project until you have a clear understanding of what you're setting out to achieve.
Do not confuse "goal" and "brief." Most projects start with a description of the system to be developed and a budget. Your brief is to "automate the current order-taking system," and you have one year and half a million dollars to do it. But "automating the current order-taking system" is a brief, not a goal. The goal is the reason, or more likely the set of reasons, the project is being undertaken.
In fact, to talk about determining the system's "goal" at all is somewhat misleading. The vast majority of systems have many goals, both tangible and intangible, and discovering them might require some detective work. Why is the order-taking system being automated? Is it to speed up the process? Improve accuracy? Reduce costs? Position the company in the minds of the users? Make the manager look good? The system goals probably include all these reasons and half a dozen others.
Now, I'm not suggesting that you start invading people's privacy or requesting confidential company information, and some of these goals come under the general heading of "none of your business." You don't need to know that the department head is jumping on the Internet bandwagon out of fear. You do need to know that in order to pay for itself the system needs to reduce the average time to process an order from 10 minutes to 2 minutes.
You might also need to understand those vague phrases sales and marketing people use, like "product positioning" and "managing user expectations." Fortunately, this doesn't require going back to school; you can simply ask your client. Everybody means something a little different by these expressions, so you'll have to ask anyway.
The problem with intangible goals like these is that they're often difficult to translate into measurable design criteria. Sometimes a little judicious digging can turn an intangible goal into a tangible one. For example, the goal "assist in managing user expectations" usually indicates a customer service problem, which will either translate easily into measurable criteria ("we need to tell the customer how long it will take to fill the order") or needs to be discarded.
If your client has a problem meeting delivery schedules and is reasonably confident that this is due to salespeople agreeing to impossible delivery dates in order to make a sale, then the order-taking system has a direct bearing on the goal of managing user expectations. You could, for example, impose a constraint on the minimum time between the date the order is taken and the date delivery is promised. If, however, the problems are due to quality control in the manufacturing process, an order-taking system can do nothing to help and it's incumbent on you to point this out to your client. That doesn't mean the project isn't worth undertaking; however, everyone involved needs to understand that the order-taking system can't directly impact the stated goal.
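A constraint like this is straightforward to enforce at order-entry time. Here's a minimal sketch; the function name, the constant, and the ten-day minimum are all invented for the example, not taken from any real system:

```python
from datetime import date, timedelta

# Hypothetical business rule: the promised delivery date must be at
# least MIN_LEAD_DAYS after the order date, so salespeople can't
# agree to impossible delivery dates just to close a sale.
MIN_LEAD_DAYS = 10

def validate_promised_date(order_date: date, promised_date: date) -> bool:
    """Return True if the promised delivery date satisfies the
    minimum-lead-time constraint."""
    return promised_date - order_date >= timedelta(days=MIN_LEAD_DAYS)
```

An order-entry screen would call a check like this before accepting the order, rejecting (or flagging for approval) any promise that violates the minimum lead time.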
Not all intangible goals succumb so easily to translation, of course. "Positioning" is one of those. You might be asked to establish a Web site to "position the company on the cutting edge of technology." Most goals of this type either don't actually mean anything, or the mere fact of the system's existence will prove sufficient to meet them. It's easy enough to determine whether this is the case; just ask the client how you'll know whether you've achieved the goal. Chances are good that if the other goals regarding performance and functionality are met, the intangible goal will be met as well. (Chances are also good that you'll get to watch a marketing person shuck and jive. I just love that, don't you?)
Another kind of statement that needs to be examined carefully is a goal that is stated in general terms like "improve" and "reduce." "Increase efficiency" and "improve productivity" are very common, and very vague, goals. How are you going to know if you've achieved them? A client of mine once shared a wonderful (and almost certainly apocryphal) story about the importance of measurability in job requirements. (Design criteria and job requirements are, after all, similar in purpose.) The story goes that, as a young salesman, he was told that part of his job was to "promote our goods and services." So he opened the office door and yelled, "Everybody should buy our stuff because it's really neat." This is not, I'm sure, what his manager had in mind.
During the initial analysis, unless you value your sense of humor over your bank balance, you'll have to determine the degree to which some general improvement is required. Increase efficiency by how much? Improve productivity from what to what? But there's another trap to look out for here. It's all very well to say that goals should be directly measurable, and "reduce the time required to process an invoice from 10 minutes to 2 minutes" is clearly preferable to "increase efficiency." But the first statement assumes that you know how long an invoice currently takes to process, and finding that out can be an expensive exercise.
The cost of research can easily exceed the cost of the mistake it's meant to prevent. In our invoicing example, it probably isn't necessary to send in a team of analysts with stopwatches to determine precisely how long it takes to process an invoice, although I've seen it done. Some years ago, I was involved in a project where a government department spent upwards of $50,000 determining whether it could justify the purchase of a single copy of off-the-shelf graphing software with a recommended retail price of $2,500. (And I confess, I pointed out the situation precisely once, and then I smiled all the way to the bank. I consider being paid to do something stupid by a government agency to be a unique form of tax refund.)
The solution to translating these general requirements into tangible design criteria lies in a sense of scale and the concept of "good enough." If you're betting the company or someone's career on the implementation of a new computer system, you'd better be very, very sure of what you're doing. If you're building a little system that's not going to have a major effect on the company's bottom line, you can afford to be more casual. To return to our example, it's probably "good enough" to know that the average person can currently process about 25 invoices a day. There's no need to perform a detailed study because the department manager can almost certainly tell you this; it's why you were called in. I'm sure that over-extensive research is how the Department of Defense winds up ordering $400 screwdrivers. (And before you start sending me hate mail, I had absolutely nothing to do with that one.)
Why a certain improvement needs to be made is always worth asking as well. There might be, for example, a processing backlog, and the manager is faced with either speeding up the process or hiring additional staff. If you know about the backlog and the projected increases in sales (as in our invoicing example), you can determine how much of an improvement is actually required.
The figure you arrive at might be different from the one your client initially gives you. Obviously, if your client wants to reduce the processing time by half, you should do everything you can to achieve that. But if you know the system actually only needs to achieve a 25-percent reduction, you have room to negotiate if you need to make trade-offs between raw processing speed and ease of use or system reliability.
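The arithmetic behind this kind of estimate is simple. Here's a sketch with made-up figures; the staff size, processing rate, and projected volume are all assumptions chosen for illustration, and in practice they would come from the department manager and the sales projections:

```python
# Hypothetical figures for the invoicing example. The point is that
# the required improvement can be derived from the backlog and the
# sales projections rather than taken on faith from the client.
staff = 40                 # clerks available (assumed)
current_rate = 25          # invoices per clerk per day (assumed)
projected_volume = 1250    # invoices per day after projected growth (assumed)

current_capacity = staff * current_rate    # 1000 invoices/day today
required_rate = projected_volume / staff   # 31.25 invoices/clerk/day needed

# A clerk's throughput is inversely proportional to the time spent on
# each invoice, so the necessary reduction in per-invoice time is:
time_reduction = 1 - current_rate / required_rate   # 0.2, i.e. 20%
```

With these numbers, a 20-percent reduction in processing time absorbs the projected growth, even if the client's opening position is "cut the time in half." That gap is your negotiating room.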
You'll often hear computer people remark that their lives would be easier if their clients knew what they wanted. Your clients do know what they want; they just don't know how to translate that need into a computer system. That's your job. Your clients are not going to knowingly make unreasonable demands, mislead you, or blame you for things that aren't your fault. But part of your job is to help your clients decide what the proposed database system can and cannot do to assist them. It's unreasonable to expect them to know the capabilities and limitations of the technology up front.
A variation on this theme is the client who presents you with a stack of screen layouts and sample reports that might or might not be possible to implement. This is a case of being told the solution rather than the problem. It takes a certain amount of tact to examine the reasoning behind the "system design" without implying that the individual who created it is stupid, incompetent, or simply in the wrong line of work.
All I can suggest for these situations is that you test the waters. If the client seems resistant to your questions, try saying something along the lines of "I'll be better able to help you if I understand your business environment." If this approach isn't getting you anywhere, you're going to have to either implement the system as presented or walk away from the project (which I realize is not always possible). The best you can hope for is to review the design with which you have been presented, and if you find any fundamental flaws, discuss them in terms of "I can't do this, but I could do that or that. Which would best meet your needs?"
The process of eliciting goals described earlier is not specific to database systems. The primary way in which database systems differ from other computer systems is that they have, almost as a by-product, a body of data about the organization. This body of data, whether it's a list of subscribers or a set of invoices, might have intrinsic value to the organization above and beyond the work processes that it directly supports.
Of course, I'm not suggesting that every project be looked at as an opportunity to create an enterprise-wide data repository. I'm suggesting that the data that forms a part of the system be examined for its value to other areas of the organization or other processes within the same area. It might be the case that the data your system will accumulate could be easily made available to some other area, although frankly, these opportunities occur far less often than some might think.
For example, if the invoicing system maintains a list of customers and the Sales department needs a mailing list for sending out newsletters, it might be appropriate to make the list available to them. If nothing else, sharing the list would save some poor clerical worker from the task of typing all those names and addresses again. But please note that "making the data available" is absolutely not the same as incorporating the mailing list functionality into the invoicing system (there be dragons).
The only reason I suggest considering the issue at all, given that it is fraught with the dangers of megalomania, is that it might be appropriate to make minor changes to the data structure to accommodate other uses. We'll look at this possibility in detail in Chapter 12, but let me give you a simple example here.
Remember that when I talked about atomic values I said that an address could be, within the semantics of a given system, simply a blob that gets printed on a mailing label. I recommended that in this case you consider treating the address as a single attribute. If, however, the data might be useful to another area, but only if the attributes are separated, then it is reasonable to add that small extra overhead to the current system in order to avoid duplicating the data entry elsewhere.
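The two choices look something like this. The table and column names are invented for the example, and SQLite merely stands in for whatever DBMS the project actually uses:

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Within the invoicing system's own semantics, the address is atomic:
# a single blob that gets printed on a mailing label.
con.execute("""
    CREATE TABLE customer_v1 (
        id      INTEGER PRIMARY KEY,
        name    TEXT NOT NULL,
        address TEXT NOT NULL    -- one opaque blob
    )
""")

# If another area (say, Sales) can use the data only when the
# components are separated, the small extra overhead of splitting
# the attribute may be worth carrying in the current system.
con.execute("""
    CREATE TABLE customer_v2 (
        id       INTEGER PRIMARY KEY,
        name     TEXT NOT NULL,
        street   TEXT NOT NULL,
        city     TEXT NOT NULL,
        region   TEXT,
        postcode TEXT
    )
""")
```

The first version is simpler to enter and maintain; the second lets Sales filter by city or postcode without re-keying anything. The design question is whether the second version's extra data-entry overhead is genuinely small and the sharing genuinely feasible.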
Be careful, however, that the extra overhead really is small and that sharing the data really is feasible. I have seen (and to my shame, even implemented) systems that require entire categories of information to be entered that have no direct bearing on the process at hand, simply because they might be useful to someone at some point in time. This is amazingly easy to do by accident, so be sure that when you talk about "planning for future growth" you don't actually mean "adding unnecessary overhead."
Once you've established your initial set of goals, you can move to the next activities in the analysis process: establishing design criteria and project scope. Do not make the mistake, however, of assuming that goals are stable. You must always be prepared to reevaluate the system goals during later stages of the project.
For any project that lasts more than a few weeks, the business requirements are subject to change. Sales can vary wildly from projections; company mergers can mean a surplus of staff rather than a shortage; any number of external events can require reevaluation of the project's goals. Check in with the client now and again during a long project to ensure that nothing has radically changed.
Even with projects of relatively short duration, you might discover during later stages of the project that some of the goals were either inappropriate or unattainable. One of the major problems with the classical waterfall model, you'll remember, is that it assumes that you can know everything you need to know when you need to know it. In reality, you'll be expanding your understanding of what the system needs to do throughout the project. New understanding will often require a reevaluation of the system goals, even if that means only reviewing them and saying, "Yep, these are all still valid."