2.1.7 Address Calculation without Access

Calculating the address of an item of data is not the same thing as accessing the item of data. The address of an item is its storage position, not its value. To access the item means to read its current value from memory or to store a value in its memory location.
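In C, for example, the distinction can be made concrete. The following fragment is an illustrative sketch only: the expression &a[i] calculates the address of the ith element without touching its contents, while a[i] used in an assignment is an access that reads or stores a value at that address.

    #include <stdio.h>

    int main(void)
    {
        int a[100];
        int i = 42;

        int *p = &a[i];   /* address calculation: locates a[i]; no value is read or written */
        a[i] = 7;         /* access (store): a value is placed in that location             */
        int v = *p;       /* access (read): the current value is fetched from memory        */

        printf("a[%d] is at %p and holds %d\n", i, (void *)p, v);
        return 0;
    }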

Many authors associate the process of selection with the idea of access to an item, rather than with the calculation of the storage address of the item. They say that the ith item of a structure can be selected if, given the value of i, the program can access the item in a time that is independent of i. Such a structure is called a random access structure. This definition served well for early computers, in which arrays could reside in a single homogeneous memory. However, current practice includes radically different memory architectures.

Modern computers have memory hierarchies, that is, different kinds of memory, whose access times vary by one or two orders of magnitude from one level of the hierarchy to the next. It may well be the case that some portion of an array is stored in fast memory, such as high-speed cache memory, and some portion in slower central memory. Then the access time for an array element would depend on the index of the element (that is, its position in the sequence of the array), but the time required to compute the storage address would still be independent of the value of the index. Consequently, the process of address calculation should be thought of as separate from the process of access to the data. Selection thus denotes address calculation without access. Although it is important to grasp this distinction between address calculation and actual access, it is generally ignored in the explanations here, as in other texts, for convenience of exposition.
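As a sketch only (assuming a conventional C implementation), the fragment below separates the two steps for an array of doubles: the storage address of a[i] is obtained from the base address by a fixed amount of arithmetic, one multiplication and one addition, whatever the value of i, while the dereference that follows is the access, and only its cost depends on which level of the memory hierarchy currently holds the element.

    #include <stdio.h>

    int main(void)
    {
        double a[1000];
        size_t i = 750;
        a[i] = 3.14;

        /* Address calculation: base + i * (element size). The work done here is
           the same no matter how large i is or where the element resides.       */
        double *addr = (double *)((char *)a + i * sizeof a[0]);

        /* Access: only this dereference touches memory, and its time depends on
           whether the element is in cache or in slower central memory.          */
        double v = *addr;

        printf("a[%zu] is at %p and holds %f\n", i, (void *)addr, v);
        return 0;
    }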

Our interest in the effect of data structures on program efficiency concerns the timing of various algorithms. Because access times may vary, depending on features of the physical computer that are completely hidden from the language, time will be measured here as if each program were executed on a computer that has a single homogeneous central memory.

The special techniques necessary to write efficient programs for computers with hierarchical memories are not within the scope of this text. Some computers also allow exploitation of whatever parallelism is inherent in an algorithm, which means that more than one operation may be carried out at a time. This text likewise largely ignores the possibility of parallel processing, although it is currently an active topic of computer research.

If this array of material leaves you in disarray, keep going. Pointers will point the way!