
The basic model underlying the CFA measures is one of measuring the "cost" of occupying memory during program execution. The S measure captures the size of the program representation, while the M measure counts the number of bits that must be moved to and from memory or other storage elements to execute the program. Ultimately, the three measures represent a form of space-time product that determines how well a processor has used its memory.
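
To make the accounting concrete, the sketch below (in Python) tallies S and M for a small, entirely hypothetical program: the instruction encodings, traffic figures, and the simple S*M figure of merit are invented for illustration, and the CFA study's exact accounting rules differ in detail.

    # Illustrative sketch of CFA-style accounting (hypothetical data).
    # Static code: (instruction, encoded size in bits)
    static_code = [("load", 32), ("add", 16), ("store", 32), ("branch", 16)]

    # Dynamic trace: (instruction, data bits moved to/from memory, executions)
    trace = [("load", 32, 1000), ("add", 0, 1000),
             ("store", 32, 500), ("branch", 0, 500)]

    # S measure: total size of the program representation, in bits.
    S = sum(bits for _, bits in static_code)

    # M measure: bits moved between processor and memory during execution,
    # counting both instruction fetches and data traffic.
    fetch_bits = dict(static_code)
    M = sum(n * (fetch_bits[op] + data) for op, data, n in trace)

    print(f"S = {S} bits, M = {M} bits, S*M = {S * M}")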
Studying Instruction Sets
Nothing illustrates the changing nature of computer design better than the CFA study and the resulting Department of Defense "standard architecture", the MIL-STD-1750A. The CFA study [196] was a careful analysis of architectures contemporary at the time (the early 1970s). Considering more than twenty parameters, the study spanned issues such as code density, instruction set features, memory hierarchies, support, and ease of "assembly-level" programming. The study data provided valuable input into the definition of the 1750A, which was meant to be a standard architecture for future generations of DOD computers.
By the time the 1750A was defined and released, it was already obsolete! It was a 16-bit architecture, and the industry had begun defining 32-bit machines and microprocessors. There is an important moral here: it is not sufficient simply to understand the past, however valuable such data may be. Only when historical analysis is coupled with realistic expectations of future technological developments do we have the basis for intelligent, long-lived computer design.

Some Comparative Results
Ideally, we would have some idea of how much improvement is possible in an architecture. Measures can be defined, based only on the high-level language (HLL) representation of a program, that characterize its static and dynamic behavior [93]. Under certain conditions, these measures bound certain aspects of program execution and can therefore serve as a baseline for comparing instruction sets.
These measures were developed to be architecture-independent: they determine the smallest representation of a high-level language program and the minimal amount of "work" required to execute it.
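
As a rough illustration of the "smallest representation" idea (a deliberate simplification for this sketch, not the actual formulation of [93]): if a program's text contains N occurrences drawn from d distinct operators and operands, each occurrence needs at least ceil(log2 d) bits, giving a size bound that is independent of any instruction set.

    import math
    import re

    # Rough lower bound on program size from the HLL text alone
    # (a simplification for illustration, not the measure of [93]).
    source = "c = a + b; d = c * a; e = d - b"

    # Tokenize into identifiers and operators.
    tokens = re.findall(r"[A-Za-z_]\w*|[+\-*/=]", source)
    distinct = set(tokens)

    # Every occurrence needs at least ceil(log2(#distinct symbols)) bits.
    bits_per_token = math.ceil(math.log2(len(distinct)))
    bound = bits_per_token * len(tokens)

    print(f"{len(tokens)} tokens, {len(distinct)} distinct -> >= {bound} bits")

Comparing a machine's actual code size and memory traffic against bounds of this kind yields the ratios discussed next.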
How well do typical machines compare to the HLL baseline? The answer depends upon several things, only one of which is the instruction set itself.
Huck [135, 136] conducted what is probably the most extensive analysis of familiar machines benchmarked against the HLL measures. His analysis is based on a series of scientific test programs; his data on static program size is presented in Table 2.6. The HP Precision Architecture (PA-RISC) is used as a representative L/S (load/store) machine in these analyses. The most notable aspect of Huck's measurements is that they were carefully controlled using the same

 