|
|
|
|
|
|
|
register allocation for the L/S will cancel out the advantage of the mainframe's R/M format. Table 3.1 illustrates the L/S instruction set evolution.
|
|
|
|
|
|
|
|
3.2.2 Format Distribution |
|
|
|
|
|
|
|
|
Instruction size and the relative frequency of instructions determine instruction bandwidth. Instruction bandwidth is measured as the number of bits that must be fetched from memory to support execution of the program in the absence of a cache.
|
|
|
|
|
|
|
|
The required instruction bandwidth is the product of the number of instructions executed (the dynamic count) and the average size of each instruction. Architectures that allow only one instruction size (the L/S architectures) are the simplest. Slightly more complex are the R/M architectures, which have two or three different instruction sizes. R+M instruction sets, especially as typified by VAX, provide the most variation in instruction size. Note that the expected instruction size in R+M is greater than in R/M (scientific). This is accounted for by the use of longer memory-to-memory formats in R+M, which also allow significantly fewer instructions to be executed per 100 HLL operations (Table 3.3). The design target data are shown in Table 3.2.
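
As a concrete illustration of this relationship, the short sketch below (in Python) computes instruction bandwidth as the product of dynamic instruction count and average instruction size for the three architecture classes. The counts and sizes are hypothetical round numbers chosen only to make the arithmetic visible; they are not the design target data of Table 3.2 or the instruction counts of Table 3.3.

    # Instruction bandwidth = dynamic instruction count x average instruction size.
    # The figures below are illustrative placeholders, not the Table 3.2 data.

    def instruction_bandwidth(dynamic_count, avg_size_bits):
        """Bits fetched from memory to execute the program (no cache assumed)."""
        return dynamic_count * avg_size_bits

    # Hypothetical profile for 100 HLL operations in each architecture class.
    profiles = {
        "L/S": {"dynamic_count": 140, "avg_size_bits": 32},  # single fixed size
        "R/M": {"dynamic_count": 110, "avg_size_bits": 36},  # two or three sizes
        "R+M": {"dynamic_count": 90,  "avg_size_bits": 48},  # widest size variation
    }

    for arch, p in profiles.items():
        bits = instruction_bandwidth(p["dynamic_count"], p["avg_size_bits"])
        print(f"{arch}: {p['dynamic_count']} instructions x "
              f"{p['avg_size_bits']} bits = {bits} bits")

With these placeholder figures the R+M case executes the fewest instructions but does not necessarily fetch the fewest bits; that trade-off between instruction count and instruction size is exactly what the tables quantify.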
|
|
|
|
|
|
|
|
3.2.3 Operation Set Distribution |
|
|
|
|
|
|
|
|
Table 3.3 shows the expected number of instructions executed (scientific environment) per 100 HLL operations executed using a common source workload. Tables 3.4 through 3.6 show the expected opcode distribution for the various architecture families for scientific applications (Table 3.4), commercial environments (Table 3.5), and systems (workstation) environments (Table 3.6). The tables are broken down by a classification scheme originally developed a number of years ago by Gibson [103, 105]. The breakdown of operations within each of these classes is described later. A primary concern is to understand the reasons for differences that occur among architectures given the same source program and level of compilation, and to understand the expected performance differences that arise because of these architectural differences.
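
As a minimal sketch of how such a distribution can be tabulated, the Python fragment below tallies a dynamic instruction trace into operation classes. The class names and the opcode-to-class mapping are illustrative stand-ins for a Gibson-style classification; they are not the exact classes or data of Tables 3.4 through 3.6.

    from collections import Counter

    # Illustrative mapping from opcode to operation class; the actual
    # Gibson-style classification used in the tables is more detailed.
    OP_CLASS = {
        "load": "move/load-store", "store": "move/load-store",
        "add": "fixed-point arith", "sub": "fixed-point arith",
        "fadd": "floating-point arith", "fmul": "floating-point arith",
        "branch": "branch", "cmp": "compare",
    }

    def opcode_distribution(trace):
        """Return the percentage of dynamically executed instructions per class."""
        counts = Counter(OP_CLASS.get(op, "other") for op in trace)
        total = sum(counts.values())
        return {cls: 100.0 * n / total for cls, n in counts.items()}

    # Example: a tiny hypothetical dynamic trace.
    trace = ["load", "fadd", "store", "branch", "load", "add", "cmp", "branch"]
    for cls, pct in opcode_distribution(trace).items():
        print(f"{cls:22s} {pct:5.1f}%")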
|
|
|
|
|
|
|
|
The commercial application environment is a very important segment of the processor marketplace and is generally overlooked by those primarily interested in the scientific environment. This area has long been the mainstay for mainframe processors managing large "farms" of disks for centralized database inventory control and transaction management. We break the data into two classes: (i) the "classical" commercial (Cobol) applications such as payroll, report generation, etc., and (ii) online transaction processing (written in C). This latter class generally behaves similarly to a systems environment and to many nonscientific workstation applications. The first class (Table 3.5) we generally refer to as the "commercial" environment, and the second class (Table 3.6) we refer to as the "systems" environment.
|
|
|
|
|
|
|
|
We provide the L/S data in Table 3.5 merely for completeness. This is what we would expect to happen if the L/S compilers, etc., executed the workload of our R/M architecture in the same way, simply mapping R/M (or R+M) constructs over into an L/S format. The result is an increase in dynamic
|
|
|
|
|