We normally measure the performance of a given machine for running AIPS by measuring the time taken to run the large DDT test and comparing the resulting time to that for a reference machine. This gives a single figure that we call an AIPSmark. The exact definition of an AIPSmark is listed, along with measurements for several machines, on the AIPS benchmark page.
Workstation vendors commonly quote performance figures in SPECmarks. The SPEC benchmark suite is defined by the Standard Performance Evaluation Corporation, and test results may be viewed on its web site. It comprises a collection of real-world programs that have been divided into integer-dominated and floating-point-dominated categories. The results of the current suite of SPEC benchmarks are summarized as the SPECint(95) and SPECfp(95) figures. An extensive list of SPEC results was maintained on the Web by John DiMarco at the University of Toronto until December 2000. The table may still be fetched by anonymous ftp.
Since both AIPSmarks and SPECmarks measure the speed at which a machine runs a selected set of real programs, we should expect a fairly simple relation between the two. Using a set of 6 machines for which I could find AIPSmark figures and for which I had enough information to locate SPECmark results without intensive research, I find that the following relation gives a fair ball-park estimate of the AIPSmark that can be expected on a machine with known SPEC(95) benchmarks:
AIPSmark(93) = 0.898 * SPECfp(95) + 0.110 * SPECint(95) - 1.665
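The relation is easy to evaluate directly. The short Python sketch below (the function name is my own, not part of AIPS or SPEC) computes the predicted AIPSmark(93) from a machine's published SPEC(95) figures:

```python
def predicted_aipsmark(specfp95, specint95):
    """Ball-park AIPSmark(93) estimate from SPEC(95) figures.

    Coefficients are those of the regression quoted above.
    """
    return 0.898 * specfp95 + 0.110 * specint95 - 1.665

# 166 MHz Pentium (Intel XXpress): SPECfp(95) = 3.37, SPECint(95) = 4.76
print(round(predicted_aipsmark(3.37, 4.76), 2))  # 1.88
```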
Obviously, the relation must become non-linear at the lower end, since a negative predicted AIPSmark would imply that a slow machine runs the large DDT in reverse, but we do see the strong relationship between SPECfp(95) and AIPSmark(93) that we would expect. A more extensive examination of the DDT memos might produce a more accurate relationship and establish error bounds on the regression coefficients.
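Redoing the fit, or attaching error bounds to it, requires nothing more than ordinary least squares. The sketch below is illustrative only: since the six machines' actual figures are not reproduced here, it generates synthetic data points from the published coefficients and recovers them with NumPy.

```python
import numpy as np

# Illustration only: the real (SPECint, SPECfp, AIPSmark) triples for the
# six machines are not reproduced here, so we synthesize exact points
# from the published relation and recover its coefficients.
rng = np.random.default_rng(0)
specint = rng.uniform(2.0, 12.0, size=6)
specfp = rng.uniform(2.0, 12.0, size=6)
aipsmark = 0.898 * specfp + 0.110 * specint - 1.665

# Design matrix: one row [SPECfp, SPECint, 1] per machine.
A = np.column_stack([specfp, specint, np.ones_like(specfp)])
coef, *_ = np.linalg.lstsq(A, aipsmark, rcond=None)
print(coef)  # approximately [0.898, 0.110, -1.665]
```

With real measurements in place of the synthetic arrays, the residuals returned by `lstsq` would give the error bounds mentioned above.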
A number of people have speculated that the disappointing AIPSmarks obtained on Pentium and Pentium Pro systems are due to poor AIPS performance on these systems. We can check whether this is so using the relationship given above (note that the Gateway P90 system was the only Pentium system used to derive the relationship).
For a 166 MHz Pentium (Intel XXpress):

    SPECint(95)               4.76
    SPECfp(95)                3.37
    Predicted AIPSmark(93)    1.88
The actual measured AIPSmark(93) on a Gateway 166 MHz Pentium was 1.94, so in this case we get almost exactly what we expect. Any perception that AIPS is slow relative to other programs running on a Pentium is probably due to the Pentium's poor floating-point performance relative to its integer performance (most RISC machines have a SPECfp(95) at least as large as their SPECint(95)).
For a 200 MHz Pentium Pro (Intel Alder):

    SPECint(95)               8.09
    SPECfp(95)                6.75
    Predicted AIPSmark(93)    5.28
The actual figure for a 200 MHz Pentium Pro machine is 3.30, a good deal lower than we would expect from the SPECmarks. Using the baseline SPECfp(95), in which the same compiler options are used for every program in the test suite rather than being tuned for each program individually, reduces the predicted AIPSmark(93) to 4.6. This might indicate that AIPS is being penalized by the lack of highly-optimizing compilers for the Pentium Pro under Linux. It is also possible that the I/O system on the machine tested was not matched to its CPU performance. More detailed information from a DDT test on a Pentium Pro machine would be helpful.
Last updated on 24 March 1997.
Chris Flatters