The high cost and difficulty of realistic benchmarks encourage most computer purchasers to rely on informal performance validation methods during acquisitions. A variety of performance data is available: rating charts, references from current users, standardized benchmark results, modeling results, and so on. There is likewise a variety of structured and unstructured approaches to evaluating these data, and experience shows that virtually all of them are flawed. This paper summarizes the experiences of several federal agencies that have experimented with informal performance validation methods, points out the problems encountered, and suggests structured methods for addressing them.
McGalliard, J. (1993). 8 1/2. In 19th International Computer Measurement Group Conference, CMG 1993 (pp. 586–595). Computer Measurement Group Inc. https://doi.org/10.36019/9780813567501-004