8 1/2

Abstract

The high cost and difficulty of realistic benchmarks encourage most computer purchasers to use informal performance validation methods during acquisitions. A variety of performance data is available: rating charts, references from current users, standardized benchmark results, modeling, and so on. There are also a variety of structured and unstructured approaches to evaluating this data. Experience shows that virtually all of this data is flawed. This paper summarizes the experiences of several federal agencies that have experimented with informal performance validation methods, points out the problems they encountered, and suggests structured methods for responding to those problems.

Cite

APA

McGalliard, J. (1993). 8 1/2. In 19th International Computer Measurement Group Conference, CMG 1993 (pp. 586–595). Computer Measurement Group Inc. https://doi.org/10.36019/9780813567501-004
