Human factors that affected the benchmarking of NAFIS: A case study

Abstract

The procurement of the National Automated Fingerprint Identification System (NAFIS) involved thorough and rigorous assessment of the system throughout the development phase of the contract. The system was benchmarked to assess operational accuracy and performance, while HCI trials assessed usability and other human-computer interaction factors. Both were major issues affecting performance and had to be addressed to ensure the system was successfully implemented. Human factors such as the on-screen viewing of respondents, interactive encoding, manual verification, search specification, training, and the level of user experience with AFIS systems are often subjective, difficult to measure, and hard to control in a test environment. These factors introduce ambiguity into the accuracy data, making it difficult to ascertain whether the performance measures for the system reflect the system's accuracy or the user's ability to work with the interface. Much work was done in the design of the NAFIS tests to mitigate these effects. The benchmark trials and HCI trials complemented each other: together they provided useful information for measuring the usability, throughput and operational performance of NAFIS, and also helped to distinguish areas of the system where human factors affected accuracy and performance.

Citation (APA)

Suman, A. (2003). Human factors that affected the benchmarking of NAFIS: A case study. In Lecture Notes in Artificial Intelligence (Subseries of Lecture Notes in Computer Science) (Vol. 2774 PART 2, pp. 1235–1244). Springer Verlag. https://doi.org/10.1007/978-3-540-45226-3_168
