Issues and advances in anomaly detection evaluation for joint human-automated systems

Abstract

As human-managed systems become more complex, automated anomaly detection can provide valuable assistance, but only if it is effective. Rigorous evaluation of automated detection is therefore essential before it is deployed in operational systems. We identified recurring issues in evaluation practice that limit how far conclusions from published studies can be generalized to broader applications. In this paper, we demonstrate the implications of these issues and illustrate solutions. We show how receiver operating characteristic (ROC) curves can reveal performance tradeoffs that single-metric reporting masks, and how evaluating against multiple simulated data examples can prevent the biases that arise when a single training and testing example is used. We also provide methods for incorporating detection latency into tradeoff analyses. Applying these methods will give researchers, engineers, and decision makers a more objective basis for evaluating anomaly detection performance, yielding greater utility, better performance, and cost savings in systems engineering.
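The evaluation ideas summarized above lend themselves to a compact illustration. The sketch below is not the authors' code: it generates synthetic anomaly scores, uses scikit-learn's roc_curve to trace the full tradeoff curve that a single-metric report would hide, averages the area under the curve over multiple simulated runs to avoid single-example bias, and records detection latency (samples from fault onset to first alarm) at each threshold. The simulate_run and detection_latency helpers, and all parameter values, are hypothetical placeholders.

```python
# A minimal sketch (not the paper's code) of the evaluation ideas in the
# abstract: sweep a detection threshold to trace an ROC curve, average over
# several simulated runs instead of one, and record detection latency per
# threshold. All data here are synthetic placeholders.
import numpy as np
from sklearn.metrics import roc_curve, auc

rng = np.random.default_rng(0)

def simulate_run(n=500, fault_onset=250):
    """Synthetic anomaly scores: baseline noise, elevated after fault onset."""
    scores = rng.normal(0.0, 1.0, n)
    scores[fault_onset:] += 1.5          # anomalous shift after onset
    labels = np.zeros(n, dtype=int)
    labels[fault_onset:] = 1
    return scores, labels, fault_onset

def detection_latency(scores, onset, threshold):
    """Samples between fault onset and the first post-onset alarm."""
    post = np.flatnonzero(scores[onset:] > threshold)
    return post[0] if post.size else np.inf   # inf if never detected

# Evaluate across several independent simulated runs, not a single example.
thresholds = np.linspace(-1.0, 3.0, 9)
aucs, latencies = [], {t: [] for t in thresholds}
for _ in range(20):
    scores, labels, onset = simulate_run()
    fpr, tpr, _ = roc_curve(labels, scores)   # full tradeoff curve
    aucs.append(auc(fpr, tpr))
    for t in thresholds:
        latencies[t].append(detection_latency(scores, onset, t))

print(f"Mean AUC over runs: {np.mean(aucs):.3f} (sd {np.std(aucs):.3f})")
for t in thresholds:
    detected = [x for x in latencies[t] if np.isfinite(x)]
    missed = latencies[t].count(np.inf)
    mean_lat = np.mean(detected) if detected else float("nan")
    print(f"threshold {t:+.2f}: mean latency {mean_lat:6.1f} samples, "
          f"{missed} missed detections")
```

Sweeping the threshold explicitly makes the latency and missed-detection costs visible alongside the ROC tradeoff, giving the kind of joint view of detection performance the abstract advocates.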

Citation (APA)

Rieth, C. A., Amsel, B. D., Tran, R., & Cook, M. B. (2018). Issues and advances in anomaly detection evaluation for joint human-automated systems. In Advances in Intelligent Systems and Computing (Vol. 595, pp. 52–63). Springer Verlag. https://doi.org/10.1007/978-3-319-60384-1_6
