As human-managed systems become more complex, automated anomaly detection can provide assistance—but only if it is effective. Rigorous evaluation of automated detection is therefore vital for determining its effectiveness before deployment. We identified recurring issues in evaluation practices that limit how far conclusions from published studies can be generalized to broader applications. In this paper, we demonstrate the implications of these issues and illustrate solutions. We show how receiver operating characteristic (ROC) curves can reveal performance tradeoffs masked by single-metric reporting, and how evaluating on multiple simulated datasets can prevent the biases that arise from using a single training and testing example. We also provide methods for incorporating detection latency into tradeoff analyses. Applying these methods will give researchers, engineers, and decision makers a more objective basis for evaluating anomaly detection performance, yielding greater utility, better performance, and cost savings in systems engineering.
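The single-metric issue the abstract raises can be made concrete: a reported accuracy or hit rate corresponds to one operating point, while sweeping the decision threshold traces out the full ROC tradeoff between hits and false alarms. The sketch below is illustrative only (not the authors' code); the anomaly scores, class means, and thresholds are all hypothetical.

```python
import numpy as np

# Hypothetical detector scores: normal runs (label 0) vs. anomalous runs
# (label 1). Higher score = more anomalous. All parameters are assumptions
# chosen for illustration, not values from the paper.
rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(0.0, 1.0, 500),   # normal operation
                         rng.normal(1.5, 1.0, 500)])  # anomalous operation
labels = np.concatenate([np.zeros(500), np.ones(500)])

def roc_points(scores, labels, thresholds):
    """True-positive and false-positive rates at each decision threshold."""
    pts = []
    for t in thresholds:
        pred = scores >= t                  # alarm when score crosses t
        tpr = np.mean(pred[labels == 1])    # hit rate on anomalies
        fpr = np.mean(pred[labels == 0])    # false-alarm rate on normal data
        pts.append((fpr, tpr))
    return pts

# Each threshold is one operating point; reporting a single metric hides
# the rest of this curve.
for fpr, tpr in roc_points(scores, labels, [0.0, 0.75, 1.5]):
    print(f"FPR={fpr:.2f}  TPR={tpr:.2f}")
```

Raising the threshold lowers both the false-alarm rate and the hit rate, which is exactly the tradeoff a single reported number can mask.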
Citation
Rieth, C. A., Amsel, B. D., Tran, R., & Cook, M. B. (2018). Issues and advances in anomaly detection evaluation for joint human-automated systems. In Advances in Intelligent Systems and Computing (Vol. 595, pp. 52–63). Springer. https://doi.org/10.1007/978-3-319-60384-1_6