Classifiers are applied in many domains where classification errors have significant implications. However, end-users may not always understand the errors and their impact, as error visualizations are typically designed for experts and for improving classifiers. We discuss the specific needs of classifiers' end-users, and a simplified visualization designed to address them. We evaluate this design with users at three levels of expertise, and compare it with ROC curves and confusion matrices. We identify key difficulties with understanding classification errors, and how the visualizations addressed or aggravated them. The main issues concerned confusion of the actual and predicted classes (e.g., confusing False Positives with False Negatives). Machine learning terminology, the complexity of ROC curves, and the symmetry of confusion matrices aggravated these confusions. The end-user-oriented visualization reduced the difficulties by using several visual features to clarify the actual and predicted classes, and by providing more tangible metrics and representations. Our results contribute to supporting end-users' understanding of classification errors, and to making informed decisions when choosing or tuning classifiers.
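As a minimal sketch (not taken from the paper) of the actual-vs-predicted distinction the study found users struggle with, the following Python snippet derives False Positives and False Negatives from a confusion matrix using scikit-learn; the example labels are hypothetical.

```python
from sklearn.metrics import confusion_matrix

# Hypothetical ground-truth (actual) and classifier output (predicted) labels.
actual    = [1, 0, 1, 1, 0, 0, 1, 0]
predicted = [1, 0, 0, 1, 0, 1, 1, 0]

# In scikit-learn's convention, rows are actual classes and columns are
# predicted classes; with binary labels [0, 1] the matrix is
#   [[TN, FP],
#    [FN, TP]]
tn, fp, fn, tp = confusion_matrix(actual, predicted).ravel()

print(f"True Positives:  {tp}")  # actual 1, predicted 1
print(f"False Negatives: {fn}")  # actual 1, predicted 0 (missed positives)
print(f"False Positives: {fp}")  # actual 0, predicted 1 (false alarms)
print(f"True Negatives:  {tn}")  # actual 0, predicted 0
```

Keeping the actual and predicted roles explicit in code, as in the comments above, mirrors the kind of disambiguation the end-user-oriented visualization aims to provide visually.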