Machine Learning models can output confident but incorrect predictions. To address this problem, ML researchers use various techniques to reliably estimate model uncertainty, usually evaluated on controlled benchmarks once the model has been trained. We explore how the two types of uncertainty - aleatoric and epistemic - can help non-expert users understand the strengths and weaknesses of a classifier in an interactive setting. We are interested in users' perception of the difference between aleatoric and epistemic uncertainty, and in how they use the two to teach and understand the classifier. We conducted an experiment in which non-experts train a classifier to recognize card images and are then tested on their ability to predict classifier outcomes. Participants who used either larger or more varied training sets significantly improved their understanding of uncertainty, both epistemic and aleatoric. However, participants who relied on the uncertainty measure to guide their choice of training data did not significantly improve classifier training, nor were they better able to guess the classifier outcome. We identified three specific situations where participants successfully distinguished aleatoric from epistemic uncertainty: placing a card in the exact same position as a training card; placing different cards next to each other; and placing a non-card, such as their hand, next to or on top of a card. We discuss our methodology for estimating uncertainty in Interactive Machine Learning systems and question the need for two-level uncertainty in Machine Teaching.
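The abstract does not specify how the classifier's uncertainty is computed; a common way to separate the two types is the entropy decomposition over stochastic forward passes (e.g. MC dropout or an ensemble). The sketch below is a minimal, hypothetical illustration of that decomposition, not the paper's implementation: total predictive entropy splits into an aleatoric part (expected entropy per pass) and an epistemic part (the remaining mutual information). All function names and the toy data are assumptions.

```python
import numpy as np

def entropy(p, eps=1e-12):
    """Shannon entropy of a categorical distribution along the last axis."""
    return -np.sum(p * np.log(p + eps), axis=-1)

def decompose_uncertainty(prob_samples):
    """Split predictive uncertainty into aleatoric and epistemic parts.

    prob_samples: array of shape (T, num_classes) with class probabilities
    from T stochastic forward passes (e.g. MC dropout or ensemble members).
    """
    mean_probs = prob_samples.mean(axis=0)          # averaged prediction
    total = entropy(mean_probs)                     # total predictive entropy
    aleatoric = entropy(prob_samples).mean(axis=0)  # expected per-pass entropy
    epistemic = total - aleatoric                   # mutual information
    return aleatoric, epistemic

# Toy example: 20 stochastic passes over a 4-class problem.
rng = np.random.default_rng(0)
logits = rng.normal(size=(20, 4))
probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
aleatoric, epistemic = decompose_uncertainty(probs)
print(f"aleatoric={aleatoric:.3f}, epistemic={epistemic:.3f}")
```

Under this decomposition, high aleatoric uncertainty signals inherently ambiguous inputs (e.g. overlapping cards), while high epistemic uncertainty signals inputs unlike anything in the training set, which is the distinction the study asks participants to perceive.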
Citation
Sanchez, T., Caramiaux, B., Thiel, P., & MacKay, W. E. (2022). Deep Learning Uncertainty in Machine Teaching. In International Conference on Intelligent User Interfaces, Proceedings IUI (pp. 173–190). Association for Computing Machinery. https://doi.org/10.1145/3490099.3511117