Explainable autonomy: A study of explanation styles for building clear mental models

23 citations · 106 Mendeley readers

Abstract

As unmanned vehicles become more autonomous, it is important to maintain a high level of transparency regarding their behaviour and how they operate. This is particularly important in remote locations where they cannot be directly observed. Here, we describe a method for generating natural-language explanations of autonomous system behaviour and reasoning. Our method derives an interpretable model of autonomy by having an expert 'speak aloud', and generates explanations at various levels of detail based on this model. Through an online evaluation study with operators, we show that explanations are most effective when they offer multiple possible reasons but are tersely worded. This work has implications for the design of interfaces for autonomy, as well as for explainable AI and operator training.
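The paper itself presents no code; the sketch below is only an illustration of the reported finding (explanations that list multiple possible reasons but keep each one terse), written in Python under assumptions of ours. The names (`Reason`, `explain`) and the example reasons are hypothetical, not taken from the authors' system.

```python
from dataclasses import dataclass

@dataclass
class Reason:
    """One hypothetical cause of an autonomous-vehicle behaviour."""
    cause: str         # terse wording, e.g. "low battery"
    detail: str        # longer wording for the verbose style
    likelihood: float  # expert-assigned confidence in [0, 1]

def explain(behaviour: str, reasons: list[Reason], verbose: bool = False) -> str:
    """Render an explanation of a behaviour.

    The default (terse) style mirrors what the study found most
    effective: several possible reasons, each worded briefly.
    """
    ranked = sorted(reasons, key=lambda r: r.likelihood, reverse=True)
    if verbose:
        parts = [f"{r.detail} ({r.likelihood:.0%} likely)" for r in ranked]
    else:
        parts = [r.cause for r in ranked]
    return f"The vehicle {behaviour} because of: " + "; or ".join(parts) + "."

# Illustrative use (values invented for the example):
reasons = [
    Reason("low battery",
           "its battery dropped below the safe-return threshold", 0.7),
    Reason("lost comms",
           "it lost the acoustic link with the operator", 0.2),
]
print(explain("returned to base", reasons))
# -> The vehicle returned to base because of: low battery; or lost comms.
```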

Cite (APA)

Chiyah Garcia, F. J., Robb, D. A., Liu, X., Laskov, A., Patron, P., & Hastie, H. (2018). Explainable autonomy: A study of explanation styles for building clear mental models. In INLG 2018 - 11th International Natural Language Generation Conference, Proceedings of the Conference (pp. 99–108). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/w18-6511
