Implementation and Evaluation of Algorithms for Realizing Explainable Autonomous Robots

Abstract

For autonomous robots to gain the trust of humans and realize their full potential in society, they must be able to explain the reasons for their behavioral decisions. Defining explainable autonomous robots (XAR) as robots with such explanatory capabilities, we identify four requirements for their realization: 1) obtaining an interpretable decision space, 2) estimating the user's world model, 3) extracting the information most important for conveying the robot's policy, and 4) generating explanations based on explanatory factors. To date, these four elements have been studied independently. In this study, we first implement an explanation algorithm that integrates all four elements. We then evaluate the implemented algorithm through a large-scale subject experiment. The algorithm is shown to generate explanations that humans find acceptable, and the results yield several insights and suggestions for future research on XAR. For example, we found that a robot that can give acceptable explanations is more likely to gain people's trust, and that the questions 'Why A?' and 'Why not A?' should be answered in different ways.
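The four elements above form a pipeline. The sketch below is a minimal illustration of one way they could fit together, assuming a toy factor-based decision representation; every name (Factor, estimate_user_model, extract_key_factors, generate_explanation) and the weighting scheme are hypothetical, not the authors' implementation.

```python
# Minimal, hypothetical sketch of how the four XAR elements could fit together.
# Every name and scoring rule below is an illustrative assumption, not the
# authors' implementation.
from dataclasses import dataclass

@dataclass
class Factor:
    name: str            # label from the interpretable decision space (element 1)
    robot_weight: float  # importance of this factor in the robot's decision
    user_weight: float   # importance the estimated user model assigns to it

def estimate_user_model(factors, observed_expectation):
    """Element 2: nudge the user-model weights toward what the user expected."""
    for f in factors:
        if f.name == observed_expectation:
            f.user_weight += 0.5
    return factors

def extract_key_factors(factors, top_k=2):
    """Element 3: rank factors by the robot-vs-user weight gap, i.e. the
    information most worth conveying to reconcile the two models."""
    return sorted(factors,
                  key=lambda f: abs(f.robot_weight - f.user_weight),
                  reverse=True)[:top_k]

def generate_explanation(action, key_factors, rejected=None):
    """Element 4: verbalize. 'Why A?' cites supporting factors; 'Why not B?'
    contrasts the rejected alternative, reflecting the finding that the two
    question types call for different explanations."""
    reasons = ", ".join(f.name for f in key_factors)
    if rejected:
        return f"I did not choose '{rejected}' because {reasons} favored '{action}'."
    return f"I chose '{action}' because of: {reasons}."

# Element 1 is assumed to have already produced these interpretable factors.
factors = [Factor("obstacle ahead", 0.9, 0.1),
           Factor("shortest path", 0.2, 0.8),
           Factor("battery level", 0.4, 0.3)]
factors = estimate_user_model(factors, "shortest path")
key = extract_key_factors(factors)
print(generate_explanation("turn left", key))                          # "Why A?"
print(generate_explanation("turn left", key, rejected="go straight"))  # "Why not B?"
```

The key design point in this sketch is element 3: what the robot explains is selected by the mismatch between its own decision factors and the estimated user model, rather than by the factors' absolute importance alone.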

Citation (APA)

Sakai, T., Nagai, T., & Abe, K. (2023). Implementation and Evaluation of Algorithms for Realizing Explainable Autonomous Robots. IEEE Access, 11, 105299–105313. https://doi.org/10.1109/ACCESS.2023.3303193
