Building the Foundation of Robot Explanation Generation Using Behavior Trees

Abstract

As autonomous robots continue to be deployed near people, they need to be able to explain their actions. In this article, we focus on organizing and representing complex tasks in a way that makes them readily explainable. Many actions consist of sub-actions, each of which may have several sub-actions of its own, and the robot must be able to represent these complex actions before it can explain them. To generate explanations for robot behavior, we propose using Behavior Trees (BTs), which are a powerful and rich tool for robot task specification and execution. However, for BTs to be used for robot explanations, their free-form, static structure must be adapted. In this work, we add structure to previously free-form BTs by framing them as a set of semantic sets {goal, subgoals, steps, actions} and subsequently build explanation generation algorithms that answer questions seeking causal information about robot behavior. We make BTs less static with an algorithm that inserts a subgoal that satisfies all dependencies. We evaluate our BTs for robot explanation generation in two domains: a kitting task to assemble a gearbox, and a taxi simulation. Code for the behavior trees (in XML) and all the algorithms is available at github.com/uml-robotics/robot-explanation-BTs.
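To make the framing concrete, below is a minimal sketch (in Python, not the authors' XML implementation linked above) of how a BT node annotated with the semantic levels {goal, subgoal, step, action} could answer a causal "Why did you do X?" question by climbing one semantic level. All class, function, and node names here are illustrative assumptions, not taken from the paper's code.

    from dataclasses import dataclass, field
    from typing import Optional, List

    # Semantic levels from the paper's framing: {goal, subgoals, steps, actions}.
    LEVELS = ("goal", "subgoal", "step", "action")

    @dataclass
    class BTNode:
        name: str           # human-readable label, e.g., "pick large gear"
        level: str          # one of LEVELS
        parent: Optional["BTNode"] = None
        children: List["BTNode"] = field(default_factory=list)

        def add(self, child: "BTNode") -> "BTNode":
            """Attach a child node and record its parent for upward traversal."""
            child.parent = self
            self.children.append(child)
            return child

    def explain_why(node: BTNode) -> str:
        """Answer a causal 'why' question by referencing the node one
        semantic level above the queried behavior."""
        if node.parent is None:
            return f"'{node.name}' is my overall goal."
        return (f"I performed the {node.level} '{node.name}' "
                f"in order to achieve the {node.parent.level} "
                f"'{node.parent.name}'.")

    # Toy kitting example (node names are hypothetical):
    goal = BTNode("assemble gearbox", "goal")
    sub = goal.add(BTNode("kit large gear", "subgoal"))
    step = sub.add(BTNode("pick large gear", "step"))
    act = step.add(BTNode("close gripper", "action"))

    print(explain_why(act))
    # -> I performed the action 'close gripper' in order to achieve
    #    the step 'pick large gear'.

Walking further up the parent chain would yield progressively higher-level answers (step, then subgoal, then goal), which mirrors the paper's idea of grounding causal explanations in the tree's semantic hierarchy.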

Citation (APA)

Han, Z., Giger, D., Allspaw, J., Lee, M. S., Admoni, H., & Yanco, H. A. (2021). Building the Foundation of Robot Explanation Generation Using Behavior Trees. ACM Transactions on Human-Robot Interaction, 10(3). https://doi.org/10.1145/3457185
