Visualizations for an explainable planning agent


Abstract

In this demonstration, we report on the visualization capabilities of an Explainable AI Planning (XAIP) agent that can support human-in-the-loop decision making. Imposing transparency and explainability requirements on such agents is crucial for establishing human trust and common ground with an end-to-end automated planning system, and visualizing the agent's internal decision-making processes is a key step towards achieving this. This may include externalizing the "brain" of the agent: starting from its sensory inputs, up through the progressively higher-order decisions it makes to drive its planning components. We demonstrate these functionalities in the context of a smart assistant in the Cognitive Environments Laboratory at IBM's T.J. Watson Research Center.

Citation (APA)

Chakraborti, T., Fadnis, K. P., Talamadupula, K., Dholakia, M., Srivastava, B., Kephart, J. O., & Bellamy, R. K. E. (2018). Visualizations for an explainable planning agent. In IJCAI International Joint Conference on Artificial Intelligence (Vol. 2018-July, pp. 5820–5822). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2018/849
