Local and Global Explanations of Agent Behavior: Integrating Strategy Summaries with Saliency Maps (Extended Abstract)

Abstract

With advances in reinforcement learning (RL), agents are now being developed in high-stakes application domains such as healthcare and transportation. Explaining the behavior of these agents is challenging, as they act in large state spaces and their decision-making can be affected by delayed rewards. In this paper, we explore a combination of global explanations, which attempt to convey the agent's overall behavior, and local explanations, which provide information about the agent's decision-making in a particular state. Specifically, we augment strategy summaries, which demonstrate the agent's actions in a range of states, with saliency maps that highlight the information the agent attends to. Our user study shows that intelligently choosing which states to include in the summary (global information) improves participants' analysis of the agents. We find mixed results for augmenting summaries with saliency maps (local information).
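To illustrate the two explanation components the abstract combines, the sketch below pairs a HIGHLIGHTS-style state-importance score for selecting summary states (the spread between the best and worst Q-values in a state) with a simple gradient-based saliency map. This is only a minimal sketch under assumed choices: the Q-network, the 84x84 input shape, and the helper names (state_importance, saliency_map, summarize) are illustrative assumptions, not the agents or implementation used in the paper.

import torch
import torch.nn as nn

# Illustrative Q-network; the paper's agents and architectures may differ.
q_net = nn.Sequential(nn.Flatten(), nn.Linear(84 * 84, 256), nn.ReLU(), nn.Linear(256, 4))

def state_importance(state: torch.Tensor) -> float:
    """HIGHLIGHTS-style importance: spread between the best and worst Q-value.
    States where the choice of action matters most are summary candidates."""
    with torch.no_grad():
        q = q_net(state.unsqueeze(0)).squeeze(0)
    return (q.max() - q.min()).item()

def saliency_map(state: torch.Tensor) -> torch.Tensor:
    """Simple gradient saliency: how much each input pixel affects the chosen
    action's Q-value (one of several possible saliency methods)."""
    s = state.clone().requires_grad_(True)
    q = q_net(s.unsqueeze(0)).squeeze(0)
    q[q.argmax()].backward()
    return s.grad.abs()

def summarize(states, k=5):
    """Global + local explanation: keep the k most important states (summary)
    and attach a saliency map (local explanation) to each."""
    ranked = sorted(states, key=state_importance, reverse=True)[:k]
    return [(s, saliency_map(s)) for s in ranked]

# Usage with dummy 84x84 observations:
states = [torch.rand(84, 84) for _ in range(100)]
summary = summarize(states, k=5)

In this sketch, the importance score supplies the global part of the explanation (which states to show), while the saliency map supplies the local part (what the agent attends to in each shown state).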

Cite (APA)

Huber, T., Weitz, K., André, E., & Amir, O. (2022). Local and Global Explanations of Agent Behavior: Integrating Strategy Summaries with Saliency Maps (Extended Abstract). In IJCAI International Joint Conference on Artificial Intelligence (pp. 5747–5751). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2022/803
