Explainable AI and Reinforcement Learning—A Systematic Review of Current Approaches and Trends

120 citations · 248 Mendeley readers

Abstract

Research into Explainable Artificial Intelligence (XAI) has been increasing in recent years in response to the need for greater transparency and trust in AI. This is particularly important as AI is used in sensitive domains with societal, ethical, and safety implications. Work in XAI has primarily focused on Machine Learning (ML) for classification, decision, or action, and detailed systematic reviews of that work have already been undertaken. This review explores current approaches to, and limitations of, XAI in the area of Reinforcement Learning (RL). From 520 search results, 25 studies (including 5 identified through snowball sampling) are reviewed, highlighting visualization, query-based explanations, policy summarization, human-in-the-loop collaboration, and verification as trends in this area. Limitations of the studies are presented, particularly the lack of user studies, the prevalence of toy examples, and difficulties in providing understandable explanations. Areas for future study are identified, including immersive visualization and symbolic representation.

Citation (APA)

Wells, L., & Bednarz, T. (2021). Explainable AI and Reinforcement Learning—A Systematic Review of Current Approaches and Trends. Frontiers in Artificial Intelligence. https://doi.org/10.3389/frai.2021.550030
