Research into Explainable Artificial Intelligence (XAI) has expanded in recent years in response to the need for greater transparency and trust in AI. This is particularly important as AI is increasingly deployed in sensitive domains with societal, ethical, and safety implications. Work in XAI has primarily focused on Machine Learning (ML) for classification, decision, or action, and detailed systematic reviews of that work have already been undertaken. This review instead explores current approaches to, and limitations of, XAI in the area of Reinforcement Learning (RL). From 520 search results, 25 studies (including 5 identified through snowball sampling) are reviewed, highlighting visualization, query-based explanations, policy summarization, human-in-the-loop collaboration, and verification as trends in this area. Limitations of the reviewed studies are presented, in particular a lack of user studies, a prevalence of toy examples, and difficulty in producing understandable explanations. Areas for future study are identified, including immersive visualization and symbolic representation.
Wells, L., & Bednarz, T. (2021). Explainable AI and Reinforcement Learning—A Systematic Review of Current Approaches and Trends. Frontiers in Artificial Intelligence. https://doi.org/10.3389/frai.2021.550030