Prediction, Knowledge, and Explainability: Examining the Use of General Value Functions in Machine Knowledge


Abstract

Within computational reinforcement learning, a growing body of work seeks to express an agent's knowledge of its world through large collections of predictions. While systems that encode predictions as General Value Functions (GVFs) have seen numerous developments in both theory and application, whether such approaches are explainable remains unexplored. In this perspective piece, we explore GVFs as a form of explainable AI. To do so, we articulate a subjective, agent-centric approach to explainability in sequential decision-making tasks. We propose that, prior to explaining its decisions to others, a self-supervised agent must be able to introspectively explain decisions to itself. To clarify this point, we review prior applications of GVFs that involve human-agent collaboration. In doing so, we demonstrate that by making their subjective explanations public, predictive knowledge agents can improve the clarity of their operation in collaborative tasks.

Citation (APA)
Kearney, A., Günther, J., & Pilarski, P. M. (2022). Prediction, Knowledge, and Explainability: Examining the Use of General Value Functions in Machine Knowledge. Frontiers in Artificial Intelligence, 5. https://doi.org/10.3389/frai.2022.826724
