An Argumentation-Based Approach for Explaining Goals Selection in Intelligent Agents

Abstract

During the first step of practical reasoning, i.e. deliberation or goals selection, an intelligent agent generates a set of pursuable goals and then selects which of them it commits to achieve. Explainable Artificial Intelligence (XAI) systems, including intelligent agents, must be able to explain their internal decisions. In the context of goals selection, agents should be able to explain the reasoning path that leads them to select (or not select) a certain goal. In this article, we use an argumentation-based approach to generate explanations about that reasoning path. In addition, we enrich the explanations with information about the conflicts that emerge during the selection process and about how those conflicts were resolved. We propose two types of explanation, a partial one and a complete one, together with a set of explanatory schemes for generating pseudo-natural explanations. Finally, we apply our proposal to the cleaner world scenario.
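
The abstract does not include code, so the following Python sketch is only a rough, hypothetical illustration of the distinction it draws between a partial explanation (the arguments supporting a selected goal) and a complete explanation (additionally, the conflicting arguments and how they were defeated). The names Argument, partial_explanation, complete_explanation, and the cleaner-world facts are invented for illustration and are not the authors' formalism.

from dataclasses import dataclass, field

@dataclass
class Argument:
    claim: str                                          # e.g. "select(clean(slot3))"
    premises: list[str]                                 # beliefs supporting the claim
    attacks: list[str] = field(default_factory=list)    # claims this argument attacks

def partial_explanation(goal: str, accepted: list[Argument]) -> str:
    """Report only the accepted arguments that directly support selecting the goal."""
    support = [a for a in accepted if a.claim == f"select({goal})"]
    reasons = "; ".join(", ".join(a.premises) for a in support)
    return f"Goal '{goal}' was selected because: {reasons}."

def complete_explanation(goal: str, accepted: list[Argument],
                         rejected: list[Argument]) -> str:
    """Also report the conflicting (rejected) arguments and which arguments defeated them."""
    text = partial_explanation(goal, accepted)
    for r in rejected:
        winners = [a.claim for a in accepted if r.claim in a.attacks]
        if winners:
            text += (f" The conflicting argument '{r.claim}' was defeated by "
                     f"{', '.join(winners)}.")
    return text

# Cleaner-world-style example (illustrative values only)
a1 = Argument("select(clean(slot3))",
              premises=["dirt detected at slot3", "robot battery is sufficient"],
              attacks=["select(recharge)"])
a2 = Argument("select(recharge)", premises=["battery below comfort threshold"])
print(complete_explanation("clean(slot3)", accepted=[a1], rejected=[a2]))

In this toy run the complete explanation extends the partial one with the information that the competing goal of recharging was considered but defeated, which mirrors the abstract's point about exposing emerging conflicts and their resolution.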

Citation (APA)

Morveli-Espinoza, M., Tacla, C. A., & Jasinski, H. M. R. (2020). An Argumentation-Based Approach for Explaining Goals Selection in Intelligent Agents. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12320 LNAI, pp. 47–62). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-61380-8_4
