On generating explainable plans with assumption-based argumentation

Abstract

Planning is a classic problem in Artificial Intelligence (AI). Recently, the need for creating “Explainable AI” has been recognised and voiced by many researchers. Leveraging the strength of argumentation, in particular the Related Admissible semantics for generating explanations, this work makes an initial step towards “explainable planning”. We illustrate (1) how plan generation can be equated with constructing acceptable arguments and (2) how explanations for both “planning solutions” and “invalid plans” can be obtained by extracting information from an arguing process. We present an argumentation-based model which takes plans written in a STRIPS-like language as its input and returns Assumption-based Argumentation (ABA) frameworks as its output. The presented plan construction mapping is both sound and complete, in that the planning problem has a solution if and only if its corresponding ABA framework has a set of Related Admissible arguments with the planning goal as their topic. We use the classic Tower of Hanoi puzzle as our case study and demonstrate how ABA can be used to solve this planning puzzle while giving explanations.
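To make the setting concrete, the following is a minimal sketch of the kind of planning problem the paper targets: the Tower of Hanoi expressed as a state-space search, solved here by plain breadth-first search. This is an illustrative stand-in only — it does not implement the paper's ABA construction or the Related Admissible semantics; the state encoding and function names are assumptions of this sketch.

```python
from collections import deque

def hanoi_successors(state):
    # state: tuple of 3 tuples, each a peg's stack of discs (top at index 0).
    # A move is legal if the source peg is non-empty and the moved disc is
    # smaller than the destination peg's top disc (or the peg is empty).
    for i, src in enumerate(state):
        if not src:
            continue
        disc = src[0]
        for j, dst in enumerate(state):
            if i != j and (not dst or dst[0] > disc):
                new = [list(peg) for peg in state]
                new[i].pop(0)
                new[j].insert(0, disc)
                yield tuple(tuple(peg) for peg in new)

def solve(start, goal):
    # Breadth-first search over states; returns the shortest sequence of
    # states from start to goal (a valid plan, move by move).
    frontier = deque([(start, [start])])
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path
        for nxt in hanoi_successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [nxt]))

start = ((1, 2, 3), (), ())   # all three discs on the first peg
goal = ((), (), (1, 2, 3))    # all three discs on the third peg
plan = solve(start, goal)
print(len(plan) - 1)  # 7 — the classic minimum of 2^3 - 1 moves
```

In the paper's approach, rather than searching states directly, such a problem would be compiled into an ABA framework whose acceptable arguments correspond to valid plans, with explanations extracted from the argumentation process.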

Cite

CITATION STYLE

APA

Fan, X. (2018). On generating explainable plans with assumption-based argumentation. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11224 LNAI, pp. 344–361). Springer Verlag. https://doi.org/10.1007/978-3-030-03098-8_21
