Many works have proposed architectures and models that incorporate explanation into an agent's design for various reasons (e.g., improving human-agent teamwork, training in virtual environments [10], belief revision [8]). With these novel architectures a problem has emerged: how can such explanations be communicated within a goal-directed, rule-governed dialogue system? In this paper we formalize Walton's CE dialectical system of explanatory dialogues in Prakken's framework. We then extend this formalization into the Extended CE system by generalizing the protocol and incorporating a general account of dialectical shifts. More precisely, we show how a shift to any dialogue type can take place; as an example, we describe a shift to argumentative dialogue whose goal is to give the explainee the possibility of challenging the explainer's explanations. In addition, we propose the use of commitment and understanding stores to avoid circular and inconsistent explanations and to judge the success of an explanation. We show that, under specific conditions, the dialogue terminates in finitely many steps, and that the space complexity of the stores grows polynomially in the size of the explanatory model.
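The role of the stores can be illustrated with a minimal sketch. The following Python fragment is a hypothetical toy model, not the paper's formalization: it assumes a finite explanatory model mapping each fact to one explanation, and an "understanding store" that records explanations already given. Rejecting repeated explanations rules out circularity, the loop terminates after at most one move per fact in the finite model, and the store grows at most linearly (hence polynomially) in the model's size.

```python
def run_dialogue(explanatory_model, understood):
    """Toy explanatory-dialogue loop.

    explanatory_model: dict mapping each fact to its explanation
                       (stands in for the finite explanatory model).
    understood:        predicate deciding whether the explainee
                       accepts an explanation (stands in for the
                       understanding check).
    """
    store = set()                      # understanding store
    agenda = list(explanatory_model)   # facts still to be explained
    transcript = []
    while agenda:                      # terminates: agenda only shrinks
        fact = agenda.pop()
        explanation = explanatory_model[fact]
        if explanation in store:       # circular / repeated explanation
            transcript.append(("reject", fact, explanation))
            continue
        store.add(explanation)         # store size <= |explanatory_model|
        move = "accept" if understood(explanation) else "challenge"
        transcript.append((move, fact, explanation))
    return store, transcript
```

In this sketch a "challenge" move is where a dialectical shift to argumentative dialogue could be triggered; the names `run_dialogue` and `understood` are illustrative assumptions, not notation from the paper.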
CITATION STYLE
Arioua, A., & Croitoru, M. (2015). Formalizing explanatory dialogues. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9310, pp. 282–297). Springer Verlag. https://doi.org/10.1007/978-3-319-23540-0_19