Recently, the eXplainable AI (XAI) research community has focused on developing methods that make Machine Learning (ML) predictors more interpretable and explainable. Unfortunately, researchers are struggling to converge on an unambiguous definition of notions such as interpretation or explanation, which are often (and mistakenly) used interchangeably. Furthermore, despite the sound metaphors that Multi-Agent Systems (MAS) could easily provide to address this challenge, an agent-oriented perspective on the topic is still missing. Thus, this paper proposes an abstract and formal framework for XAI-based MAS, reconciling notions and results from the literature.
Ciatto, G., Schumacher, M. I., Omicini, A., & Calvaresi, D. (2020). Agent-Based Explanations in AI: Towards an Abstract Framework. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12175 LNAI, pp. 3–20). Springer. https://doi.org/10.1007/978-3-030-51924-7_1