Explanation is necessary for humans to understand and accept decisions made by an AI system when the system's goal is known. It is even more important when the AI system makes decisions in multi-agent environments, where the human does not know the system's goals since they may depend on other agents' preferences. In such situations, explanations should aim to increase user satisfaction, taking into account the system's decision, the user's and the other agents' preferences, the environment settings, and properties such as fairness, envy, and privacy. Generating explanations that will increase user satisfaction is very challenging; to this end, we propose a new research direction: Explainable decisions in Multi-Agent Environments (xMASE). We then review the state of the art and discuss research directions towards efficient methodologies and algorithms for generating explanations that will increase users' satisfaction with AI systems' decisions in multi-agent environments.
Kraus, S., Azaria, A., Fiosina, J., Greve, M., Hazon, N., Kolbe, L., … Vollrath, M. (2020). AI for explaining decisions in multi-agent environments. In Proceedings of the 34th AAAI Conference on Artificial Intelligence (AAAI 2020) (pp. 13534–13538). AAAI Press. https://doi.org/10.1609/aaai.v34i09.7077