Good quality explanations of artificial intelligence (XAI) reasoning must be written (and evaluated) for an explanatory purpose, targeted towards their readers, have a good narrative and causal structure, and highlight where uncertainty and data quality affect the AI output. I discuss these challenges from a Natural Language Generation (NLG) perspective, and highlight four specific “NLG for XAI” research challenges.
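The abstract's desiderata can be made concrete with a small sketch. The following Python fragment is a hypothetical illustration, not taken from the paper: a minimal template-based generator whose output states the AI prediction, gives a causal reason, and explicitly flags low confidence and missing input data, mirroring the "uncertainty and data quality" requirement. All names (explain, missing_fields) and the 0.7 confidence threshold are assumptions chosen for illustration.

```python
# Hypothetical sketch (not the paper's method): a template-based NLG
# explanation that states the output, gives a causal reason, and flags
# uncertainty and data-quality issues, per the abstract's desiderata.

def explain(prediction: str, cause: str, confidence: float,
            missing_fields: list[str]) -> str:
    """Render a short natural-language explanation of an AI output."""
    parts = [f"The system predicts {prediction} because {cause}."]

    # Hedge the claim when the model itself is uncertain
    # (0.7 is an arbitrary illustrative threshold).
    if confidence < 0.7:
        parts.append(
            f"This prediction is uncertain (confidence {confidence:.0%})."
        )

    # Surface data-quality problems that affect the output.
    if missing_fields:
        parts.append(
            "Note: the following inputs were missing or unreliable: "
            + ", ".join(missing_fields) + "."
        )

    return " ".join(parts)


if __name__ == "__main__":
    print(explain("a high risk of readmission",
                  "blood pressure rose over the last three visits",
                  confidence=0.62,
                  missing_fields=["smoking status"]))
```

Even this toy version shows why the challenges are hard: the causal phrasing, the hedging threshold, and which data-quality issues to mention would all need to be tuned to the reader and the explanatory purpose.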
Citation:
Reiter, E. (2019). Natural language generation challenges for explainable AI. In NL4XAI 2019 - 1st Workshop on Interactive Natural Language Technology for Explainable Artificial Intelligence, Proceedings of the Workshop (pp. 3-7). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/W19-8402