Natural language generation challenges for explainable AI

Abstract

Good quality explanations of artificial intelligence (XAI) reasoning must be written (and evaluated) for an explanatory purpose, targeted towards their readers, have a good narrative and causal structure, and highlight where uncertainty and data quality affect the AI output. I discuss these challenges from a Natural Language Generation (NLG) perspective, and highlight four specific “NLG for XAI” research challenges.

Cite

APA

Reiter, E. (2019). Natural language generation challenges for explainable AI. In Proceedings of the 1st Workshop on Interactive Natural Language Technology for Explainable Artificial Intelligence (NL4XAI 2019) (pp. 3–7). Association for Computational Linguistics. https://doi.org/10.18653/v1/W19-8402
