Towards making NLG a voice for interpretable Machine Learning

Citations: 14
Readers: 86 (Mendeley users with this article in their library)

Abstract

This paper presents a study of the issues involved in using NLG to humanise explanations from LIME, a popular interpretable machine learning framework. Our study shows that the self-reported rating of the NLG explanation was higher than that of a non-NLG explanation. However, when tested for comprehension, the results were less clear-cut, showing the need for further studies to uncover the factors responsible for high-quality NLG explanations.
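The paper's own NLG system is not reproduced here, but as a rough illustration of the kind of pipeline it studies, the sketch below uses the public `lime` package to explain a single prediction and then verbalises the resulting (condition, weight) pairs with a simple string template. The dataset, model, and template wording are illustrative assumptions, not the authors' setup.

```python
# A rough sketch, NOT the authors' system: template-based verbalisation
# of LIME feature weights, using the public `lime` package API.
# Dataset, model, and templates are illustrative assumptions.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    discretize_continuous=True,
)

instance = data.data[0]
pred = int(model.predict([instance])[0])

# Explain the predicted class; exp.as_list() yields (condition, weight)
# pairs, e.g. ("petal width (cm) <= 0.80", 0.42).
exp = explainer.explain_instance(
    instance, model.predict_proba, labels=(pred,), num_features=3
)

def verbalise(pairs, class_name):
    """Turn LIME (condition, weight) pairs into one English sentence."""
    clauses = [
        f"{cond} {'supports' if weight > 0 else 'counts against'} this outcome"
        for cond, weight in pairs
    ]
    return (f"The model predicted '{class_name}' mainly because "
            + "; ".join(clauses) + ".")

print(verbalise(exp.as_list(label=pred), data.target_names[pred]))
```

A full NLG pipeline would go beyond string templates (content selection, aggregation, surface realisation), and the quality gap between such outputs and raw LIME charts is precisely what the paper's comprehension study probes.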

Citation (APA)

Forrest, J., Sripada, S., Pang, W., & Coghill, G. M. (2018). Towards making NLG a voice for interpretable Machine Learning. In INLG 2018 - 11th International Natural Language Generation Conference, Proceedings of the Conference (pp. 177–182). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/w18-6522
