This paper presents a study of the issues involved in using NLG to humanise explanations from LIME, a popular interpretable machine learning framework. Our study shows that self-reported ratings of the NLG explanation were higher than those of the non-NLG explanation. However, when tested for comprehension, the results were less clear-cut, indicating the need for further studies to uncover the factors responsible for high-quality NLG explanations.
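To make the setting concrete, the sketch below shows one common way to obtain a LIME explanation for a tabular classifier and verbalise its feature weights with a simple hand-written template. This is an illustration, not the system evaluated in the paper: the dataset, model, and template wording are all assumptions made for the example.

    # Minimal sketch: verbalising a LIME explanation with a template.
    # The dataset, model, and sentence template are illustrative choices,
    # not the paper's actual study materials.
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from lime.lime_tabular import LimeTabularExplainer

    data = load_iris()
    model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

    explainer = LimeTabularExplainer(
        data.data,
        feature_names=data.feature_names,
        class_names=list(data.target_names),
        mode="classification",
    )
    exp = explainer.explain_instance(
        data.data[0], model.predict_proba, num_features=3
    )

    # exp.as_list() yields (condition, weight) pairs such as
    # ("petal width (cm) <= 0.3", 0.25); turn each into a sentence.
    sentences = []
    for condition, weight in exp.as_list():
        direction = "supports" if weight > 0 else "counts against"
        sentences.append(
            f"The fact that {condition} {direction} the prediction "
            f"(weight {weight:+.2f})."
        )
    print(" ".join(sentences))

The study's comparison is, in effect, between raw output like exp.as_list() (the non-NLG condition) and fluent text of the kind printed above (the NLG condition).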
Citation
Forrest, J., Sripada, S., Pang, W., & Coghill, G. M. (2018). Towards making NLG a voice for interpretable Machine Learning. In INLG 2018 - 11th International Natural Language Generation Conference, Proceedings of the Conference (pp. 177–182). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/w18-6522