Explanation Regeneration via Information Bottleneck

Abstract

Explaining the black-box predictions of NLP models naturally and accurately is an important open problem in natural language generation. These free-text explanations are expected to contain sufficient and carefully selected evidence to form supportive arguments for predictions. Thanks to the superior generative capacity of large pretrained language models (PLMs), recent work built on prompt engineering enables explanations to be generated without task-specific training. However, explanations generated through single-pass prompting often lack sufficiency and conciseness, owing to prompt complexity and hallucination. To distill the essence of a PLM's output while discarding the dross, we propose EIB, a method that produces sufficient and concise explanations via the information bottleneck theory. EIB regenerates an explanation by polishing the PLM's single-pass output while retaining the information that supports the content being explained, balancing two information bottleneck objectives. Experiments on two different tasks verify the effectiveness of EIB through automatic evaluation and a thorough human evaluation.
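
As background for the abstract's mention of "balancing two information bottleneck objectives", the classical information bottleneck trade-off (a standard formulation following Tishby et al.; the notation below is assumed for illustration, not taken from this paper) can be written as:

$$\min_{p(t \mid x)} \; I(X; T) \;-\; \beta \, I(T; Y)$$

where $X$ is the source signal (here, the PLM's single-pass explanation), $T$ is its compressed representation, $Y$ is the relevance variable (the content being explained), and $\beta$ trades off conciseness (compressing $X$ into $T$) against sufficiency (preserving the information in $T$ about $Y$).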

Cite

APA

Li, Q., Wu, Z., Kong, L., & Bi, W. (2023). Explanation Regeneration via Information Bottleneck. In Findings of the Association for Computational Linguistics: ACL 2023 (pp. 12081–12102). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.findings-acl.765
