MGR: Multi-generator Based Rationalization


Abstract

Rationalization employs a generator and a predictor to construct a self-explaining NLP model, in which the generator selects a subset of human-intelligible pieces of the input text and passes it to the predictor. However, rationalization suffers from two key challenges, spurious correlation and degeneration: the predictor overfits the spurious or meaningless pieces selected by the not-yet well-trained generator, which in turn deteriorates the generator. Although many studies have been proposed to address these challenges, they are usually designed separately and do not account for both at once. In this paper, we propose a simple yet effective method named MGR to solve the two problems simultaneously. The key idea of MGR is to employ multiple generators so that the occurrence stability of real pieces is improved and more meaningful pieces are delivered to the predictor. Empirically, we show that MGR improves the F1 score by up to 20.9% compared to state-of-the-art methods.
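The multi-generator idea from the abstract can be illustrated with a toy sketch. This is not the authors' implementation: the generators here are hand-written scoring heuristics standing in for trained neural selectors, and the predictor is a keyword rule rather than a learned classifier. The point is only the data flow, where several generators each select a rationale from the same input and feed it to a shared predictor:

```python
def generator(tokens, score_fn, k):
    """Toy 'generator': select the k highest-scoring tokens as the rationale.
    In MGR these would be trained neural selectors; score_fn is a stand-in."""
    return sorted(tokens, key=score_fn, reverse=True)[:k]

def predictor(rationale):
    """Toy 'predictor': positive (1) if a sentiment word appears in the rationale."""
    return int(any(t in {"great", "excellent"} for t in rationale))

tokens = "the movie was great but long".split()

# Multiple generators with different selection behaviors (hypothetical heuristics):
score_fns = [
    lambda t: len(t),        # prefer longer tokens
    lambda t: t.count("e"),  # prefer tokens with more 'e's
    lambda t: ord(t[0]),     # prefer tokens by first-letter code
]

# Each generator's rationale goes to the same predictor, as in the MGR setup.
rationales = [generator(tokens, fn, k=2) for fn in score_fns]
predictions = [predictor(r) for r in rationales]
```

Because the generators disagree, only selections containing genuinely informative pieces (here, "great") yield a positive prediction; in MGR, aggregating over multiple generators during training is what stabilizes which pieces the predictor actually learns from.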

Citation (APA)

Liu, W., Wang, H., Wang, J., Li, R., Li, X., Zhang, Y., & Qiu, Y. (2023). MGR: Multi-generator Based Rationalization. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 1, pp. 12771–12787). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.acl-long.715
