Explanation-based Finetuning Makes Models More Robust to Spurious Cues


Abstract

Large Language Models (LLMs) are powerful enough to learn correlations between labels and features that are irrelevant to the task, leading to poor generalization on out-of-distribution data. We propose explanation-based finetuning as a general approach to mitigate LLMs' reliance on spurious correlations. Unlike standard finetuning, where the model predicts only the answer given the input, we finetune the model to additionally generate a free-text explanation supporting its answer. To evaluate our method, we finetune the model on artificially constructed training sets containing different types of spurious cues and test it on a test set without these cues. Compared to standard finetuning, our method makes GPT-3 (davinci) remarkably more robust against spurious cues, as measured by the reduction in accuracy drop, across four classification tasks: ComVE (+1.2), CREAK (+9.1), e-SNLI (+15.4), and SBIC (+6.5). This efficacy generalizes across multiple model families and scales, with greater gains for larger models. Finally, our method also works well with model-generated explanations, suggesting it is applicable to datasets without human-written explanations.
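To make the contrast concrete, below is a minimal sketch of how training pairs might be formatted for the two setups the abstract describes, in the prompt/completion style used for GPT-3 finetuning. The field names, templates, and the e-SNLI-style example are illustrative assumptions, not the paper's exact formats.

```python
# Sketch: formatting one example for standard vs. explanation-based finetuning.
# Templates and field names are hypothetical, not the authors' exact ones.

def format_standard(example: dict) -> dict:
    """Standard finetuning: the completion contains only the label."""
    return {
        "prompt": f"{example['input']}\nAnswer:",
        "completion": f" {example['label']}",
    }

def format_with_explanation(example: dict) -> dict:
    """Explanation-based finetuning: the completion contains the label
    followed by a free-text explanation supporting it."""
    return {
        "prompt": f"{example['input']}\nAnswer:",
        "completion": f" {example['label']}\nExplanation: {example['explanation']}",
    }

if __name__ == "__main__":
    # An e-SNLI-style example with a human-written explanation (illustrative).
    ex = {
        "input": ("Premise: A dog runs through the snow.\n"
                  "Hypothesis: An animal is outside."),
        "label": "entailment",
        "explanation": "A dog is an animal, and snow is found outside.",
    }
    print(format_standard(ex))
    print(format_with_explanation(ex))
```

Under this kind of formatting, the explanation target pushes the completion to reference task-relevant features rather than the spurious cue alone, which matches the intuition behind the robustness gains reported in the abstract.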

Cite (APA)

Ludan, J. M., Meng, Y., Nguyen, T., Shah, S., Lyu, Q., Apidianaki, M., & Callison-Burch, C. (2023). Explanation-based Finetuning Makes Models More Robust to Spurious Cues. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 1, pp. 4420–4441). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.acl-long.242
