Exploring Strategies for Generalizable Commonsense Reasoning with Pre-trained Models

Citations: 12
Readers (Mendeley): 55

Abstract

Commonsense reasoning benchmarks have been largely solved by fine-tuning language models. The downside is that fine-tuning may cause models to overfit to task-specific data and thereby forget knowledge gained during pre-training. Recent works therefore propose only lightweight model updates, on the premise that models may already possess useful knowledge from past experience; however, it remains unclear which parts of a model should be refined, and to what extent, for a given task. In this paper, we investigate what models learn from commonsense reasoning datasets. We measure the impact of three different adaptation methods on the generalization and accuracy of models. Our experiments with two models show that fine-tuning performs best, by learning both the content and the structure of the task, but suffers from overfitting and limited generalization to novel answers. We observe that alternative adaptation methods like prefix-tuning achieve comparable accuracy, but generalize better to unseen answers and are more robust to adversarial splits.

Citation (APA)

Ma, K., Ilievski, F., Francis, J., Ozaki, S., Nyberg, E., & Oltramari, A. (2021). Exploring Strategies for Generalizable Commonsense Reasoning with Pre-trained Models. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP 2021) (pp. 5474–5483). Association for Computational Linguistics. https://doi.org/10.18653/v1/2021.emnlp-main.445
