A Systematic Investigation of Commonsense Knowledge in Large Language Models

Abstract

Language models (LMs) trained on large amounts of data (e.g., Brown et al., 2020; Patwary et al., 2021) have shown impressive performance on many NLP tasks under zero-shot and few-shot setups. Here we aim to better understand the extent to which such models learn commonsense knowledge - a critical component of many NLP applications. We conduct a systematic and rigorous zero-shot and few-shot commonsense evaluation of large pretrained LMs, where we: (i) carefully control for the LMs' ability to exploit potential surface cues and annotation artefacts, and (ii) account for variations in performance that arise from factors unrelated to commonsense knowledge. Our findings highlight the limitations of pre-trained LMs in acquiring commonsense knowledge without task-specific supervision; furthermore, neither using larger models nor few-shot evaluation is sufficient to achieve human-level commonsense performance.
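The zero-shot evaluation protocol the abstract describes typically reduces to scoring each answer candidate of a multiple-choice commonsense question by its likelihood under the LM. Below is a minimal sketch of that idea, assuming a Hugging Face causal LM (GPT-2 stands in here for the much larger models the paper studies); the mean per-token log-likelihood is one common length-normalization used to control for surface cues such as answer length, and the paper's exact scoring variants may differ.

```python
# Sketch of zero-shot multiple-choice scoring with a causal LM.
# Assumptions: GPT-2 as a stand-in model; length-normalized log-likelihood
# as the candidate score. Not the paper's exact protocol.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def candidate_score(context: str, candidate: str) -> float:
    """Mean per-token log-likelihood of `candidate` given `context`."""
    ctx_ids = tokenizer(context, return_tensors="pt").input_ids
    full_ids = tokenizer(context + candidate, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    # Log-prob of each token given its prefix (shift logits by one position).
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
    targets = full_ids[:, 1:]
    token_lp = log_probs.gather(2, targets.unsqueeze(-1)).squeeze(-1)
    # Keep only the candidate's tokens; normalizing by length avoids
    # favouring shorter answers, one known surface cue.
    n_ctx = ctx_ids.shape[1]
    return token_lp[0, n_ctx - 1:].mean().item()

# Candidates start with a space so GPT-2's BPE splits cleanly at the boundary.
question = "Where would you put a plate after washing it? Answer:"
candidates = [" in the cupboard", " on the moon", " inside the oven"]
best = max(candidates, key=lambda c: candidate_score(question, c))
print(best)  # expected: " in the cupboard"
```

A few-shot variant simply prepends solved example question-answer pairs to the context before scoring; the controls the paper emphasizes (answer-only baselines, score normalization) plug into the same scoring function.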

Citation (APA)

Li, X. L., Kuncoro, A., Hoffmann, J., de Masson d’Autume, C., Blunsom, P., & Nematzadeh, A. (2022). A Systematic Investigation of Commonsense Knowledge in Large Language Models. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022 (pp. 11838–11855). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.emnlp-main.812
