Measuring the Knowledge Acquisition-Utilization Gap in Pre-trained Language Models

Abstract

While pre-trained language models (PLMs) have shown evidence of acquiring vast amounts of knowledge, it remains unclear how much of this parametric knowledge is actually usable for downstream tasks. We propose a systematic framework to measure parametric knowledge utilization in PLMs. Our framework first extracts knowledge from a PLM's parameters and subsequently constructs a downstream task around this extracted knowledge. Performance on this task thus depends exclusively on utilizing the model's possessed knowledge, avoiding confounding factors such as insufficient training signal. Employing this framework, we study the factual knowledge of PLMs and measure its utilization across PLMs ranging from 125M to 13B parameters. We observe that: (1) PLMs exhibit two gaps, in acquired and in utilized knowledge; (2) they show limited robustness in utilizing knowledge under distribution shifts; and (3) larger models close the gap in acquired knowledge, but the gap in utilized knowledge remains.
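
To make the two-stage framework concrete, the sketch below shows one plausible way to implement it with Hugging Face transformers: stage one probes the model with cloze-style factual prompts and keeps only the facts the model can already produce (its "acquired" knowledge); stage two builds the evaluation set from exactly those facts, so task performance isolates knowledge utilization. The model name, fact triples, and helper function are hypothetical illustrations under these assumptions, not the authors' implementation.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical setup: any causal LM works; OPT-125M sits at the small end
# of the 125M-13B range studied in the paper.
model_name = "facebook/opt-125m"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

# Hypothetical (subject, template, object) fact triples for illustration.
facts = [
    ("Paris", "{} is the capital of", "France"),
    ("Rome", "{} is the capital of", "Italy"),
]

def knows_fact(subject, template, obj):
    """Stage 1: count a fact as 'acquired' if greedy decoding recovers it."""
    prompt = template.format(subject)
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        output = model.generate(**inputs, max_new_tokens=3, do_sample=False)
    # Decode only the newly generated tokens, not the prompt.
    completion = tokenizer.decode(output[0][inputs["input_ids"].shape[1]:])
    return obj.lower() in completion.lower()

# Stage 2: construct the downstream task only from facts the model itself
# produced, so failures on the task reflect a utilization gap rather than
# missing knowledge or insufficient signal.
acquired = [f for f in facts if knows_fact(*f)]
print(f"{len(acquired)}/{len(facts)} facts extracted; build the task from these")
```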

Citation (APA)

Kazemnejad, A., Rezagholizadeh, M., Parthasarathi, P., & Chandar, S. (2023). Measuring the Knowledge Acquisition-Utilization Gap in Pre-trained Language Models. In Findings of the Association for Computational Linguistics: EMNLP 2023 (pp. 4305–4319). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.findings-emnlp.285
