Are Pretrained Language Models Symbolic Reasoners Over Knowledge?

34 Citations · 135 Mendeley Readers

Abstract

How can pretrained language models (PLMs) learn factual knowledge from the training set? We investigate the two most important mechanisms: reasoning and memorization. Prior work has attempted to quantify the number of facts PLMs learn, but we present, using synthetic data, the first study that investigates the causal relation between facts present in training and facts learned by the PLM. For reasoning, we show that PLMs seem to learn to apply some symbolic reasoning rules correctly but struggle with others, including two-hop reasoning. Further analysis suggests that even the application of learned reasoning rules is flawed. For memorization, we identify schema conformity (facts systematically supported by other facts) and frequency as key factors for its success.
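To make the experimental setup concrete, here is a minimal sketch (not the authors' actual data generator; entity and relation names are hypothetical) of how a synthetic corpus can probe two-hop reasoning: premise facts (a, r1, b) and (b, r2, c) go into training, and the composed conclusions (a, r3, c) are partially withheld to test whether a model applies the composition rule rather than merely memorizing.

```python
import random

# Hypothetical vocabulary of made-up entities; the paper's actual synthetic
# data generator is not reproduced here.
ENTITIES = [f"ent{i}" for i in range(20)]


def two_hop_dataset(n_chains=10, seed=0):
    """Build premise facts (a r1 b), (b r2 c) and conclusions (a r3 c).

    A model that has learned the composition rule r1 o r2 -> r3 from the
    training examples should predict the withheld conclusions even though
    it never saw them verbatim.
    """
    rng = random.Random(seed)
    premises, conclusions = [], []
    for _ in range(n_chains):
        a, b, c = rng.sample(ENTITIES, 3)
        premises.append(f"{a} r1 {b}")
        premises.append(f"{b} r2 {c}")
        conclusions.append(f"{a} r3 {c}")
    return premises, conclusions


if __name__ == "__main__":
    premises, conclusions = two_hop_dataset()
    # Train on all premises plus half of the conclusions (so the rule is
    # demonstrated), then probe the model on the withheld conclusions.
    split = len(conclusions) // 2
    train_facts = premises + conclusions[:split]
    probe_facts = conclusions[split:]
    print(len(train_facts), "training facts;", len(probe_facts), "held-out conclusions to probe")
```

Under this kind of setup, memorization alone cannot explain correct predictions on the withheld conclusions, which is what allows the study to separate rule application from fact recall.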

Citation (APA)
Kassner, N., Krojer, B., & Schütze, H. (2020). Are Pretrained Language Models Symbolic Reasoners Over Knowledge? In CoNLL 2020 - 24th Conference on Computational Natural Language Learning, Proceedings of the Conference (pp. 552–564). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.conll-1.45
