Limits for learning with Language Models

Abstract

With the advent of large language models (LLMs), the trend in NLP has been to train LLMs on vast amounts of data to solve diverse language understanding and generation tasks. The list of LLM successes is long and varied. Nevertheless, several recent papers provide empirical evidence that LLMs fail to capture important aspects of linguistic meaning. Focusing on universal quantification, we provide a theoretical foundation for these empirical findings by proving that LLMs cannot learn certain fundamental semantic properties, including semantic entailment and consistency as they are defined in formal semantics. More generally, we show that LLMs are unable to learn concepts beyond the first level of the Borel hierarchy, which imposes severe limits on the ability of LMs, both large and small, to capture many aspects of linguistic meaning. This means that LLMs will continue to operate without formal guarantees on tasks that require entailments and deep linguistic understanding.
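To make the kind of semantic property at stake concrete, the following is a minimal LaTeX sketch using standard model-theoretic definitions, not the paper's exact construction. The denotation brackets, the enumeration of instances a_1, a_2, ..., and the identification of a universally quantified claim with a countable intersection are illustrative assumptions for exposition only.

\documentclass{article}
\usepackage{amsmath, amssymb}
\usepackage{stmaryrd} % provides \llbracket, \rrbracket
\begin{document}

% Illustrative sketch only: standard definitions from formal semantics,
% where $\llbracket\cdot\rrbracket$ maps a formula to the set of models
% (or admissible continuations) that satisfy it.
Entailment and consistency as set-theoretic conditions on denotations:
\[
  \phi \models \psi \;\iff\; \llbracket \phi \rrbracket \subseteq \llbracket \psi \rrbracket,
  \qquad
  \Gamma \text{ is consistent} \;\iff\; \bigcap_{\phi \in \Gamma} \llbracket \phi \rrbracket \neq \emptyset .
\]

% A universally quantified claim over an unbounded domain $a_1, a_2, \ldots$ amounts to a
% countable intersection of conditions; on the abstract's account, such concepts lie beyond
% the first level of the Borel hierarchy and hence outside what LLMs can learn.
\[
  \llbracket \forall x\, P(x) \rrbracket \;=\; \bigcap_{n \in \mathbb{N}} \llbracket P(a_n) \rrbracket .
\]

\end{document}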

Citation (APA)

Asher, N., Bhar, S., Chaturvedi, A., Hunter, J., & Paul, S. (2023). Limits for learning with language models. In Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM 2023) (pp. 236–248). Association for Computational Linguistics. https://doi.org/10.18653/v1/2023.starsem-1.22
