Abstract
Large neural language models (LLMs) can be powerful tools for research in lexical semantics. We illustrate this potential using the English verb break, which has numerous senses and appears in a wide range of syntactic frames. We show that LLMs capture known sense distinctions and can be used to identify informative new sense combinations for further analysis. More generally, we argue that LLMs are aligned with lexical semantic theories in providing high-dimensional, contextually modulated representations, while their lack of discrete features and their dependence on usage-based data offer a genuinely new perspective on traditional problems in lexical semantics.
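As an informal illustration of the kind of analysis the abstract describes, the sketch below compares contextual embeddings of break across two senses and syntactic frames. This is not the authors' code: it assumes the Hugging Face transformers library and the bert-base-uncased checkpoint, chosen purely for convenience, and the sentences are invented examples.

import torch
from transformers import AutoModel, AutoTokenizer

# Assumed setup: any model exposing contextual hidden states would do;
# bert-base-uncased is used here only because it is small and standard.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def break_embedding(sentence: str) -> torch.Tensor:
    """Return the final-layer hidden state for the token 'break'."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]  # (seq_len, dim)
    tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0])
    return hidden[tokens.index("break")]

# Two uses of break in different senses and syntactic frames.
caused_change = break_embedding("The hammer will break the window.")
pause_sense = break_embedding("Let us break for lunch.")

similarity = torch.nn.functional.cosine_similarity(
    caused_change, pause_sense, dim=0
)
print(f"Cosine similarity across senses: {similarity.item():.3f}")

If contextual representations track sense distinctions, cross-sense pairs like these should generally score lower than pairs of sentences that use break in the same sense.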
Citation
Petersen, E., & Potts, C. (2023). Lexical Semantics with Large Language Models: A Case Study of English break. In Findings of the Association for Computational Linguistics: EACL 2023 (pp. 490–511). Association for Computational Linguistics. https://doi.org/10.18653/v1/2023.findings-eacl.36