Abstract
In psycholinguistics, semantic attraction is a sentence processing phenomenon in which a given argument violates the selectional requirements of a verb, but this violation is not perceived by comprehenders due to its attraction to another noun in the same sentence, which is syntactically unrelated but semantically sound. In our study, we use autoregressive language models to compute the sentence-level and the target phrase-level Surprisal scores of a psycholinguistic dataset on semantic attraction. Our results show that the models are sensitive to semantic attraction, leading to reduced Surprisal scores, although none of them perfectly matches the human behavioral patterns.
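The Surprisal measure the abstract refers to is the negative log-probability of a word given its preceding context, summed over a span for phrase- or sentence-level scores. A minimal sketch follows; the toy probability table and the example continuations are hypothetical stand-ins for a real autoregressive model's next-token distribution (the study uses actual language models, not a lookup table).

```python
import math

# Toy next-word probabilities, standing in for an autoregressive LM's
# softmax output. The numbers and the example items are illustrative
# assumptions, not figures from the paper.
toy_lm = {
    ("the", "hearty", "meal", "was"): {"devoured": 0.20, "devouring": 0.01},
}

def surprisal(context, word, lm=toy_lm):
    """Surprisal in bits of `word` given `context`: -log2 P(word | context)."""
    return -math.log2(lm[tuple(context)][word])

def span_surprisal(pairs, lm=toy_lm):
    """Phrase- or sentence-level score: the sum of per-word surprisals."""
    return sum(surprisal(ctx, w, lm) for ctx, w in pairs)

ctx = ["the", "hearty", "meal", "was"]
# The less expected continuation receives the higher surprisal.
print(surprisal(ctx, "devoured"))   # ~2.32 bits
print(surprisal(ctx, "devouring"))  # ~6.64 bits
```

With a real model, the per-token probabilities would come from the model's predicted distribution at each position; the aggregation into sentence-level and target-phrase-level scores works the same way.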
Citation
Cong, Y., Chersoni, E., Hsu, Y. Y., & Lenci, A. (2023). Are Language Models Sensitive to Semantic Attraction? A Study on Surprisal. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 141–148). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.starsem-1.13