Recent work using word embeddings to model semantic categorization has indicated that static models outperform the more recent contextual class of models (Majewska et al., 2021). In this paper, we consider polysemy as a possible confounding factor, comparing sense-level embeddings with previously studied static embeddings on both coarse- and fine-grained categorization tasks. We find that the effect of polysemy depends on how semantic categorization is defined: while sense-level embeddings dramatically outperform static embeddings in predicting coarse-grained categories derived from a word sorting task, the two perform comparably in predicting fine-grained categories derived from context-free similarity judgments. Our findings highlight the different processes underlying human behavior on different types of semantic tasks.
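To make the two evaluation settings concrete, here is a minimal Python sketch (not the authors' pipeline) of how embeddings can be scored against human categorization data: for the coarse-grained case, clustering word vectors and comparing the induced clusters to gold categories; for the fine-grained case, correlating pairwise cosine similarities with human similarity ratings. The word list, category labels, word pairs, human ratings, and random vectors are all invented placeholders; a real experiment would load static vectors (e.g., fastText) or sense-level vectors in their place.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical data: six words, gold coarse-grained categories, and
# stand-in 300-d vectors drawn at random purely for illustration.
words = ["dog", "cat", "horse", "hammer", "saw", "drill"]
gold = [0, 0, 0, 1, 1, 1]           # animals vs. tools
rng = np.random.default_rng(0)
emb = rng.normal(size=(len(words), 300))

# Coarse-grained evaluation: induce categories by clustering the
# embedding space, then score agreement with the human categories
# using a chance-corrected measure.
pred = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(emb)
print("coarse-grained (adjusted Rand index):", adjusted_rand_score(gold, pred))

# Fine-grained evaluation: correlate pairwise embedding similarities
# with (here invented) human similarity judgments for the same pairs.
pairs = [(0, 1), (0, 3), (1, 4), (2, 5), (3, 4)]
human = [0.9, 0.1, 0.2, 0.1, 0.8]   # placeholder ratings
sims = cosine_similarity(emb)
model = [sims[i, j] for i, j in pairs]
print("fine-grained (Spearman rho):", spearmanr(human, model).correlation)
```

With random vectors both scores hover near chance; the point is only the shape of the two evaluations, which reward different properties of the embedding space.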
Soper, E., & Koenig, J. P. (2022). When polysemy matters: Modeling semantic categorization with word embeddings. In *SEM 2022 – 11th Joint Conference on Lexical and Computational Semantics, Proceedings of the Conference (pp. 123–131). Association for Computational Linguistics. https://doi.org/10.18653/v1/2022.starsem-1.10