Large Scale Substitution-based Word Sense Induction

Abstract

We present a word-sense induction method based on pre-trained masked language models (MLMs), which can cheaply scale to large vocabularies and large corpora. The result is a corpus that is sense-tagged according to a corpus-derived sense inventory, where each sense is associated with indicative words. Evaluation of English Wikipedia sense-tagged with our method shows that both the induced senses and the per-instance sense assignments are of high quality, even when compared to WSD methods such as Babelfy. Furthermore, by training a static word-embedding algorithm on the sense-tagged corpus, we obtain high-quality static senseful embeddings, which outperform existing senseful-embedding methods on the WiC dataset and on a new outlier-detection dataset we developed. The data-driven nature of the algorithm allows us to induce corpus-specific senses that may not appear in standard sense inventories, as we demonstrate with a case study on the scientific domain.
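The core idea can be sketched in a few lines: each occurrence of a target word is represented by the substitute words an MLM proposes for it in context, and occurrences are then clustered into senses by substitute overlap, with each cluster's most frequent substitutes serving as its indicative words. The sketch below is illustrative only, not the paper's implementation: the substitute sets are given directly (in practice they would come from an MLM such as BERT), and the clustering is a simple connected-components pass over a Jaccard-similarity graph with a hypothetical `threshold` parameter.

```python
from collections import Counter, defaultdict

def jaccard(a, b):
    """Jaccard similarity between two substitute sets."""
    return len(a & b) / len(a | b)

def induce_senses(substitute_sets, threshold=0.2):
    """Cluster word instances by the overlap of their MLM substitutes.

    substitute_sets: one set of substitute words per corpus instance
    of the target word.  Returns (labels, indicative), where labels[i]
    is the sense cluster of instance i and indicative maps each cluster
    to its most frequent substitutes (the sense's indicative words).
    """
    n = len(substitute_sets)
    parent = list(range(n))  # union-find for connected components

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    # link instances whose substitute sets overlap sufficiently
    for i in range(n):
        for j in range(i + 1, n):
            if jaccard(substitute_sets[i], substitute_sets[j]) >= threshold:
                parent[find(i)] = find(j)

    labels = [find(i) for i in range(n)]

    # indicative words: most common substitutes within each sense cluster
    counts = defaultdict(Counter)
    for lab, subs in zip(labels, substitute_sets):
        counts[lab].update(subs)
    indicative = {lab: [w for w, _ in c.most_common(3)]
                  for lab, c in counts.items()}
    return labels, indicative
```

On a toy "bank" example, instances whose substitutes are {river, shore, water} and {stream, river, water} land in one cluster, while {money, loan, credit} and {loan, money, deposit} land in another; the real method replaces both the substitute generation and the clustering with the scalable components described in the paper.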

Citation (APA)

Eyal, M., Sadde, S., Taub-Tabib, H., & Goldberg, Y. (2022). Large Scale Substitution-based Word Sense Induction. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 1, pp. 4738–4752). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.acl-long.325
