A Comprehensive Analysis of PMI-based Models for Measuring Semantic Differences

Abstract

The task of detecting words whose meanings differ across corpora is mainly addressed with word representations such as word2vec or BERT. However, in real-world settings where linguists and sociologists apply these techniques, computational resources are typically limited. In this paper, we extend an existing simultaneously optimized PMI-based model that can be trained on a CPU to perform this task. Experimental results show that the extended models achieve results comparable or superior to strong baselines on English corpora and SemEval-2020 Task 1, as well as on Japanese corpora. Furthermore, we compare the training time of each model and conduct a comprehensive analysis of the Japanese corpora.
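To make the underlying idea concrete, here is a minimal sketch of one common PMI-based approach to this task (an illustrative baseline, not the authors' specific model): build a positive PMI (PPMI) context vector for a word in each corpus, then score semantic difference as one minus the cosine similarity between the two vectors. All function names and the toy corpora below are invented for illustration.

```python
import math
from collections import Counter

def ppmi_vectors(sentences, window=2):
    """Map each word to a sparse PPMI context vector: {context: ppmi}."""
    word_counts = Counter()
    pair_counts = Counter()
    for sent in sentences:
        word_counts.update(sent)
        for i, w in enumerate(sent):
            # Symmetric context window around position i.
            for c in sent[max(0, i - window):i] + sent[i + 1:i + 1 + window]:
                pair_counts[(w, c)] += 1
    total_pairs = sum(pair_counts.values())
    total_words = sum(word_counts.values())
    vecs = {}
    for (w, c), n in pair_counts.items():
        # PMI(w, c) = log p(w, c) / (p(w) p(c)); keep only positive values.
        pmi = math.log((n / total_pairs) /
                       ((word_counts[w] / total_words) *
                        (word_counts[c] / total_words)))
        if pmi > 0:
            vecs.setdefault(w, {})[c] = pmi
    return vecs

def cosine(u, v):
    """Cosine similarity between two sparse vectors (dicts)."""
    dot = sum(u[k] * v[k] for k in set(u) & set(v))
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def semantic_difference(word, corpus_a, corpus_b, window=2):
    """Higher scores suggest the word is used differently across corpora."""
    va = ppmi_vectors(corpus_a, window).get(word, {})
    vb = ppmi_vectors(corpus_b, window).get(word, {})
    return 1.0 - cosine(va, vb)
```

For example, a word that keeps its contexts across the two corpora ("banana" below) should score lower than one whose contexts change entirely ("apple"). Note that unlike this sketch, which builds the two PPMI matrices independently, the simultaneously optimized model referenced in the abstract learns representations for both corpora jointly.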

Citation (APA)

Aida, T., Komachi, M., Ogiso, T., Takamura, H., & Mochihashi, D. (2021). A Comprehensive Analysis of PMI-based Models for Measuring Semantic Differences. In Proceedings of the 35th Pacific Asia Conference on Language, Information and Computation, PACLIC 2021 (pp. 21–31). Association for Computational Linguistics (ACL). https://doi.org/10.5715/jnlp.30.275
