Much work has been done recently on learning word embeddings from large corpora, which attempts to find the coordinates of words in a static, high-dimensional semantic space. In reality, such corpora often span a long time period, during which the meanings of many words may have changed. The co-evolution of word meanings may also distort the semantic space, making static embeddings unable to accurately represent the dynamics of semantics. In this paper, we present a novel computational method to capture such changes and to model the evolution of word semantics. Distinct from existing approaches that learn word embeddings independently for each time period and then align them, our method explicitly establishes the stable topological structure of word semantics and identifies surprising changes in the semantic space over time through a principled statistical method. Empirical experiments on large-scale real-world corpora demonstrate the effectiveness of the proposed approach, which outperforms the state of the art by a large margin.
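The "Bayesian surprise" in the paper's title refers to a general information-theoretic notion: the KL divergence between posterior and prior beliefs after observing new data. The sketch below is a minimal, hypothetical illustration of that notion, not the authors' actual model: it assumes a Dirichlet-multinomial belief over a word's co-occurrence counts with a fixed set of semantic neighbors, and scores how surprising a new time slice's counts are. The function names, the Dirichlet setup, and the toy counts are all assumptions made for illustration.

```python
# Illustrative sketch of Bayesian surprise = KL(posterior || prior).
# The Dirichlet-multinomial setup is an assumption for this example,
# not the model described in the paper.
import numpy as np
from scipy.special import gammaln, digamma

def dirichlet_kl(a, b):
    """Closed-form KL divergence KL(Dir(a) || Dir(b)) in nats."""
    a0, b0 = a.sum(), b.sum()
    return (gammaln(a0) - gammaln(a).sum()
            - gammaln(b0) + gammaln(b).sum()
            + ((a - b) * (digamma(a) - digamma(a0))).sum())

def bayesian_surprise(prior_alpha, counts):
    """Surprise from updating a Dirichlet prior with multinomial counts."""
    posterior_alpha = prior_alpha + counts
    return dirichlet_kl(posterior_alpha, prior_alpha)

# Toy example: a word's co-occurrence counts with 4 fixed neighbor words.
prior = np.array([50.0, 30.0, 15.0, 5.0])    # pseudo-counts from earlier time slices
stable = np.array([48.0, 33.0, 14.0, 5.0])   # similar neighborhood in the new slice
shifted = np.array([5.0, 10.0, 20.0, 65.0])  # markedly different neighborhood

print(bayesian_surprise(prior, stable))   # small value: no semantic shift
print(bayesian_surprise(prior, shifted))  # much larger value: candidate shift
```

Under this illustrative setup, a word whose neighborhood stays stable across time slices accumulates little surprise, while a genuine shift in its semantic neighborhood registers as a spike; the paper itself applies this kind of statistical test on top of an explicitly stable topological structure of word semantics.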
Citation: Wu, Z., Li, C., Zhao, Z., Wu, F., & Mei, Q. (2018). Identify shifts of word semantics through Bayesian surprise. In Proceedings of the 41st International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2018) (pp. 825–834). Association for Computing Machinery. https://doi.org/10.1145/3209978.3210040