Word embeddings encode the semantic meanings of words into low-dimensional word vectors. In most word embeddings, one cannot interpret the meaning of any specific dimension of those word vectors. Nonnegative matrix factorization (NMF) has been proposed to learn interpretable word embeddings via nonnegativity constraints. However, NMF methods suffer from scalability and memory issues because they must maintain a global matrix for learning. To alleviate this challenge, we propose online learning of interpretable word embeddings from streaming text data. Experiments show that our model consistently outperforms state-of-the-art word embedding methods in both representation ability and interpretability. The source code of this paper can be obtained from http://github.com/skTim/OIWE.
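To make the contrast concrete, the following is a minimal sketch of the offline NMF baseline the abstract argues against: factoring a word-context co-occurrence matrix `X` into nonnegative factors `W` (interpretable word embeddings) and `H` with the classic multiplicative update rules. The matrix `X` here is a random stand-in, not real corpus statistics; the point is that the full vocabulary-by-vocabulary matrix must be held in memory, which is the scalability issue that motivates the online approach.

```python
import numpy as np

# Toy NMF sketch (multiplicative updates, Lee & Seung style).
# X is a hypothetical stand-in for a word-context co-occurrence matrix;
# in a real setting it would be V x V for vocabulary size V and must be
# kept in memory as a whole -- the "global matrix" the abstract mentions.
rng = np.random.default_rng(0)
V, k = 50, 5                       # tiny vocabulary and embedding size
X = rng.random((V, V))             # nonnegative "co-occurrence" matrix
W = rng.random((V, k))             # word embeddings (rows are words)
H = rng.random((k, V))             # context factors

eps = 1e-9                         # avoid division by zero
for _ in range(200):
    # standard multiplicative updates; both factors stay nonnegative
    H *= (W.T @ X) / (W.T @ W @ H + eps)
    W *= (X @ H.T) / (W @ H @ H.T + eps)

# nonnegativity is preserved by construction, so each dimension of W
# can be read as an additive (hence more interpretable) component
assert (W >= 0).all() and (H >= 0).all()
print(np.linalg.norm(X - W @ H))   # reconstruction error
```

The online method proposed in the paper avoids materializing `X` by updating embeddings from streaming text instead, while keeping the nonnegativity that makes dimensions interpretable.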
Luo, H., Liu, Z., Luan, H., & Sun, M. (2015). Online learning of interpretable word embeddings. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP 2015) (pp. 1687–1692). Association for Computational Linguistics. https://doi.org/10.18653/v1/d15-1196