Abstract
Learning word embeddings has received significant attention recently. Often, word embeddings are learned in an unsupervised manner from a large collection of text. The genre of the text typically plays an important role in the effectiveness of the resulting embeddings. How to effectively train word embedding models using data from different domains remains an underexplored problem. In this paper, we present a simple yet effective method for learning word embeddings based on text from different domains. We demonstrate the effectiveness of our approach through extensive experiments on various downstream NLP tasks.
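The abstract does not spell out the regularizer, so the following is only a hedged sketch of one generic way such cross-domain regularization is often set up: each domain keeps its own embedding matrix, and an L2 penalty pulls the two vectors for a shared word toward each other. All names (`W_src`, `W_tgt`, `lam`) are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

# Illustrative sketch only: the paper's exact objective is not given in the
# abstract. Here, embeddings for the same word in two domains are tied by an
# L2 penalty lam/2 * sum ||w_src - w_tgt||^2 over the shared vocabulary.

rng = np.random.default_rng(0)
vocab, dim = 5, 4
W_src = rng.normal(size=(vocab, dim))   # source-domain embeddings (assumed)
W_tgt = rng.normal(size=(vocab, dim))   # target-domain embeddings (assumed)

lam, lr = 0.5, 0.1  # regularization strength and learning rate (assumed)

def reg_loss(W_a, W_b, lam):
    # Cross-domain penalty: lam/2 * squared Frobenius distance.
    return 0.5 * lam * np.sum((W_a - W_b) ** 2)

before = reg_loss(W_src, W_tgt, lam)

# One gradient step on the regularizer alone; in a full trainer the
# per-domain skip-gram (or similar) losses would be added to the gradient.
grad = lam * (W_src - W_tgt)
W_src -= lr * grad
W_tgt += lr * grad

after = reg_loss(W_src, W_tgt, lam)
assert after < before  # the penalty pulls the two domains closer
```

In this toy setup each step scales the difference `W_src - W_tgt` by a factor of `1 - 2*lr*lam`, so the penalty shrinks geometrically while the (omitted) task losses would keep each domain's embeddings useful for its own text.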
CITATION STYLE
Yang, W., Lu, W., & Zheng, V. W. (2017). A simple regularization-based algorithm for learning cross-domain word embeddings. In EMNLP 2017 - Conference on Empirical Methods in Natural Language Processing, Proceedings (pp. 2898–2904). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/d17-1312