Recent interest in distributed vector representations for words has produced an increasingly diverse set of approaches, each with its own strengths and weaknesses. We demonstrate how diverse vector representations may be inexpensively composed into hybrid representations, effectively leveraging the strengths of the individual components, as evidenced by substantial improvements on a standard word analogy task. We further compare these results across training sets of different sizes and find that the advantages are more pronounced when training data is limited. Finally, we explore the relative impact of differences in the learning methods themselves and of the size of the contexts they access.
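The abstract does not spell out the composition operator. One common inexpensive choice is to concatenate each word's vectors from the component models, optionally L2-normalizing each part so that no single model dominates, and then to answer analogy queries with the standard 3CosAdd scoring. The sketch below illustrates that approach under those assumptions; the toy embedding tables, the `hybrid` helper, and the vocabulary are hypothetical stand-ins, not the authors' published code.

```python
# A minimal sketch: build a hybrid word representation by concatenating
# vectors from two component models, then answer an analogy query with
# 3CosAdd. The tiny embedding tables and normalization choice are
# illustrative assumptions, not the paper's exact method.
import numpy as np

# Toy embedding tables standing in for two trained models
# (e.g., word2vec and GloVe); real tables would be loaded from disk.
emb_a = {
    "man":   np.array([1.0, 0.0]),
    "woman": np.array([1.0, 1.0]),
    "king":  np.array([2.0, 0.0]),
    "queen": np.array([2.0, 1.0]),
    "apple": np.array([0.0, -1.0]),
}
emb_b = {
    "man":   np.array([0.0, 1.0, 0.0]),
    "woman": np.array([0.0, 1.0, 1.0]),
    "king":  np.array([1.0, 1.0, 0.0]),
    "queen": np.array([1.0, 1.0, 1.0]),
    "apple": np.array([0.0, 0.0, -1.0]),
}

def hybrid(word, tables):
    """Concatenate the word's vector from each table,
    L2-normalizing each part so no single model dominates."""
    parts = [t[word] / np.linalg.norm(t[word]) for t in tables]
    return np.concatenate(parts)

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def analogy(a, b, c, vocab, tables):
    """3CosAdd: return the word d maximizing cos(d, b - a + c)."""
    target = hybrid(b, tables) - hybrid(a, tables) + hybrid(c, tables)
    candidates = (w for w in vocab if w not in (a, b, c))
    return max(candidates, key=lambda w: cosine(hybrid(w, tables), target))

tables = [emb_a, emb_b]
print(analogy("man", "king", "woman", emb_a, tables))  # -> queen
```

Per-component normalization before concatenation is one way to keep embeddings trained at different scales comparable; whether and how to weight the components is a further tuning choice the abstract leaves open.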
Citation
Garten, J., Sagae, K., Ustun, V., & Dehghani, M. (2015). Combining distributed vector representations for words. In Proceedings of the 1st Workshop on Vector Space Modeling for Natural Language Processing (VS @ NAACL-HLT 2015) (pp. 95–101). Association for Computational Linguistics. https://doi.org/10.3115/v1/w15-1513