Word embeddings allow natural language processing systems to share statistical information across related words. These embeddings are typically based on distributional statistics, making it difficult for them to generalize to rare or unseen words. We propose to improve word embeddings by incorporating morphological information, capturing shared sub-word features. Unlike previous work that constructs word embeddings directly from morphemes, we combine morphological and distributional information in a unified probabilistic framework, in which the word embedding is a latent variable. The morphological information provides a prior distribution on the latent word embeddings, which in turn condition a likelihood function over an observed corpus. This approach yields improvements both on intrinsic word similarity evaluations and on the downstream task of part-of-speech tagging.
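Schematically, the generative story described above can be written as follows. This is an illustrative sketch only: the Gaussian form of the prior, the prior mean as a sum of morpheme embeddings u_m, and the autoregressive form of the likelihood are assumptions of this rendering, not claims about the paper's exact parameterization. For a word w with morpheme set M_w and latent embedding b_w over a corpus w_{1:T}:

  p(\mathbf{b}_w \mid \mathcal{M}_w) = \mathcal{N}\Big( \sum_{m \in \mathcal{M}_w} \mathbf{u}_m,\ \sigma^2 I \Big)   (morphological prior)

  p(w_{1:T} \mid \{\mathbf{b}_w\}) = \prod_{t=1}^{T} p\big(w_t \mid \mathbf{b}_{w_1}, \ldots, \mathbf{b}_{w_{t-1}}\big)   (corpus likelihood)

Under any model of this shape, the posterior over b_w balances the morpheme-based prior against distributional evidence: rare or unseen words fall back on the prior, while frequent words are dominated by the likelihood, which is consistent with the generalization claim in the abstract.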
Bhatia, P., Guthrie, R., & Eisenstein, J. (2016). Morphological priors for probabilistic neural word embeddings. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP) (pp. 490–500). Association for Computational Linguistics. https://doi.org/10.18653/v1/d16-1047