Training and evaluating improved dependency-based word embeddings


Abstract

Word embeddings have been widely used in many natural language processing tasks. In this paper, we focus on learning word embeddings through selective higher-order relationships in sentences, making the embeddings less sensitive to local context and more accurate in capturing semantic compositionality. We present a novel multi-order dependency-based strategy to compose and represent the context under several essential constraints. To realize selective learning from word contexts, we automatically assign strengths to the different dependencies between co-occurring words during stochastic gradient descent. We evaluate and analyze the proposed approach on several direct and indirect tasks for word embeddings. Experimental results demonstrate that our embeddings are competitive with or better than state-of-the-art methods and significantly outperform other methods in terms of context stability. The dependency weights and representations produced by our embedding model conform to most linguistic characteristics and are valuable for many downstream tasks.
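The abstract leaves the training procedure unspecified, but its core idea, dependency-typed contexts whose per-label strengths are learned jointly with the word vectors during SGD, can be sketched. The snippet below is a minimal, hypothetical illustration, not the authors' code: it performs a skip-gram-style update in which the gradient on each (target, context) pair is scaled by a learned weight for the dependency label linking them, and that weight is itself updated by SGD. All names (vocab, dep_labels, dep_weight, sgd_step) and the tiny toy data are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
dim, lr = 50, 0.025

vocab = {"scientist": 0, "discovers": 1, "star": 2, "telescope": 3}
dep_labels = {"nsubj": 0, "dobj": 1, "nmod": 2}

W = rng.normal(0.0, 0.1, (len(vocab), dim))   # target-word vectors
C = rng.normal(0.0, 0.1, (len(vocab), dim))   # context-word vectors
dep_weight = np.ones(len(dep_labels))         # learned per-dependency strengths

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sgd_step(target, context, dep, label):
    """One weighted skip-gram update for a (target, context, dep) triple.

    `label` is 1 for an observed triple and 0 for a negative sample.
    The pair gradient is scaled by the strength of the dependency type,
    and that strength is itself nudged by how well the pair fits.
    """
    t, c, d = vocab[target], vocab[context], dep_labels[dep]
    dot = W[t] @ C[c]
    err = label - sigmoid(dot)
    grad_t = err * dep_weight[d] * C[c]
    grad_c = err * dep_weight[d] * W[t]
    dep_weight[d] += lr * err * dot           # strengthen useful dependency types
    W[t] += lr * grad_t
    C[c] += lr * grad_c

# first-order arcs plus a collapsed higher-order (two-hop) context
sgd_step("discovers", "scientist", "nsubj", 1)
sgd_step("discovers", "star", "dobj", 1)
sgd_step("scientist", "telescope", "nmod", 1)
sgd_step("discovers", "telescope", "nmod", 0)  # negative sample
print({lbl: round(float(w), 3) for lbl, w in zip(dep_labels, dep_weight)})

After a pass over a corpus, the learned dep_weight values would play the role of the output dependency strengths the abstract refers to; how the paper actually parameterizes and constrains them is not stated here.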

Citation (APA)

Li, C., Li, J., Song, Y., & Lin, Z. (2018). Training and evaluating improved dependency-based word embeddings. In 32nd AAAI Conference on Artificial Intelligence, AAAI 2018 (pp. 5836–5843). AAAI Press. https://doi.org/10.1609/aaai.v32i1.12044
