Improving word vector model with part-of-speech and dependency grammar information


Abstract

Part-of-speech (POS) tagging and dependency grammar (DG) parsing are basic components of natural language processing. However, current word vector models have not made full use of POS and DG information, and hence their performance is limited to some extent. The authors first put forward the concept of a POS vector and then, based on continuous bag-of-words (CBOW), constructed four models (CBOW + P, CBOW + PW, CBOW + G, and CBOW + G + P) to incorporate POS and DG information into word vectors. The CBOW + P and CBOW + PW models are based on POS tagging, the CBOW + G model is based on DG parsing, and the CBOW + G + P model is based on both. POS information is integrated into the training of word vectors through the POS vector, addressing the problem that POS similarity is difficult to measure; a POS vector correlation coefficient and a distance weighting function are used to train the POS vectors alongside the word vectors. DG information is used to correct the information loss caused by fixed context windows, and a dependency relation weight is used to measure differences between dependency relations. Experiments demonstrated the superior performance of their models, while the time complexity remains the same as that of the CBOW base model.
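To make the idea concrete, the context averaging in a CBOW + P-style model can be sketched as follows. This is a minimal illustration, not the authors' exact formulation: the form of the distance weighting function, the mixing coefficient `alpha`, and all variable names here are assumptions for exposition.

```python
import numpy as np

rng = np.random.default_rng(0)

V, T, d = 10, 4, 8           # vocab size, POS-tag count, embedding dim
W = rng.normal(size=(V, d))  # word vectors
P = rng.normal(size=(T, d))  # POS vectors, one per tag
U = rng.normal(size=(V, d))  # output (prediction) vectors

def distance_weight(offset):
    """Down-weight context words farther from the target (assumed 1/|i| form)."""
    return 1.0 / abs(offset)

def cbow_p_hidden(context, alpha=0.5):
    """Hidden layer: average of word vectors, each augmented by its POS vector.

    `context` is a list of (word_id, pos_id, offset) triples, where
    `offset` is the position relative to the target word.
    """
    vecs = [distance_weight(off) * (W[w] + alpha * P[p])
            for w, p, off in context]
    return np.mean(vecs, axis=0)

def predict(context):
    """Softmax distribution over the vocabulary for the target word."""
    h = cbow_p_hidden(context)
    scores = U @ h
    e = np.exp(scores - scores.max())
    return e / e.sum()

# Toy context: (word_id, pos_id, offset from target)
probs = predict([(1, 0, -2), (3, 1, -1), (5, 2, 1), (7, 3, 2)])
```

Because the POS vectors are simply added into the same averaging step that plain CBOW already performs, the per-example cost stays linear in the context size and embedding dimension, which is consistent with the abstract's claim that time complexity matches the CBOW base model.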

Citation (APA)

Deng, C., Lai, G., & Deng, H. (2020). Improving word vector model with part-of-speech and dependency grammar information. CAAI Transactions on Intelligence Technology, 5(4), 260–267. https://doi.org/10.1049/trit.2020.0055
