A word vector representation based method for new words discovery in massive text

Abstract

The discovery of new words is of great significance to natural language processing for the Chinese language. In recent years, training the words of a corpus into word vector representations with neural network models has shown good performance in capturing the original semantic relationships among words. Accordingly, word vector representations are introduced here into the discovery of new words in Chinese text. In this work, we propose a new unsupervised method for discovering new words based on the n-gram method. To that end, we first train the words in the corpus into a word vector space, then combine elements of the corpus into candidate new words. Finally, noise candidates are dropped based on the similarity between the two elements in the word vector space. Compared with classical unsupervised methods such as mutual information and adjacent entropy, the experimental results show that the proposed method has a clear performance advantage in discovering new words.
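The abstract describes a three-step pipeline: train element vectors, form n-gram candidates, and filter candidates by vector similarity. The sketch below illustrates that pipeline in Python; the use of gensim's Word2Vec, the bigram candidate generation, the cosine-similarity threshold, and the toy corpus are all assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of the pipeline described in the abstract.
# Assumptions: gensim Word2Vec as the vector-training step, adjacent
# bigrams as candidates, and a cosine-similarity threshold as the filter.
from gensim.models import Word2Vec
import numpy as np

# Toy pre-segmented corpus: each sentence is a list of elements.
corpus = [
    ["深度", "学习", "模型", "训练"],
    ["自然", "语言", "处理", "深度", "学习"],
    ["新词", "发现", "依赖", "语言", "模型"],
]

# Step 1: train the elements of the corpus into a word vector space.
model = Word2Vec(sentences=corpus, vector_size=50, window=2, min_count=1, seed=1)

# Step 2: combine adjacent elements into bigram candidates (an n-gram pass).
candidates = {(a, b) for sent in corpus for a, b in zip(sent, sent[1:])}

def cosine(u, v):
    # Cosine similarity between two vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

# Step 3: drop noise candidates whose constituent elements are not
# similar enough in the learned vector space (threshold is illustrative).
THRESHOLD = 0.0
new_words = [a + b for a, b in candidates
             if cosine(model.wv[a], model.wv[b]) >= THRESHOLD]
print(new_words)
```

On a realistic corpus the candidate set would cover higher-order n-grams as well, and the similarity threshold would be tuned on held-out data rather than fixed in advance.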

Cite

APA

Du, Y., Yuan, H., & Qian, Y. (2016). A word vector representation based method for new words discovery in massive text. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10102, pp. 76–88). Springer Verlag. https://doi.org/10.1007/978-3-319-50496-4_7
