Attention-Based Bi-LSTM for Chinese Named Entity Recognition


Abstract

As integral parts of deep learning, the attention mechanism and bi-directional long short-term memory (Bi-LSTM) are widely used in natural language processing (NLP), and their effectiveness is well recognized. This paper applies an attention-based Bi-LSTM approach to Chinese named entity recognition (NER). Using word2vec, we compile vectorized dictionaries and train text vectors with Bi-LSTM models; these vectors are then multiplied with the output feature vectors of the attention model. Finally, softmax classifies the resulting vectors to perform Chinese NER. Across four configurations, our experiments examine how the domain relevance of Chinese character vectors, phrase vectors, and vectorized datasets affects the effectiveness of Chinese NER. The experimental results show a precision (P), recall (R), and F1-score (F1) of 97.51%, 95.33%, and 96.41%, respectively.
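The pipeline sketched in the abstract (pretrained embeddings → Bi-LSTM → element-wise attention weighting → softmax tag classification) can be illustrated with a minimal PyTorch model. This is a hypothetical sketch, not the authors' implementation: the class and parameter names are assumptions, a trainable `nn.Embedding` stands in for the word2vec vectors, and the attention layer is a simple per-position scoring linear layer.

```python
import torch
import torch.nn as nn

class AttnBiLSTMTagger(nn.Module):
    """Hypothetical sketch of an attention-based Bi-LSTM sequence tagger:
    embeddings -> Bi-LSTM -> attention weighting -> per-token softmax."""

    def __init__(self, vocab_size, embed_dim, hidden_dim, num_tags):
        super().__init__()
        # Stand-in for word2vec vectors; real pretrained vectors could be
        # loaded with nn.Embedding.from_pretrained.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.bilstm = nn.LSTM(embed_dim, hidden_dim,
                              batch_first=True, bidirectional=True)
        # One attention score per position over the Bi-LSTM features.
        self.attn = nn.Linear(2 * hidden_dim, 1)
        self.classifier = nn.Linear(2 * hidden_dim, num_tags)

    def forward(self, token_ids):
        h, _ = self.bilstm(self.embed(token_ids))        # (B, T, 2H)
        weights = torch.softmax(self.attn(h), dim=1)     # (B, T, 1)
        h = h * weights                                  # multiply features by attention
        return torch.softmax(self.classifier(h), dim=-1) # (B, T, num_tags)
```

A forward pass over a batch of token-ID sequences yields a per-token probability distribution over entity tags (e.g. BIO labels), from which the predicted tag is the argmax at each position.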

Citation (APA)

Zhang, K., Ren, W., & Zhang, Y. (2018). Attention-based Bi-LSTM for Chinese named entity recognition. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11173 LNAI, pp. 643–652). Springer Verlag. https://doi.org/10.1007/978-3-030-04015-4_56
