Traditional methods for named entity recognition (NER) require heavy feature engineering to achieve high performance. We propose a novel neural network architecture for NER that detects word features automatically, without feature engineering. Our approach takes word embeddings as input, feeds them into a bidirectional long short-term memory (B-LSTM) network to model the context within a sentence, and outputs the NER results. This study extends the neural network language model with a B-LSTM, which outperforms other deep neural network models on NER tasks. Experimental results show that the B-LSTM with word embeddings trained on a large corpus achieves the highest F-score, 0.9247, outperforming state-of-the-art methods based on feature engineering.
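The pipeline the abstract describes (embedding lookup, forward and backward LSTM passes over the sentence, a per-token tag score) can be sketched as follows. This is a minimal NumPy illustration under assumed toy dimensions, not the authors' implementation: all parameter shapes, the random initialization, and the argmax decoding step are assumptions for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_pass(xs, W, U, b, hidden):
    """Run a single-direction LSTM over a list of input vectors."""
    h, c = np.zeros(hidden), np.zeros(hidden)
    outs = []
    for x in xs:
        z = W @ x + U @ h + b                  # all four gate pre-activations
        i, f, o, g = np.split(z, 4)
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
        h = sigmoid(o) * np.tanh(c)
        outs.append(h)
    return outs

rng = np.random.default_rng(0)
vocab, emb_dim, hidden, n_tags = 20, 8, 6, 4   # toy sizes (assumed)
E = rng.normal(size=(vocab, emb_dim))          # word-embedding table

def init_params():
    return (rng.normal(scale=0.1, size=(4 * hidden, emb_dim)),   # W
            rng.normal(scale=0.1, size=(4 * hidden, hidden)),    # U
            np.zeros(4 * hidden))                                # b

fw, bw = init_params(), init_params()          # forward / backward directions
W_out = rng.normal(scale=0.1, size=(n_tags, 2 * hidden))

sentence = [3, 7, 1, 12]                       # token ids for one sentence
xs = [E[t] for t in sentence]
h_f = lstm_pass(xs, *fw, hidden)               # left-to-right context
h_b = lstm_pass(xs[::-1], *bw, hidden)[::-1]   # right-to-left context
scores = [W_out @ np.concatenate([f, b]) for f, b in zip(h_f, h_b)]
tags = [int(np.argmax(s)) for s in scores]     # one NER tag per token
```

In practice the model would be trained with backpropagation and the embedding table initialized from embeddings pre-trained on a large corpus, as the abstract notes; the sketch only shows the forward pass.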
Citation:
Ouyang, L., Tian, Y., Tang, H., & Zhang, B. (2017). Chinese named entity recognition based on B-LSTM neural network with additional features. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10656 LNCS, pp. 269–279). Springer Verlag. https://doi.org/10.1007/978-3-319-72389-1_22