Attention-based multi-level feature fusion for named entity recognition


Abstract

Named entity recognition (NER) is a fundamental task in natural language processing (NLP). Recently, representation learning methods (e.g., character embedding and word embedding) have achieved promising recognition results. However, existing models consider only partial features derived from words or characters and fail to integrate semantic and syntactic information (e.g., capitalization, inter-word relations, keywords, and lexical phrases) from multi-level perspectives. Intuitively, multi-level features can help when recognizing named entities in complex sentences. In this study, we propose a novel framework called attention-based multi-level feature fusion (AMFF), which captures multi-level features from different perspectives to improve NER. Our model consists of four components that respectively capture local character-level, global character-level, local word-level, and global word-level features, which are then fed into a BiLSTM-CRF network for the final sequence labeling. Extensive experiments on four benchmark datasets show that our proposed model outperforms a set of state-of-the-art baselines.
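The core fusion idea described in the abstract — scoring several feature representations with attention and combining them into one vector — can be sketched at a toy scale. The function names, vector dimensions, and dot-product scoring below are illustrative assumptions for exposition, not the paper's actual AMFF architecture:

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of raw scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_fuse(feature_levels, query):
    """Fuse several same-dimension feature vectors via attention.

    feature_levels: list of vectors, one per feature level
                    (e.g. local/global character- and word-level features).
    query: a vector used to score each level (hypothetical choice here;
           the real model would learn its attention parameters).
    Returns the attention-weighted sum and the attention weights.
    """
    # Score each feature level by its dot product with the query.
    scores = [sum(q * f for q, f in zip(query, feat)) for feat in feature_levels]
    weights = softmax(scores)
    dim = len(feature_levels[0])
    # Weighted sum of the feature vectors produces the fused representation.
    fused = [sum(w * feat[i] for w, feat in zip(weights, feature_levels))
             for i in range(dim)]
    return fused, weights

# Toy 3-dimensional features for the four levels named in the abstract.
local_char  = [0.2, 0.1, 0.4]
global_char = [0.0, 0.3, 0.1]
local_word  = [0.5, 0.2, 0.0]
global_word = [0.1, 0.1, 0.3]
query = [1.0, 0.5, 0.5]

fused, weights = attention_fuse(
    [local_char, global_char, local_word, global_word], query)
```

In the actual model, the fused representation would then feed a BiLSTM-CRF layer for sequence labeling; here the point is only that attention yields a convex combination of the feature levels.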


APA

Yang, Z., Chen, H., Zhang, J., Ma, J., & Chang, Y. (2020). Attention-based multi-level feature fusion for named entity recognition. In IJCAI International Joint Conference on Artificial Intelligence (Vol. 2021-January, pp. 3594–3600). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2020/497
