Multi-layer Joint Learning of Chinese Nested Named Entity Recognition Based on Self-attention Mechanism

Abstract

Nested named entity recognition attracts increasing attention due to the pervasiveness of nested entities in the general domain as well as in specific domains. This paper proposes a multi-layer joint learning model for Chinese nested named entity recognition based on a self-attention aggregation mechanism, in which a series of multi-layered sequence labeling sub-models are joined to recognize named entities in a bottom-up fashion. To capture the semantic information of an entity recognized in a lower layer, the hidden units within that entity are aggregated via self-attention and fed into the next higher layer. We conduct extensive experiments with various entity aggregation methods. The results on a Chinese nested entity corpus derived from the People's Daily show that our model outperforms the other competitive methods, implying that the self-attention mechanism can effectively aggregate the important semantic information in an entity.
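The core aggregation step described above can be illustrated with a minimal sketch. This is an assumption-laden toy version, not the paper's implementation: it uses a simple dot-product scoring vector `w` (the paper's exact scoring function is not specified in the abstract) to compute self-attention weights over the token hidden states inside one recognized entity and pool them into a single vector for the next layer.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attention_pool(hidden, w):
    """Aggregate the hidden states of one recognized entity.

    hidden: (T, d) hidden states of the T tokens inside the entity
    w:      (d,)   learnable scoring vector (hypothetical; stands in for
                   whatever attention scoring the model actually learns)
    returns: (d,)  attention-weighted entity representation, fed upward
    """
    scores = hidden @ w          # one relevance score per token, shape (T,)
    alpha = softmax(scores)      # attention weights over the entity's tokens
    return alpha @ hidden        # weighted sum of hidden states, shape (d,)

# Toy usage: a 3-token entity with 4-dimensional hidden states.
rng = np.random.default_rng(0)
h = rng.normal(size=(3, 4))
w = rng.normal(size=4)
entity_vec = attention_pool(h, w)
```

In a multi-layer model of this kind, `entity_vec` would replace the entity's token span in the input to the next sequence-labeling layer, letting outer entities be recognized over the aggregated representations of inner ones.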

Citation (APA)

Li, H., Xu, H., Qian, L., & Zhou, G. (2020). Multi-layer Joint Learning of Chinese Nested Named Entity Recognition Based on Self-attention Mechanism. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12431 LNAI, pp. 144–155). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-60457-8_12
