Named entity recognition (NER) is one of the most widely studied natural language processing tasks in recent years. Conventional solutions treat NER as a sequence labeling problem, but such approaches cannot handle nested NER, the case where one entity contains another, because tagging each token with a single label cannot represent overlapping entities. The pyramid model stacks L flat NER layers for prediction, implicitly enumerating all spans of length at most L. However, the original model uses a decoder block consisting of a convolutional layer and a bidirectional long short-term memory (Bi-LSTM) layer, which neither considers the dependency between adjacent inputs nor allows parallel computation, since the Bi-LSTM must process its inputs sequentially. To improve performance and reduce forward computation, we propose a Multi-Head Adjacent Attention-based Pyramid Layered model. In addition, when the pyramid structure builds span representations, words in the middle of a span contribute a larger share of the representation than the words at its two ends. To address this imbalance, we fuse the output of the attention layer with the features of the head and tail words for classification. We validate the effectiveness of the proposed model through experiments on the nested NER datasets GENIA, SciERC, and ADE.
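The following PyTorch sketch illustrates the idea described in the abstract: each pyramid layer merges adjacent span representations with multi-head attention, shrinking the sequence by one so that layer l covers all spans of length l, and the classifier fuses each span representation with its head and tail token features. The module names (AdjacentAttention, PyramidNER), the mean-pooled query, and all dimensions are our own illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class AdjacentAttention(nn.Module):
    """Merge every adjacent pair of span representations with multi-head
    attention (an assumed reading of "adjacent attention")."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):
        # x: (batch, n, dim) -> (batch, n - 1, dim)
        pairs = torch.stack([x[:, :-1], x[:, 1:]], dim=2)  # (b, n-1, 2, d)
        b, m, _, d = pairs.shape
        pairs = pairs.reshape(b * m, 2, d)
        query = pairs.mean(dim=1, keepdim=True)            # (b*m, 1, d)
        out, _ = self.attn(query, pairs, pairs)            # attend over the pair
        return out.reshape(b, m, d)

class PyramidNER(nn.Module):
    def __init__(self, dim, num_labels, max_span_len):
        super().__init__()
        self.layers = nn.ModuleList(
            [AdjacentAttention(dim) for _ in range(max_span_len - 1)])
        # Classify on [span repr; head token; tail token] to offset the
        # over-weighting of interior words described in the abstract.
        self.classifier = nn.Linear(3 * dim, num_labels)

    def forward(self, token_reprs):
        # token_reprs: (batch, seq_len, dim), e.g. from a pretrained encoder.
        logits = []
        x = token_reprs
        for span_len in range(1, len(self.layers) + 2):
            if span_len > 1:
                x = self.layers[span_len - 2](x)   # spans of length span_len
            n = x.size(1)
            head = token_reprs[:, :n]                             # first token
            tail = token_reprs[:, span_len - 1:span_len - 1 + n]  # last token
            fused = torch.cat([x, head, tail], dim=-1)
            logits.append(self.classifier(fused))  # (batch, n, num_labels)
        return logits  # one tensor per span length 1 .. max_span_len

# Example: batch of 2 sentences, 16 tokens, 768-dim encoder features.
model = PyramidNER(dim=768, num_labels=8, max_span_len=4)
outs = model(torch.randn(2, 16, 768))  # 4 tensors: (2,16,8) .. (2,13,8)
```

Because the attention over each adjacent pair is a fixed-size batched operation, every position in a layer can be computed in parallel, which is the advantage over the sequential Bi-LSTM decoder that the abstract points out.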
Cui, S., & Joe, I. (2023). A multi-head adjacent attention-based pyramid layered model for nested named entity recognition. Neural Computing and Applications, 35(3), 2561–2574. https://doi.org/10.1007/s00521-022-07747-8