KGNER: Improving Chinese Named Entity Recognition by BERT Infused with the Knowledge Graph

Abstract

Recently, lexicon-based methods have proven effective for named entity recognition (NER). However, most existing lexicon-based methods cannot fully exploit the common-sense knowledge stored in a knowledge graph; for example, word embeddings pretrained with Word2Vec or GloVe make only limited use of contextual semantic information. How to make the best use of such knowledge for the NER task has therefore become a challenging and active research topic. We propose knowledge graph-inspired named entity recognition (KGNER), which features a masking and encoding method to incorporate common-sense knowledge into bidirectional encoder representations from transformers (BERT). The proposed method not only preserves the semantic information of the original sentence but also exploits the knowledge information in a more reasonable way. We then model sequential label dependencies by using a conditional random field (CRF) as the backend, further improving overall performance. Experiments on four mainstream datasets demonstrate that KGNER outperforms other lexicon-based models.
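For orientation, the sketch below shows the general shape of the pipeline the abstract describes: a BERT encoder producing token representations, followed by a CRF layer for sequence decoding. This is not the authors' KGNER implementation; the knowledge-graph masking-and-encoding step is reduced to a hypothetical placeholder (`infuse_knowledge`), and the model name and hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn
from transformers import BertModel
from torchcrf import CRF  # pip install pytorch-crf


class BertCrfTagger(nn.Module):
    """Minimal BERT + CRF tagger for Chinese NER (illustrative, not KGNER itself)."""

    def __init__(self, num_tags: int, model_name: str = "bert-base-chinese"):
        super().__init__()
        self.bert = BertModel.from_pretrained(model_name)
        self.classifier = nn.Linear(self.bert.config.hidden_size, num_tags)
        # The CRF layer captures dependencies between adjacent labels,
        # serving as the decoding "backend" mentioned in the abstract.
        self.crf = CRF(num_tags, batch_first=True)

    def infuse_knowledge(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # Hypothetical stand-in for KGNER's masking-and-encoding step:
        # the paper injects knowledge-graph information here; this sketch
        # simply passes the BERT representations through unchanged.
        return hidden_states

    def forward(self, input_ids, attention_mask, tags=None):
        hidden = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        hidden = self.infuse_knowledge(hidden)
        emissions = self.classifier(hidden)
        mask = attention_mask.bool()
        if tags is not None:
            # Training: negative log-likelihood of the gold tag sequence.
            return -self.crf(emissions, tags, mask=mask, reduction="mean")
        # Inference: Viterbi decoding returns the best tag sequence per sentence.
        return self.crf.decode(emissions, mask=mask)
```

The CRF head is what distinguishes this setup from plain per-token softmax classification: it scores entire label sequences, so transitions such as "I-PER cannot follow B-ORG" are learned rather than post-processed.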

Citation (APA)
Hu, W., He, L., Ma, H., Wang, K., & Xiao, J. (2022). KGNER: Improving Chinese Named Entity Recognition by BERT Infused with the Knowledge Graph. Applied Sciences (Switzerland), 12(15). https://doi.org/10.3390/app12157702
