Named Entity Recognition for Entity Linking: What Works and What's Next


Abstract

Entity Linking (EL) systems have achieved impressive results on standard benchmarks, mainly thanks to the contextualized representations provided by recent pretrained language models. However, such systems still require massive amounts of data - millions of labeled examples - to perform at their best, with training times that often exceed several days, especially when limited computational resources are available. In this paper, we look at how Named Entity Recognition (NER) can be exploited to narrow the gap between EL systems trained on high and low amounts of labeled data. More specifically, we show how and to what extent an EL system can benefit from NER to enhance its entity representations, improve candidate selection, select more effective negative samples, and enforce hard and soft constraints on its output entities. We release our software - code and model checkpoints - at https://github.com/Babelscape/ner4el.
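To make the constraint idea concrete, here is a minimal illustrative sketch (not the authors' implementation; all class and field names are hypothetical) of how a mention's predicted NER class could be used as a hard constraint - filtering out candidate entities whose class does not match - or as a soft constraint - re-ranking candidates with a score bonus for matching classes:

```python
# Illustrative sketch, NOT the authors' implementation: using a mention's
# predicted NER class to constrain entity-linking candidates.
# `Candidate` and its fields are hypothetical names for this example.

from dataclasses import dataclass


@dataclass
class Candidate:
    entity_id: str   # knowledge-base identifier, e.g. a Wikidata QID
    ner_class: str   # NER class associated with the KB entity, e.g. "PER"
    score: float     # score assigned by the underlying EL model


def hard_constrain(mention_class: str, candidates: list[Candidate]) -> list[Candidate]:
    """Keep only candidates whose NER class matches the mention's predicted
    class; fall back to the full list if the constraint would leave nothing."""
    filtered = [c for c in candidates if c.ner_class == mention_class]
    return filtered or candidates


def soft_constrain(
    mention_class: str, candidates: list[Candidate], bonus: float = 0.1
) -> list[Candidate]:
    """Add a score bonus to candidates with a matching NER class, then re-rank."""
    rescored = [
        Candidate(
            c.entity_id,
            c.ner_class,
            c.score + (bonus if c.ner_class == mention_class else 0.0),
        )
        for c in candidates
    ]
    return sorted(rescored, key=lambda c: c.score, reverse=True)
```

The fallback in `hard_constrain` reflects a common design choice: when the NER prediction is wrong or the knowledge base lacks a matching entity, it is usually safer to degrade to the unconstrained candidate list than to return nothing.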

Citation (APA)

Tedeschi, S., Conia, S., Cecconi, F., & Navigli, R. (2021). Named Entity Recognition for Entity Linking: What Works and What's Next. In Findings of the Association for Computational Linguistics: EMNLP 2021 (pp. 2584–2596). Association for Computational Linguistics. https://doi.org/10.18653/v1/2021.findings-emnlp.220
