Towards Better Entity Linking with Multi-View Enhanced Distillation

Abstract

Dense retrieval is widely used for entity linking to retrieve entities from large-scale knowledge bases. Mainstream techniques are based on a dual-encoder framework, which encodes mentions and entities independently and calculates their relevance via coarse interaction metrics, making it difficult to explicitly model the multiple mention-relevant parts within an entity that are needed to match divergent mentions. Aiming to learn entity representations that can match divergent mentions, this paper proposes a Multi-View Enhanced Distillation (MVD) framework, which effectively transfers knowledge of multiple fine-grained, mention-relevant parts within entities from cross-encoders to dual-encoders. Each entity is split into multiple views so that irrelevant information is not over-squashed into the mention-relevant view. We further design cross-alignment and self-alignment mechanisms to facilitate fine-grained knowledge distillation from the teacher model to the student model. Meanwhile, we reserve a global view that embeds the entity as a whole to prevent dispersal of uniform information. Experiments show our method achieves state-of-the-art performance on several entity linking benchmarks.
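To make the idea concrete, the sketch below illustrates the two ingredients the abstract describes: a dual-encoder relevance score that takes the best-matching entity view (plus a global view of the whole entity), and a soft-label distillation loss that pushes the dual-encoder's candidate distribution toward a cross-encoder teacher's. This is only a minimal illustration under assumed details; the function names, the exact way view and global scores are combined, and the use of random embeddings in place of real mention/entity encoders are hypothetical and not taken from the paper.

```python
# Minimal sketch of multi-view dual-encoder scoring with cross-encoder
# distillation. All names and the precise score/loss formulation are
# illustrative assumptions inferred from the abstract, not the paper's code.
import torch
import torch.nn.functional as F

def relevance(mention_emb, view_embs, global_emb):
    """Score one mention against one entity.

    mention_emb: (d,)            dual-encoder mention embedding
    view_embs:   (num_views, d)  embeddings of the entity's split views
    global_emb:  (d,)            embedding of the entity as a whole
    """
    view_scores = view_embs @ mention_emb   # dot product per view
    best_view = view_scores.max()           # keep the mention-relevant view
    global_score = global_emb @ mention_emb # whole-entity signal
    return best_view + global_score

def distill_loss(student_scores, teacher_scores, tau=1.0):
    """KL divergence from the cross-encoder teacher's candidate
    distribution to the dual-encoder student's (soft-label distillation)."""
    p_teacher = F.softmax(teacher_scores / tau, dim=-1)
    log_p_student = F.log_softmax(student_scores / tau, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean")

# Toy usage with random embeddings standing in for encoder outputs.
d, num_views, num_candidates = 8, 3, 5
mention = torch.randn(d)
student_scores = torch.stack([
    relevance(mention, torch.randn(num_views, d), torch.randn(d))
    for _ in range(num_candidates)
]).unsqueeze(0)                                   # (1, num_candidates)
teacher_scores = torch.randn(1, num_candidates)   # cross-encoder scores
print(distill_loss(student_scores, teacher_scores))
```

In practice the mention, view, and global embeddings would come from transformer encoders, and the paper's cross-alignment and self-alignment mechanisms refine how teacher signals are matched to individual views; the sketch only shows the max-over-views retrieval score and a generic distillation objective.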

Cite

APA

Liu, Y., Tian, Y., Lian, J., Wang, X., Cao, Y., Fang, F., … Zhang, Q. (2023). Towards Better Entity Linking with Multi-View Enhanced Distillation. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 1, pp. 9729–9743). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.acl-long.542
