Learning dynamic context augmentation for global entity linking

Citations: 60
Readers (Mendeley): 143

Abstract

Despite the recent success of collective entity linking (EL) methods, these “global” inference methods may yield sub-optimal results when the “all-mention coherence” assumption breaks, and they often suffer from high computational cost at the inference stage due to the complex search space. In this paper, we propose a simple yet effective solution, called Dynamic Context Augmentation (DCA), for collective EL, which requires only one pass through the mentions in a document. DCA sequentially accumulates context information to make efficient, collective inference, and can work with different local EL models as a plug-and-enhance module. We explore both supervised and reinforcement learning strategies for learning the DCA model. Extensive experiments show the effectiveness of our model with different learning settings, base models, decision orders and attention mechanisms.

Citation (APA)

Yang, X., Gu, X., Lin, S., Tang, S., Zhuang, Y., Wu, F., … Ren, X. (2019). Learning dynamic context augmentation for global entity linking. In EMNLP-IJCNLP 2019 - 2019 Conference on Empirical Methods in Natural Language Processing and 9th International Joint Conference on Natural Language Processing, Proceedings of the Conference (pp. 271–281). Association for Computational Linguistics. https://doi.org/10.18653/v1/D19-1026
