Simultaneously Self-Attending to Text and Entities for Knowledge-Informed Text Representations

Abstract

Pre-trained language models have emerged as highly successful methods for learning good text representations. However, the amount of structured knowledge retained in such models, and how (if at all) it can be extracted, remains an open question. In this work, we aim to directly learn text representations that leverage structured knowledge about entities mentioned in the text, which can be particularly beneficial for knowledge-intensive downstream tasks. Our approach applies self-attention jointly over the words in the text and the knowledge graph (KG) entities mentioned in it. Whereas existing methods require entity-linked data for pre-training, we train with a mention-span masking objective and a candidate ranking objective that require no entity links and assume only access to an alias table for retrieving candidate entities, enabling large-scale pre-training. We show that the proposed model learns knowledge-informed text representations that yield improvements over existing methods on downstream tasks.
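
For illustration, the sketch below shows one way the described setup could look in code: a Transformer that self-attends over word tokens and candidate KG-entity embeddings together, plus a candidate ranking loss that scores alias-table candidates for a masked mention span. This is a minimal sketch under assumed names and dimensions (e.g. `TextEntitySelfAttention`, embedding sizes, the pooling of the mention span), not the authors' implementation.

```python
# Minimal PyTorch sketch (illustrative assumptions, not the paper's code).
# Idea: run self-attention over the concatenation of word tokens and
# candidate KG-entity embeddings, then rank alias-table candidates for a
# masked mention span with a cross-entropy ranking loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextEntitySelfAttention(nn.Module):
    def __init__(self, vocab_size=30522, num_entities=10000, dim=256,
                 heads=4, layers=2):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, dim)
        self.entity_emb = nn.Embedding(num_entities, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=layers)

    def forward(self, token_ids, candidate_entity_ids):
        # token_ids: (batch, seq_len); candidate_entity_ids: (batch, num_cands)
        tokens = self.word_emb(token_ids)
        entities = self.entity_emb(candidate_entity_ids)
        # Simultaneous self-attention: words and entities attend to each
        # other within a single Transformer stack.
        joint = torch.cat([tokens, entities], dim=1)
        hidden = self.encoder(joint)
        seq_len = token_ids.size(1)
        return hidden[:, :seq_len], hidden[:, seq_len:]

def candidate_ranking_loss(mention_repr, candidate_repr, gold_index):
    # mention_repr: (batch, dim), pooled over the masked mention span.
    # candidate_repr: (batch, num_cands, dim), from alias-table candidates.
    # Rank the gold candidate above the rest via cross-entropy over scores.
    scores = torch.einsum("bd,bcd->bc", mention_repr, candidate_repr)
    return F.cross_entropy(scores, gold_index)

# Toy usage with random ids (shapes only; no real alias table here).
model = TextEntitySelfAttention()
token_ids = torch.randint(0, 30522, (2, 16))
cand_ids = torch.randint(0, 10000, (2, 5))      # alias-table candidates
token_h, cand_h = model(token_ids, cand_ids)
mention_repr = token_h[:, 3:6].mean(dim=1)       # pooled masked-span states
loss = candidate_ranking_loss(mention_repr, cand_h, torch.tensor([0, 2]))
loss.backward()
```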

Cite (APA)

Thai, D., Thirukovalluru, R., Bansal, T., & McCallum, A. (2021). Simultaneously Self-Attending to Text and Entities for Knowledge-Informed Text Representations. In RepL4NLP 2021 - 6th Workshop on Representation Learning for NLP, Proceedings of the Workshop (pp. 241–247). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.repl4nlp-1.25
