Simultaneously linking entities and extracting relations from biomedical text without mention-level supervision

Abstract

Understanding the meaning of text often involves reasoning about entities and their relationships. This requires identifying textual mentions of entities, linking them to a canonical concept, and discerning their relationships. These tasks are nearly always treated as separate components of a pipeline, each requiring a distinct model and training data. While relation extraction can often be trained with readily available weak or distant supervision, entity linkers typically require expensive mention-level supervision that is unavailable in many domains. Instead, we propose a model that is trained to simultaneously produce entity linking and relation decisions while requiring no mention-level annotations. This approach avoids the cascading errors that arise from pipelined methods and more accurately predicts entity relationships from text. We show that our model outperforms a state-of-the-art entity linking and relation extraction pipeline on two biomedical datasets and can drastically improve the overall recall of the system.
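To make the joint idea concrete, here is a minimal sketch of one way such a model could be wired up; this is an illustration of the general technique the abstract describes, not the paper's exact architecture. The class name `JointLinkRelate` and all dimensions are hypothetical. The key point it demonstrates: each mention's link decision is kept as a soft distribution over candidate knowledge-base entities, and the relation is scored on the expected entity representations, so a relation-level (distantly supervised) loss alone backpropagates through, and trains, the linker.

```python
# Hedged sketch, not the authors' implementation: joint entity linking and
# relation extraction trained with relation-level supervision only.
import torch
import torch.nn as nn


class JointLinkRelate(nn.Module):
    """Marginalizes over candidate entity links when scoring a relation,
    so no mention-level link labels are needed."""

    def __init__(self, mention_dim: int, entity_dim: int, num_relations: int):
        super().__init__()
        # Bilinear compatibility between a mention encoding and a candidate
        # entity embedding; higher score = more likely link.
        self.link_scorer = nn.Bilinear(mention_dim, entity_dim, 1)
        # Relation classifier over the (head, tail) expected entity pair.
        self.rel_scorer = nn.Sequential(
            nn.Linear(2 * entity_dim, entity_dim),
            nn.ReLU(),
            nn.Linear(entity_dim, num_relations),
        )

    def forward(self, head_mention, tail_mention, head_cands, tail_cands):
        # head_mention / tail_mention: (mention_dim,) contextual span encodings
        # head_cands / tail_cands: (num_candidates, entity_dim) KB embeddings
        head_probs = torch.softmax(
            self.link_scorer(head_mention.expand(head_cands.size(0), -1),
                             head_cands).squeeze(-1), dim=0)
        tail_probs = torch.softmax(
            self.link_scorer(tail_mention.expand(tail_cands.size(0), -1),
                             tail_cands).squeeze(-1), dim=0)
        # Expected entity representation under each soft link distribution.
        head_ent = head_probs @ head_cands
        tail_ent = tail_probs @ tail_cands
        rel_logits = self.rel_scorer(torch.cat([head_ent, tail_ent], dim=-1))
        return rel_logits, head_probs, tail_probs


# Toy usage: the relation loss alone trains both components end to end.
model = JointLinkRelate(mention_dim=64, entity_dim=64, num_relations=5)
h, t = torch.randn(64), torch.randn(64)
h_cands, t_cands = torch.randn(7, 64), torch.randn(4, 64)
logits, h_probs, t_probs = model(h, t, h_cands, t_cands)
loss = nn.functional.cross_entropy(logits.unsqueeze(0), torch.tensor([2]))
loss.backward()  # gradients flow through the link distributions
```

Because the link decision stays differentiable rather than being committed to a single entity upfront, there is no hard pipeline boundary at which a wrong link can silently corrupt the relation step, which is the cascading-error failure mode the abstract contrasts against.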

Cite

APA: Bansal, T., Verga, P., Choudhary, N., & McCallum, A. (2020). Simultaneously linking entities and extracting relations from biomedical text without mention-level supervision. In AAAI 2020 - 34th AAAI Conference on Artificial Intelligence (pp. 7407–7414). AAAI Press. https://doi.org/10.1609/aaai.v34i05.6236
