Multimodal Graph-based Transformer Framework for Biomedical Relation Extraction


Abstract

The recent advancement of pre-trained Transformer models has propelled the development of effective text mining models across various biomedical tasks. However, these models are primarily trained on textual data and often lack the domain knowledge of the entities needed to capture context beyond the sentence. In this study, we introduce a novel framework that enables the model to learn multi-omics biological information about entities (proteins) with the help of additional multi-modal cues such as molecular structure. Towards this, rather than developing modality-specific architectures, we devise a generalized and optimized graph-based multi-modal learning mechanism that utilizes the GraphBERT model to encode the textual and molecular-structure information and exploits the underlying features of the various modalities to enable end-to-end learning. We evaluated our proposed method on the Protein-Protein Interaction task from the biomedical corpus, where our generalized approach is observed to benefit from the additional domain-specific modality.
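The high-level idea, fusing a textual encoding of the sentence with graph-derived encodings of each protein's molecular structure before a relation classifier, can be sketched as follows. This is a minimal illustration with deterministic placeholder encoders and an untrained linear head, not the paper's actual GraphBERT-based architecture; all function names and dimensions here are illustrative assumptions.

```python
import numpy as np

DIM = 8  # placeholder embedding size for every modality

def encode_text(sentence, dim=DIM):
    """Stand-in for the textual encoder (GraphBERT in the paper):
    a deterministic pseudo-embedding seeded by the sentence contents."""
    seed = sum(ord(c) for c in sentence) % (2**32)
    return np.random.default_rng(seed).normal(size=dim)

def encode_structure(adjacency, dim=DIM):
    """Stand-in for the molecular-structure encoder: crude graph
    summary statistics projected into a fixed-size vector."""
    degrees = adjacency.sum(axis=1)
    feats = np.array([degrees.mean(), degrees.std(),
                      adjacency.sum(), float(len(degrees))])
    proj = np.random.default_rng(42).normal(size=(len(feats), dim))
    return feats @ proj

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def predict_interaction(sentence, adj_a, adj_b, weights, bias):
    """Fuse the three modality embeddings by concatenation, then apply
    a linear layer + softmax over {no-interaction, interaction}."""
    fused = np.concatenate([
        encode_text(sentence),
        encode_structure(adj_a),
        encode_structure(adj_b),
    ])
    return softmax(weights @ fused + bias)

# Toy inputs: a sentence mentioning two proteins, plus tiny adjacency
# matrices standing in for their molecular-structure graphs.
sentence = "PROT1 phosphorylates PROT2 in response to DNA damage."
adj_a = np.array([[0, 1], [1, 0]])
adj_b = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]])

rng = np.random.default_rng(7)
weights = rng.normal(size=(2, 3 * DIM))  # untrained, illustrative only
bias = np.zeros(2)

probs = predict_interaction(sentence, adj_a, adj_b, weights, bias)
```

In the paper both modalities are encoded with the same GraphBERT backbone; the point of the sketch is only the fusion step, where concatenating modality embeddings before the classifier lets the relation decision draw on structural as well as textual evidence.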

Citation (APA)

Pingali, S., Yadav, S., Dutta, P., & Saha, S. (2021). Multimodal Graph-based Transformer Framework for Biomedical Relation Extraction. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021 (pp. 3741–3747). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.findings-acl.328
