Graph attention-based fusion of pathology images and gene expression for prediction of cancer survival

Abstract

Multimodal machine learning models are being developed to analyze pathology images and other modalities, such as gene expression, to gain clinical and biological insights. However, most frameworks for multimodal data fusion do not fully account for the interactions between different modalities. Here, we present an attention-based fusion architecture that integrates a graph representation of pathology images with gene expression data and concomitantly learns from the fused information to predict patient-specific survival. In our approach, pathology images are represented as undirected graphs, and their embeddings are combined with embeddings of gene expression signatures using an attention mechanism to stratify tumors by patient survival. We show that our framework improves survival prediction for human non-small cell lung cancer, outperforming existing state-of-the-art approaches that leverage multimodal data. Our framework can facilitate spatial molecular profiling to identify tumor heterogeneity using pathology images and gene expression data, complementing results obtained from more expensive spatial transcriptomic and proteomic technologies.
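
To make the high-level description above concrete, the following is a minimal, hypothetical sketch of attention-based fusion of a patch-graph image representation with a gene-expression signature, written in plain PyTorch. All names (GraphAttentionFusion, risk_head) and dimension defaults are illustrative assumptions, not the authors' implementation; the adjacency-masked self-attention layer merely stands in for a graph attention (GAT-style) layer over the patch graph.

```python
import torch
import torch.nn as nn

class GraphAttentionFusion(nn.Module):
    """Fuses a patch-graph representation of a pathology slide with a
    gene-expression signature via attention, producing a survival risk score."""

    def __init__(self, patch_dim=512, gene_dim=1000, embed_dim=256, num_heads=4):
        super().__init__()
        self.patch_proj = nn.Linear(patch_dim, embed_dim)
        self.gene_proj = nn.Linear(gene_dim, embed_dim)
        # Self-attention over graph nodes, masked by the adjacency matrix,
        # stands in here for a graph attention (GAT-style) layer.
        self.graph_attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        # Cross-attention: the gene embedding queries the image-node embeddings.
        self.fusion_attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.risk_head = nn.Linear(embed_dim, 1)  # scalar risk for a Cox-style loss

    def forward(self, patch_feats, adjacency, gene_expr):
        # patch_feats: (B, N, patch_dim) features of N image patches (graph nodes)
        # adjacency:   (B, N, N) undirected patch graph, self-loops included
        # gene_expr:   (B, gene_dim) bulk gene-expression signature
        nodes = self.patch_proj(patch_feats)
        # Boolean mask: True blocks attention between non-adjacent patches.
        mask = (adjacency == 0).repeat_interleave(self.graph_attn.num_heads, dim=0)
        nodes, _ = self.graph_attn(nodes, nodes, nodes, attn_mask=mask)
        genes = self.gene_proj(gene_expr).unsqueeze(1)    # (B, 1, embed_dim)
        fused, _ = self.fusion_attn(genes, nodes, nodes)  # (B, 1, embed_dim)
        return self.risk_head(fused.squeeze(1))           # (B, 1) predicted risk

# Usage with dummy data:
model = GraphAttentionFusion()
patches = torch.randn(2, 16, 512)
adj = torch.eye(16).expand(2, 16, 16)  # trivial graph with self-loops only
genes = torch.randn(2, 1000)
risk = model(patches, adj, genes)      # shape (2, 1)
```

A model like this would typically be trained with a Cox partial-likelihood or similar survival loss, with the learned risk scores then used to stratify patients into survival groups.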

Citation (APA)

Zheng, Y., Conrad, R. D., Green, E. J., Burks, E. J., Betke, M., Beane, J. E., & Kolachalama, V. B. (2024). Graph attention-based fusion of pathology images and gene expression for prediction of cancer survival. IEEE Transactions on Medical Imaging. https://doi.org/10.1109/TMI.2024.3386108
