Multi-task weak supervision enables anatomically-resolved abnormality detection in whole-body FDG-PET/CT

Abstract

Computational decision support systems could provide clinical value in whole-body FDG-PET/CT workflows. However, the limited availability of labeled data and the large size of PET/CT imaging exams make it challenging to apply existing supervised machine learning systems. Leveraging recent advancements in natural language processing, we describe a weak supervision framework that extracts imperfect, yet highly granular, regional abnormality labels from free-text radiology reports. Our framework automatically labels each region in a custom ontology of anatomical regions, providing a structured profile of the pathologies in each imaging exam. Using these generated labels, we then train an attention-based, multi-task CNN architecture to detect and estimate the location of abnormalities in whole-body scans. We demonstrate empirically that our multi-task representation is critical for strong performance on rare abnormalities with limited training data. The representation also contributes to more accurate mortality prediction from imaging data, suggesting the potential utility of our framework beyond abnormality detection and location estimation.
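The abstract describes the multi-task design only at a high level. As a rough, hypothetical illustration of that pattern (not the authors' released implementation), the PyTorch sketch below attaches one binary abnormality head per anatomical region to a shared image backbone. The region list, class names, and the 2D ResNet-18 backbone are placeholder assumptions; the paper's actual model is attention-based and operates on whole-body PET/CT volumes.

```python
# Illustrative sketch only: shared backbone + one "abnormal vs. normal"
# head per anatomical region. REGIONS and RegionalAbnormalityNet are
# hypothetical names, not taken from the paper's code.
import torch
import torch.nn as nn
from torchvision.models import resnet18

REGIONS = ["lungs", "liver", "skeleton", "lymph_nodes"]  # placeholder ontology

class RegionalAbnormalityNet(nn.Module):
    def __init__(self, regions=REGIONS):
        super().__init__()
        backbone = resnet18(weights=None)
        backbone.fc = nn.Identity()          # keep the 512-d pooled features
        self.backbone = backbone
        # One lightweight classification head per region (the multi-task part).
        self.heads = nn.ModuleDict({r: nn.Linear(512, 1) for r in regions})

    def forward(self, x):
        feats = self.backbone(x)             # shared representation across tasks
        # Each head predicts the probability that its region is abnormal.
        return {r: torch.sigmoid(head(feats)).squeeze(-1)
                for r, head in self.heads.items()}

model = RegionalAbnormalityNet()
scan = torch.randn(2, 3, 224, 224)           # 2D stand-in for fused PET/CT input
probs = model(scan)
print({region: p.shape for region, p in probs.items()})
```

In a setup like this, the weakly supervised labels extracted from the radiology reports would supply the per-region targets, and each head would be trained with its own binary cross-entropy loss while the backbone representation is shared across all regions.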

Citation (APA)

Eyuboglu, S., Angus, G., Patel, B. N., Pareek, A., Davidzon, G., Long, J., … Lungren, M. P. (2021). Multi-task weak supervision enables anatomically-resolved abnormality detection in whole-body FDG-PET/CT. Nature Communications, 12(1). https://doi.org/10.1038/s41467-021-22018-1
