Radiological Reports Improve Pre-training for Localized Imaging Tasks on Chest X-Rays

Abstract

Self-supervised pre-training on unlabeled images has shown promising results in the medical domain. Recently, methods using text supervision from companion text such as radiological reports have improved on these results further. However, most work in the medical domain focuses on image classification downstream tasks and does not study more localized tasks such as semantic segmentation or object detection. We therefore propose a novel evaluation framework of 18 localized tasks, including semantic segmentation and object detection, on five public chest radiography datasets. Using this framework, we study the effectiveness of existing text-supervised methods and compare them with image-only self-supervised methods and with transfer from classification, in more than 1,200 evaluation runs. Our experiments show that text-supervised methods outperform all other methods on 13 of the 18 tasks, making them the preferred choice. Image-only contrastive methods provide a strong baseline when no reports are available, while transfer from classification, even in-domain, does not perform well as pre-training for localized tasks.
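
The text-supervised methods the abstract refers to pair each chest X-ray with its radiological report and train an image encoder and a text encoder jointly with a contrastive objective (in the style of ConVIRT or CLIP). As a rough illustration, below is a minimal sketch of such a symmetric image-report contrastive (InfoNCE) loss in PyTorch; the function name, embedding dimension, and temperature value are illustrative assumptions, not the paper's exact setup.

import torch
import torch.nn.functional as F

def image_report_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired image/report embeddings.

    img_emb, txt_emb: (batch, dim) projections from an image encoder and a
    text encoder; row i of each tensor comes from the same study, so the
    diagonal of the similarity matrix holds the positive pairs.
    """
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / temperature  # (batch, batch) cosine similarities
    targets = torch.arange(img_emb.size(0), device=img_emb.device)
    # Contrast images against reports and reports against images.
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.t(), targets)
    return 0.5 * (loss_i2t + loss_t2i)

# Toy usage with random tensors standing in for encoder outputs,
# e.g. pooled CNN features of X-rays and pooled BERT features of reports.
img = torch.randn(8, 128)
txt = torch.randn(8, 128)
print(image_report_contrastive_loss(img, txt).item())

After pre-training with such an objective, the text encoder is discarded and the image encoder is fine-tuned on the localized downstream tasks (segmentation, detection); this transfer setting is what the evaluation framework measures.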

Citation (APA)

Müller, P., Kaissis, G., Zou, C., & Rueckert, D. (2022). Radiological reports improve pre-training for localized imaging tasks on chest X-rays. In Lecture Notes in Computer Science (Vol. 13435 LNCS, pp. 647–657). Springer. https://doi.org/10.1007/978-3-031-16443-9_62
