Comparing to Learn: Surpassing ImageNet Pretraining on Radiographs by Comparing Image Representations

Abstract

In the deep learning era, pretrained models play an important role in medical image analysis, where ImageNet pretraining has been widely adopted as the de facto standard. However, there is a clear domain gap between natural images and medical images. To bridge this gap, we propose a new pretraining method that learns from 700k radiographs without any manual annotations. We call our method Comparing to Learn (C2L) because it learns robust features by comparing different image representations. To verify the effectiveness of C2L, we conduct comprehensive ablation studies and evaluate it on different tasks and datasets. The experimental results on radiographs show that C2L significantly outperforms ImageNet pretraining and previous state-of-the-art approaches. Code and models are available at https://github.com/funnyzhou/C2L_MICCAI2020.
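The comparison at the heart of C2L is contrastive: two representations of the same radiograph should be more similar to each other than to representations of other radiographs. Below is a minimal sketch of a generic InfoNCE-style contrastive loss in PyTorch that illustrates this idea of learning by comparing representations; the function name, the memory queue of negatives, and the momentum-encoder usage in the comment are illustrative assumptions in the spirit of MoCo-style methods, not the paper's exact formulation.

    # Minimal sketch (assumption: InfoNCE-style comparison, not the exact
    # C2L objective). Two views of the same image form a positive pair;
    # a queue of other images' representations provides negatives.
    import torch
    import torch.nn.functional as F

    def contrastive_compare_loss(q, k, queue, temperature=0.07):
        """q, k: (N, D) representations of two augmented views of the same
        images; queue: (K, D) representations of other images (negatives)."""
        q = F.normalize(q, dim=1)
        k = F.normalize(k, dim=1)
        queue = F.normalize(queue, dim=1)

        # Positive logits: similarity between the two views of each image.
        l_pos = torch.einsum("nd,nd->n", q, k).unsqueeze(-1)    # (N, 1)
        # Negative logits: similarity to every queued representation.
        l_neg = torch.einsum("nd,kd->nk", q, queue)             # (N, K)

        logits = torch.cat([l_pos, l_neg], dim=1) / temperature  # (N, 1+K)
        # The positive sits at index 0, so the target label is 0.
        labels = torch.zeros(logits.size(0), dtype=torch.long,
                             device=q.device)
        return F.cross_entropy(logits, labels)

    # Hypothetical usage: encode two augmentations of a radiograph batch,
    # e.g. with an encoder and a momentum encoder, then compare against
    # a memory queue of past representations:
    # loss = contrastive_compare_loss(encoder(x1), momentum_encoder(x2), queue)

Minimizing this loss pulls the two views of the same radiograph together in representation space while pushing them away from other images, which is the sense in which the model "compares to learn."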

Citation (APA)

Zhou, H. Y., Yu, S., Bian, C., Hu, Y., Ma, K., & Zheng, Y. (2020). Comparing to learn: Surpassing ImageNet pretraining on radiographs by comparing image representations. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12261 LNCS, pp. 398–407). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-59710-8_39
