Unsupervised Domain Adaptation of ConvNets for Medical Image Segmentation via Adversarial Learning


Abstract

Deep convolutional networks (ConvNets) have achieved state-of-the-art performance and become the de facto standard for a wide variety of medical image analysis tasks. However, the learned models tend to degrade when applied to a new target domain that differs from the source domain on which the model was trained. This chapter presents unsupervised domain adaptation methods using adversarial learning to generalize ConvNets for medical image segmentation tasks. Specifically, we present solutions from two different perspectives, i.e., feature-level adaptation and pixel-level adaptation. The first utilizes feature alignment in latent space and has been applied to cross-modality (MRI/CT) cardiac image segmentation. The second uses image-to-image transformation in appearance space and has been applied to cross-cohort X-ray images for lung segmentation. Experimental results have validated the effectiveness of these unsupervised domain adaptation methods, with promising performance on these challenging tasks.
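The feature-level adaptation described above can be sketched as a min-max game: a segmenter is trained with supervision on the source domain, while a domain discriminator tries to tell source features from target features and the feature extractor is trained to fool it, aligning the two distributions in latent space. The following is a minimal PyTorch sketch under assumed shapes and toy networks (the module names, channel sizes, and the 0.1 adversarial weight are illustrative, not the authors' architecture):

```python
import torch
import torch.nn as nn

class FeatureExtractor(nn.Module):
    """Shared encoder mapping images to latent feature maps."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
    def forward(self, x):
        return self.net(x)

class Segmenter(nn.Module):
    """Pixel-wise classification head on top of the features."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.head = nn.Conv2d(32, n_classes, 1)
    def forward(self, f):
        return self.head(f)

class DomainDiscriminator(nn.Module):
    """Predicts whether a feature map comes from source or target."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1),
        )
    def forward(self, f):
        return self.net(f)

F_ext, S, D = FeatureExtractor(), Segmenter(), DomainDiscriminator()
bce = nn.BCEWithLogitsLoss()
ce = nn.CrossEntropyLoss()

xs = torch.randn(2, 1, 32, 32)           # labeled source batch (e.g., MRI)
ys = torch.randint(0, 2, (2, 32, 32))    # source segmentation labels
xt = torch.randn(2, 1, 32, 32)           # unlabeled target batch (e.g., CT)

fs, ft = F_ext(xs), F_ext(xt)

# Supervised segmentation loss, source domain only.
seg_loss = ce(S(fs), ys)

# Discriminator loss: label source features 1, target features 0
# (detach so this step does not update the feature extractor).
d_loss = (bce(D(fs.detach()), torch.ones(2, 1)) +
          bce(D(ft.detach()), torch.zeros(2, 1)))

# Adversarial loss: the extractor tries to make target features
# indistinguishable from source features.
adv_loss = bce(D(ft), torch.ones(2, 1))
extractor_loss = seg_loss + 0.1 * adv_loss
```

In a full training loop, `d_loss` and `extractor_loss` would be minimized in alternation with separate optimizers; pixel-level adaptation would instead transform target images toward the source appearance before segmentation.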

Citation (APA)

Dou, Q., Chen, C., Ouyang, C., Chen, H., & Heng, P. A. (2019). Unsupervised Domain Adaptation of ConvNets for Medical Image Segmentation via Adversarial Learning. In Advances in Computer Vision and Pattern Recognition (pp. 93–115). Springer London. https://doi.org/10.1007/978-3-030-13969-8_5
