Enhancing Unsupervised Domain Adaptation via Semantic Similarity Constraint for Medical Image Segmentation

Abstract

This work proposes a novel unsupervised cross-modality adaptive segmentation method for medical images to address the performance degradation caused by severe domain shift when neural networks are deployed on unseen modalities. The proposed method is an end-to-end framework that performs appearance transformation via a domain-shared shallow content encoder and two domain-specific decoders. The features extracted by the encoder are made more domain-invariant through a similarity learning task using the proposed Semantic Similarity Mining (SSM) module, which substantially aids domain adaptation. The domain-invariant latent features are then fed into the target-domain segmentation sub-network, which is trained on the original target-domain images and the images translated from the source domain within an adversarial training framework. Adversarial training effectively narrows the remaining gap between domains in the semantic space after appearance alignment. Experimental results on two challenging datasets demonstrate that our method outperforms state-of-the-art approaches.
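
To make the described pipeline concrete, the following is a minimal, hypothetical PyTorch-style sketch of the main components named in the abstract: a shared shallow content encoder, a domain-specific decoder for appearance transformation, an SSM-style similarity head, a target-domain segmentation sub-network, and an output-space discriminator for adversarial alignment. All class names, architectures, loss forms, and weights are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the framework described in the abstract (assumed PyTorch).
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1),
        nn.InstanceNorm2d(out_ch),
        nn.LeakyReLU(0.2, inplace=True),
    )


class ContentEncoder(nn.Module):
    """Shallow encoder shared by both modalities (domain-shared content space)."""
    def __init__(self, in_ch=1, feat_ch=64):
        super().__init__()
        self.net = nn.Sequential(conv_block(in_ch, 32), conv_block(32, feat_ch))

    def forward(self, x):
        return self.net(x)


class Decoder(nn.Module):
    """Domain-specific decoder rendering content features in one modality's appearance."""
    def __init__(self, feat_ch=64, out_ch=1):
        super().__init__()
        self.net = nn.Sequential(conv_block(feat_ch, 32),
                                 nn.Conv2d(32, out_ch, 3, padding=1), nn.Tanh())

    def forward(self, z):
        return self.net(z)


class SSMHead(nn.Module):
    """Illustrative stand-in for the SSM module: projects encoder features and
    encourages source/target content embeddings to be similar (domain-invariant)."""
    def __init__(self, feat_ch=64, proj_ch=128):
        super().__init__()
        self.proj = nn.Conv2d(feat_ch, proj_ch, 1)

    def forward(self, z_src, z_tgt):
        p_src = F.normalize(self.proj(z_src).flatten(2), dim=1)
        p_tgt = F.normalize(self.proj(z_tgt).flatten(2), dim=1)
        # Cosine-similarity loss as a placeholder similarity-learning objective.
        return 1.0 - (p_src * p_tgt).sum(dim=1).mean()


class Segmenter(nn.Module):
    """Target-domain segmentation sub-network operating on content features."""
    def __init__(self, feat_ch=64, n_classes=5):
        super().__init__()
        self.net = nn.Sequential(conv_block(feat_ch, 64), nn.Conv2d(64, n_classes, 1))

    def forward(self, z):
        return self.net(z)


class Discriminator(nn.Module):
    """Critic on segmentation outputs, used to narrow the remaining semantic gap
    between translated-source and real-target predictions."""
    def __init__(self, n_classes=5):
        super().__init__()
        self.net = nn.Sequential(conv_block(n_classes, 64), nn.Conv2d(64, 1, 1))

    def forward(self, pred):
        return self.net(pred)


# One illustrative training step (loss weights are placeholders).
enc, dec_tgt = ContentEncoder(), Decoder()
ssm, seg, disc = SSMHead(), Segmenter(), Discriminator()

x_src = torch.randn(2, 1, 128, 128)            # labelled source-modality images
x_tgt = torch.randn(2, 1, 128, 128)            # unlabelled target-modality images
y_src = torch.randint(0, 5, (2, 128, 128))     # source segmentation labels

z_src, z_tgt = enc(x_src), enc(x_tgt)
x_src2tgt = dec_tgt(z_src)                     # appearance transformation: source -> target style
loss_ssm = ssm(z_src, z_tgt)                   # similarity constraint on shared content features

pred_translated = seg(enc(x_src2tgt))          # supervised branch: translated images + source labels
loss_seg = F.cross_entropy(pred_translated, y_src)

pred_tgt = seg(z_tgt)                          # unsupervised branch on target images
d_out = disc(pred_tgt.softmax(1))
# Generator-side adversarial term: make target predictions look "real" to the critic.
loss_adv = F.binary_cross_entropy_with_logits(d_out, torch.ones_like(d_out))

total = loss_seg + 0.1 * loss_ssm + 0.01 * loss_adv
```

In a full setup the discriminator would be updated with its own objective in an alternating fashion, and a second decoder would cover the target-to-source direction; both are omitted here for brevity.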

Cite (APA)

Hu, T., Sun, S., Zhao, J., & Shi, D. (2022). Enhancing Unsupervised Domain Adaptation via Semantic Similarity Constraint for Medical Image Segmentation. In IJCAI International Joint Conference on Artificial Intelligence (pp. 3071–3077). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2022/426
