Medical image processing with contextual style transfer

Abstract

With recent advances in deep learning, generative models have achieved remarkable results and play an increasingly important role in industrial applications. At the same time, techniques derived from generative methods, such as style transfer and image synthesis, are widely discussed by researchers. In this work, we treat generative methods as a possible solution to medical image augmentation. We propose a context-aware generative framework that can change the gray scale of CT scans with almost no semantic loss. By producing target images with a specific style/distribution and adding the generated images to the training set, we greatly increased the robustness of the segmentation model. We also improved pixel-wise spine segmentation accuracy by 2–4% over the original U-Net. Finally, we compared the images generated with different feature extractors (VGG, ResNet and DenseNet) and provide a detailed analysis of their style-transfer performance.
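For readers unfamiliar with feature-extractor-based style transfer, the sketch below illustrates the generic idea the abstract alludes to: optimizing an image so that its deep features match the content of one image and the style statistics (Gram matrices) of another, here with a pretrained VGG-19 extractor. This is a minimal Gatys-style sketch, not the authors' context-aware framework; the layer indices, weights, and step counts are illustrative assumptions.

```python
import torch
import torch.nn.functional as F
from torchvision import models

def gram_matrix(feat):
    # feat: (B, C, H, W) -> (B, C, C) Gram matrix of channel correlations,
    # which summarizes style/texture statistics independent of layout.
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

class VGGFeatures(torch.nn.Module):
    """Return intermediate VGG-19 activations at chosen layer indices.
    The indices below are a common but illustrative choice, not the paper's."""
    def __init__(self, layer_ids=(3, 8, 17, 26)):
        super().__init__()
        self.vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.eval()
        for p in self.vgg.parameters():
            p.requires_grad_(False)
        self.layer_ids = set(layer_ids)

    def forward(self, x):
        feats = []
        for i, layer in enumerate(self.vgg):
            x = layer(x)
            if i in self.layer_ids:
                feats.append(x)
        return feats

def transfer(content, style, steps=200, style_weight=1e5):
    # content/style: (1, 3, H, W) float tensors; a single-channel CT slice
    # would need to be replicated to 3 channels and ImageNet-normalized.
    extractor = VGGFeatures()
    target = content.clone().requires_grad_(True)
    opt = torch.optim.Adam([target], lr=0.02)
    content_feats = extractor(content)
    style_grams = [gram_matrix(f) for f in extractor(style)]
    for _ in range(steps):
        opt.zero_grad()
        feats = extractor(target)
        c_loss = F.mse_loss(feats[-1], content_feats[-1])  # preserve anatomy
        s_loss = sum(F.mse_loss(gram_matrix(f), g)         # match target gray-scale style
                     for f, g in zip(feats, style_grams))
        (c_loss + style_weight * s_loss).backward()
        opt.step()
    return target.detach()
```

Swapping `VGGFeatures` for a ResNet- or DenseNet-based extractor changes which layers define "style" and "content", which is the comparison the abstract describes; the trade-off is between the texture sensitivity of early layers and the semantic abstraction of deeper ones.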

Citation (APA)
Xu, Y., Li, Y., & Shin, B. S. (2020). Medical image processing with contextual style transfer. Human-Centric Computing and Information Sciences, 10(1). https://doi.org/10.1186/s13673-020-00251-9
