Synthetic Training with Generative Adversarial Networks for Segmentation of Microscopies

Abstract

Medical imaging datasets often contain only a small amount of annotated data, whereas supervised deep learning algorithms require large amounts of training data. One common strategy is to augment the given dataset to increase the amount of training data. Recent research shows that generating synthetic images is a viable way to expand datasets; in particular, generative adversarial networks (GANs) are promising candidates for generating new annotated training images. This work combines recent GAN architectures in one pipeline to generate pairs of original and segmented medical images for semantic segmentation. Training a U-Net with these synthetic images in addition to common data augmentation yields a performance boost compared to training without synthetic images, raising the average Jaccard index from 77.99% to 80.23%.
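
The abstract describes mixing GAN-synthesized image/mask pairs into the real training set for a U-Net and reporting the average Jaccard index. The sketch below is a minimal illustration of those two steps, assuming a PyTorch setup; the dataset names and framework choice are assumptions for illustration, not the authors' released code.

```python
# Hypothetical sketch (assumed PyTorch setup), not the authors' implementation.
import torch
from torch.utils.data import ConcatDataset, DataLoader


def jaccard_index(pred_mask: torch.Tensor, true_mask: torch.Tensor, eps: float = 1e-7) -> float:
    """Average Jaccard index (IoU) over a batch of binary segmentation masks."""
    pred = (pred_mask > 0.5).float()
    true = (true_mask > 0.5).float()
    intersection = (pred * true).sum(dim=(-2, -1))
    union = pred.sum(dim=(-2, -1)) + true.sum(dim=(-2, -1)) - intersection
    return ((intersection + eps) / (union + eps)).mean().item()


# Assumed datasets: `real_pairs` holds the annotated microscopy images,
# `synthetic_pairs` holds image/mask pairs sampled from the trained GAN pipeline.
# The U-Net is then trained on the combined loader exactly as on real data alone:
#
#   train_set = ConcatDataset([real_pairs, synthetic_pairs])
#   loader = DataLoader(train_set, batch_size=8, shuffle=True)
```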

Cite (APA)

Krauth, J., Gerlach, S., Marzahl, C., Voigt, J., & Handels, H. (2019). Synthetic Training with Generative Adversarial Networks for Segmentation of Microscopies. In Informatik aktuell (pp. 37–42). Springer Berlin Heidelberg. https://doi.org/10.1007/978-3-658-25326-4_12
