Deep learning 2D and 3D optical sectioning microscopy using cross-modality Pix2Pix cGAN image translation

  • Zhuge H
  • Summa B
  • Hamm J
  • Brown JQ

Abstract

Structured illumination microscopy (SIM) reconstructs optically-sectioned images of a sample from multiple spatially-patterned wide-field images, but traditional single non-patterned wide-field images are less expensive to acquire because they do not require generation of specialized illumination patterns. In this work, we translated wide-field fluorescence microscopy images into optically-sectioned SIM images using a Pix2Pix conditional generative adversarial network (cGAN). Our model demonstrates 2D cross-modality image translation from wide-field images to optical sections, and further shows potential to recover 3D optically-sectioned volumes from wide-field image stacks. The utility of the model was tested on a variety of samples, including fluorescent beads and fresh human tissue.
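For readers unfamiliar with the Pix2Pix framework the paper builds on, the sketch below illustrates the core training objective: a conditional generator maps a wide-field image to a synthetic optical section, a PatchGAN-style discriminator judges (wide-field, section) pairs, and an L1 term pulls the generated output toward the ground-truth SIM image. This is a minimal PyTorch illustration only; the network widths, loss weight, and optimizer settings are assumptions for demonstration and do not reproduce the architecture or hyperparameters reported in the paper.

# Minimal Pix2Pix-style sketch in PyTorch (illustrative; layer widths, the L1
# weight, and optimizer settings are assumptions, not the paper's exact model).
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Small encoder-decoder mapping a wide-field image to a synthetic
    optically-sectioned image (stand-in for a U-Net generator)."""
    def __init__(self, ch=1):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(ch, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, ch, 4, stride=2, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return self.dec(self.enc(x))

class PatchDiscriminator(nn.Module):
    """PatchGAN-style discriminator conditioned on the wide-field input."""
    def __init__(self, ch=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2 * ch, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 4, padding=1),  # per-patch real/fake logits
        )
    def forward(self, widefield, section):
        return self.net(torch.cat([widefield, section], dim=1))

def training_step(G, D, opt_G, opt_D, widefield, sim_target, l1_weight=100.0):
    bce = nn.BCEWithLogitsLoss()
    l1 = nn.L1Loss()

    # Discriminator: real (wide-field, SIM) pairs vs. generated pairs.
    fake = G(widefield)
    d_real = D(widefield, sim_target)
    d_fake = D(widefield, fake.detach())
    loss_D = 0.5 * (bce(d_real, torch.ones_like(d_real))
                    + bce(d_fake, torch.zeros_like(d_fake)))
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # Generator: fool the discriminator while staying close to the SIM target.
    d_fake = D(widefield, fake)
    loss_G = bce(d_fake, torch.ones_like(d_fake)) + l1_weight * l1(fake, sim_target)
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
    return loss_G.item(), loss_D.item()

if __name__ == "__main__":
    G, D = TinyGenerator(), PatchDiscriminator()
    opt_G = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
    opt_D = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
    # One step on random tensors standing in for a (wide-field, SIM) pair.
    wf = torch.randn(2, 1, 64, 64)
    sim = torch.randn(2, 1, 64, 64)
    print(training_step(G, D, opt_G, opt_D, wf, sim))

In the full Pix2Pix setup, the same conditional loss is applied slice by slice to wide-field image stacks, which is how a 3D optically-sectioned volume can be assembled from 2D translations.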

Citation (APA)

Zhuge, H., Summa, B., Hamm, J., & Brown, J. Q. (2021). Deep learning 2D and 3D optical sectioning microscopy using cross-modality Pix2Pix cGAN image translation. Biomedical Optics Express, 12(12), 7526. https://doi.org/10.1364/boe.439894
