Fluorescence microscopy uses fluorescent dyes to provide highly specific visualization of cell components and thus plays an important role in understanding subcellular structure. However, it has limitations, such as the risk of non-specific cross-labeling in multi-label fluorescent staining and the limited number of usable fluorescence labels due to spectral overlap. This paper proposes a deep-learning-based fluorescence-to-fluorescence (Fluo-Fluo) translation method that uses a conditional generative adversarial network to predict one fluorescence image from another, thereby realizing multi-label fluorescent staining. The cell types used include human motor neurons, human breast cancer cells, rat cortical neurons, and rat cardiomyocytes. The method's effectiveness is verified by generating virtual fluorescence images highly similar to the true fluorescence images. This study shows that a deep neural network can implement Fluo-Fluo translation and describe the localization relationship between subcellular structures labeled with different fluorescent markers. The proposed Fluo-Fluo method avoids non-specific cross-labeling in multi-label fluorescence staining and is free from spectral overlap. In theory, an unlimited number of fluorescence images can be predicted from a single fluorescence image to characterize cells. © 2022 Chinese Optics Letters
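The abstract describes a conditional GAN whose generator maps one fluorescence channel to another image of the same size, while a discriminator scores (source, target) pairs. As a minimal, framework-free sketch of this image-to-image setup (not the authors' implementation; the network sizes, kernels, and function names here are illustrative assumptions):

```python
import numpy as np

def conv2d(x, k):
    """Naive 2D convolution with zero-padding ('same' output size)."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def generator(src, kernels):
    """Toy generator: stacked conv + ReLU layers, sigmoid output so the
    predicted fluorescence intensities lie in [0, 1]."""
    h = src.astype(float)
    for k in kernels[:-1]:
        h = np.maximum(conv2d(h, k), 0.0)          # ReLU
    return 1.0 / (1.0 + np.exp(-conv2d(h, kernels[-1])))  # sigmoid

def discriminator_score(src, tgt):
    """Toy conditional discriminator: looks at the (source, target) pair
    jointly and returns a 'realness' probability."""
    joint = np.stack([src, tgt]).mean()
    return 1.0 / (1.0 + np.exp(-joint))

rng = np.random.default_rng(0)
src = rng.random((16, 16))            # stand-in for a source fluorescence image
kernels = [rng.standard_normal((3, 3)) * 0.1 for _ in range(3)]
fake = generator(src, kernels)        # predicted image, same spatial size as src
```

In a real pix2pix-style training loop, the generator would be optimized to fool the discriminator while also minimizing a pixel-wise loss against the true target channel; this sketch only shows the shape-preserving mapping and the conditional scoring.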
Jiang, Z., Li, B., Tran, T. N. H. T., Jiang, J., Liu, X., & Ta, D. (2022). Fluo-Fluo translation based on deep learning. Chinese Optics Letters, 20(3), 031701. https://doi.org/10.3788/col202220.031701