Label2label: Training a neural network to selectively restore cellular structures in fluorescence microscopy

Abstract

Immunofluorescence microscopy is routinely used to visualise the spatial distribution of proteins, which dictates their cellular function. However, unspecific antibody binding often results in high cytosolic background signals, decreasing the image contrast of a target structure. Recently, convolutional neural networks (CNNs) were successfully employed for image restoration in immunofluorescence microscopy, but current methods cannot correct for these background signals. We report a new method that trains a CNN to reduce unspecific signals in immunofluorescence images; we name this method label2label (L2L). In L2L, a CNN is trained with image pairs of two non-identical labels that target the same cellular structure. We show that, after L2L training, a network predicts images with significantly increased contrast of the target structure, which is further improved by implementing a multiscale structural similarity loss function. Our results suggest that sample differences in the training data decrease hallucination effects that are observed with other methods. We further assess the performance of a cycle generative adversarial network and show that a CNN can be trained to separate structures in superposed immunofluorescence images of two targets.
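The training idea described above can be illustrated with a minimal sketch: a restoration CNN is fit on paired images of the same structure acquired with two different labels, using a loss that combines an L1 term with multiscale structural similarity (MS-SSIM). The toy network, the random tensors standing in for image pairs, the 0.5/0.5 loss weighting and the use of the third-party `pytorch_msssim` package are assumptions for illustration only, not the authors' implementation.

```python
import torch
import torch.nn as nn
from pytorch_msssim import MS_SSIM  # assumed third-party package for MS-SSIM

# Toy stand-in for the restoration CNN; the paper's network is a dedicated
# image-restoration architecture, this small model is only illustrative.
class TinyCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

model = TinyCNN()
ms_ssim = MS_SSIM(data_range=1.0, channel=1)  # multiscale SSIM on single-channel images
l1 = nn.L1Loss()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

# Hypothetical training pair: the network input is an image of one label
# (high unspecific background), the target is an image of a second label
# of the same structure (higher contrast). Random tensors stand in for data.
input_label = torch.rand(4, 1, 256, 256)
target_label = torch.rand(4, 1, 256, 256)

pred = model(input_label).clamp(0.0, 1.0)  # keep prediction in MS-SSIM's data range
loss = 0.5 * l1(pred, target_label) + 0.5 * (1.0 - ms_ssim(pred, target_label))
opt.zero_grad()
loss.backward()
opt.step()
```

Because MS-SSIM compares local image statistics across several scales, adding it to a pixel-wise term rewards predictions that preserve structural contrast rather than only minimising average intensity error, which matches the contrast improvement reported in the abstract.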

Citation (APA)

Kölln, L. S., Salem, O., Valli, J., Hansen, C. G., & McConnell, G. (2022). Label2label: Training a neural network to selectively restore cellular structures in fluorescence microscopy. Journal of Cell Science, 135(3). https://doi.org/10.1242/jcs.258994
