Self-supervised Dense Representation Learning for Live-Cell Microscopy with Time Arrow Prediction

Abstract

State-of-the-art object detection and segmentation methods for microscopy images rely on supervised machine learning, which requires laborious manual annotation of training data. Here we present a self-supervised method based on time arrow prediction pre-training that learns dense image representations from raw, unlabeled live-cell microscopy videos. Our method builds on the task of predicting the correct temporal order of time-flipped image regions via a single-image feature extractor followed by a time arrow prediction head that operates on the fused features. We show that the resulting dense representations capture inherently time-asymmetric biological processes, such as cell divisions, at the pixel level. We furthermore demonstrate the utility of these representations on several live-cell microscopy datasets for the detection and segmentation of dividing cells, as well as for cell state classification. Our method outperforms supervised methods, particularly when only limited ground truth annotations are available, as is commonly the case in practice. We provide code at https://github.com/weigertlab/tarrow.
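
To make the training setup described in the abstract concrete, below is a minimal sketch of time arrow prediction pre-training, assuming PyTorch. All module names, architectures, and hyperparameters here (FrameEncoder, TimeArrowHead, tap_training_step, crop size, learning rate) are illustrative assumptions and not taken from the authors' implementation; the actual code is available at https://github.com/weigertlab/tarrow. The structure mirrors the abstract: a shared single-image encoder produces dense per-pixel features for each frame, the two feature maps are fused by channel concatenation, and a small head classifies whether the pair is shown in the original or time-reversed order.

```python
# Minimal sketch of time arrow prediction (TAP) pre-training, assuming PyTorch.
# Names and hyperparameters are illustrative, not the authors' implementation
# (see https://github.com/weigertlab/tarrow for the real code).

import torch
import torch.nn as nn
import torch.nn.functional as F


class FrameEncoder(nn.Module):
    """Small convolutional feature extractor applied to each frame independently."""

    def __init__(self, in_channels: int = 1, features: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, features, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(features, features, 3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.net(x)  # dense per-pixel features, same spatial size as input


class TimeArrowHead(nn.Module):
    """Classifies whether a fused feature pair is in forward or reversed temporal order."""

    def __init__(self, features: int = 32, n_classes: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2 * features, features, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(features, n_classes),
        )

    def forward(self, fused):
        return self.net(fused)


def tap_training_step(encoder, head, frame_t, frame_t1, optimizer):
    """One self-supervised step on a batch of consecutive crop pairs (B, C, H, W)."""
    batch = frame_t.shape[0]
    # Randomly reverse the temporal order of half the pairs; the flip is the label.
    flip = torch.rand(batch, device=frame_t.device) < 0.5
    first = torch.where(flip[:, None, None, None], frame_t1, frame_t)
    second = torch.where(flip[:, None, None, None], frame_t, frame_t1)
    labels = flip.long()

    # Shared encoder on both frames, then fuse by channel concatenation.
    feats = torch.cat([encoder(first), encoder(second)], dim=1)
    logits = head(feats)

    loss = F.cross_entropy(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    enc, head = FrameEncoder(), TimeArrowHead()
    opt = torch.optim.Adam(list(enc.parameters()) + list(head.parameters()), lr=1e-4)
    # Dummy batch of 8 consecutive 96x96 crop pairs standing in for an unlabeled movie.
    t0, t1 = torch.randn(8, 1, 96, 96), torch.randn(8, 1, 96, 96)
    print(tap_training_step(enc, head, t0, t1, opt))
```

After pre-training in this fashion, the encoder's dense per-pixel features would be reused for downstream tasks such as detecting dividing cells or cell state classification, with the time arrow head discarded.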

Citation (APA)

Gallusser, B., Stieber, M., & Weigert, M. (2023). Self-supervised Dense Representation Learning for Live-Cell Microscopy with Time Arrow Prediction. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 14227 LNCS, pp. 537–547). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-43993-3_52
