Measuring the phenotypic effect of treatments on cells through imaging assays is an efficient and powerful way of studying cell biology, and requires computational methods for transforming images into quantitative data. Here, we present an improved strategy for learning representations of treatment effects from high-throughput imaging, following a causal interpretation. We use weakly supervised learning to model associations between images and treatments, and show that the learned representation encodes both confounding factors and phenotypic features. To facilitate their separation, we constructed a large training dataset with images from five different studies to maximize experimental diversity, following insights from our causal analysis. Training a model with this dataset improves downstream performance and produces a reusable convolutional network for image-based profiling, which we call the Cell Painting CNN. We evaluated our strategy on three publicly available Cell Painting datasets, and observed that the Cell Painting CNN improves performance in downstream analysis by up to 30% relative to classical features, while also being more computationally efficient.
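To make the weakly supervised strategy concrete, the sketch below trains a CNN to predict treatment labels from multi-channel images and then reuses the penultimate-layer features as perturbation profiles. This is a minimal illustration assuming PyTorch and torchvision are available; the backbone choice (ResNet-18), image size, number of treatments, and training loop are hypothetical stand-ins, not the actual Cell Painting CNN architecture or training setup from the paper.

```python
# Minimal sketch of weakly supervised representation learning for
# image-based profiling. All hyperparameters below are hypothetical;
# the real Cell Painting CNN differs (see the paper and its code release).
import torch
import torch.nn as nn
from torchvision.models import resnet18

NUM_TREATMENTS = 100   # hypothetical number of treatment labels
IMAGE_CHANNELS = 5     # Cell Painting images have 5 fluorescence channels

# Backbone: a standard CNN with its first conv layer adapted to 5 channels.
backbone = resnet18(weights=None)
backbone.conv1 = nn.Conv2d(IMAGE_CHANNELS, 64, kernel_size=7, stride=2,
                           padding=3, bias=False)
feature_dim = backbone.fc.in_features
backbone.fc = nn.Identity()  # expose penultimate features as the profile

# Weak supervision head: predict which treatment was applied to each image.
classifier = nn.Linear(feature_dim, NUM_TREATMENTS)
optimizer = torch.optim.Adam(
    list(backbone.parameters()) + list(classifier.parameters()), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def training_step(images, treatment_labels):
    """One optimization step: images -> features -> treatment logits."""
    features = backbone(images)
    logits = classifier(features)
    loss = criterion(logits, treatment_labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

@torch.no_grad()
def profile(images):
    """At inference time, discard the classifier and keep the learned
    representation as the image-based profile of the perturbation."""
    backbone.eval()
    return backbone(images)

# Toy usage with random tensors standing in for real microscopy crops.
dummy_images = torch.randn(8, IMAGE_CHANNELS, 128, 128)
dummy_labels = torch.randint(0, NUM_TREATMENTS, (8,))
print("loss:", training_step(dummy_images, dummy_labels))
print("profile shape:", profile(dummy_images).shape)
```

The key design point the paper builds on is that the classifier head is only a training device: the treatment labels provide weak supervision, and the representation kept for profiling is the feature vector, which (as the abstract notes) mixes phenotypic signal with confounders unless the training data is experimentally diverse.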
Citation
Moshkov, N., Bornholdt, M., Benoit, S., Smith, M., McQuin, C., Goodman, A., … Caicedo, J. C. (2024). Learning representations for image-based profiling of perturbations. Nature Communications, 15(1). https://doi.org/10.1038/s41467-024-45999-1