Brain Imaging Generation with Latent Diffusion Models

Abstract

Deep neural networks have brought remarkable breakthroughs in medical image analysis. However, due to their data-hungry nature, the modest dataset sizes typical of medical imaging projects may be hindering their full potential. Generating synthetic data offers a promising alternative, making it possible to complement training datasets and to conduct medical imaging research at a larger scale. Diffusion models have recently caught the attention of the computer vision community by producing photorealistic synthetic images. In this study, we explore the use of Latent Diffusion Models to generate synthetic images from high-resolution 3D brain images. We used T1-weighted MRI images from the UK Biobank dataset (N = 31,740) to train our models to learn the probabilistic distribution of brain images, conditioned on covariates such as age, sex, and brain structure volumes. We found that our models created realistic data, and that the conditioning variables could be used to control the data generation effectively. In addition, we created a synthetic dataset of 100,000 brain images and made it openly available to the scientific community.
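For readers unfamiliar with the approach described in the abstract, the sketch below illustrates the core idea in minimal PyTorch: a diffusion denoiser trained on pre-computed latents, with the covariates (age, sex, volumes) injected as a conditioning vector. This is an illustrative stand-in, not the authors' implementation; the paper's full model works on 3D brain volumes compressed by an autoencoder and uses a far larger denoising network, whereas the class names, network shape, and hyperparameters here are hypothetical.

```python
# Minimal illustrative sketch (not the authors' code): a conditional denoiser
# operating on pre-computed latents, with covariates injected as an embedding.
import torch
import torch.nn as nn

class ConditionalDenoiser(nn.Module):
    def __init__(self, latent_dim=64, cond_dim=4, hidden=256):
        super().__init__()
        self.cond_embed = nn.Sequential(nn.Linear(cond_dim, hidden), nn.SiLU())
        self.time_embed = nn.Sequential(nn.Linear(1, hidden), nn.SiLU())
        self.net = nn.Sequential(
            nn.Linear(latent_dim + 2 * hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, latent_dim),
        )

    def forward(self, z_t, t, cond):
        # Predict the noise added to latent z_t at timestep t, given covariates.
        h = torch.cat([z_t, self.time_embed(t), self.cond_embed(cond)], dim=-1)
        return self.net(h)

def training_step(model, z0, cond, alphas_cumprod):
    # One DDPM-style step on latents z0 (e.g. produced by a pretrained autoencoder).
    b, T = z0.shape[0], alphas_cumprod.shape[0]
    t = torch.randint(0, T, (b,))
    a_bar = alphas_cumprod[t].unsqueeze(-1)                 # cumulative alpha per sample
    noise = torch.randn_like(z0)
    z_t = a_bar.sqrt() * z0 + (1 - a_bar).sqrt() * noise    # forward diffusion
    pred = model(z_t, t.float().unsqueeze(-1) / T, cond)
    return torch.mean((pred - noise) ** 2)                  # noise-prediction loss

# Example usage with random tensors standing in for encoded brain latents.
model = ConditionalDenoiser()
betas = torch.linspace(1e-4, 0.02, 1000)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)
z0 = torch.randn(8, 64)       # latents from the autoencoder (illustrative size)
cond = torch.randn(8, 4)      # e.g. age, sex, and two brain-structure volumes
loss = training_step(model, z0, cond, alphas_cumprod)
loss.backward()
```

At sampling time, the same conditioning vector is supplied at every reverse-diffusion step, which is what allows the covariates to steer the generated images.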

Citation (APA)

Pinaya, W. H. L., Tudosiu, P. D., Dafflon, J., Da Costa, P. F., Fernandez, V., Nachev, P., … Cardoso, M. J. (2022). Brain Imaging Generation with Latent Diffusion Models. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13609 LNCS, pp. 117–126). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-18576-2_12
