Template-based multimodal joint generative model of brain data

20 citations · 30 readers

Abstract

The advent of large multi-modal imaging databases opens up the opportunity to learn how local intensity patterns covary across multiple modalities. Such models can then be used to describe the expected intensities in an unseen image modality given one or more observed modalities, or to detect deviations (e.g. pathology) from the expected intensity patterns. In this work, we propose a template-based multi-modal generative mixture model of imaging data and apply it to the problems of inlier/outlier pattern classification and image synthesis. Results on synthetic and patient data demonstrate that the proposed method is able to synthesise unseen data and accurately localise pathological regions, even in the presence of large abnormalities. They also demonstrate that the proposed model can provide accurate and uncertainty-aware intensity estimates of expected imaging patterns.
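
A minimal sketch of the core idea, for illustration only: the paper's model is template-based and spatially aware, whereas the toy below fits a plain joint Gaussian mixture over paired voxel intensities from two hypothetical modalities (A and B), then (a) synthesises B from A via the posterior-weighted conditional mean, (b) reports uncertainty via the conditional variance, and (c) flags outlier (e.g. pathological) intensity pairs via low joint log-likelihood. All data, names, and parameters here are invented for the example, not taken from the paper.

import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Hypothetical paired training intensities: columns = (modality A, modality B).
n = 5000
a = rng.normal(100.0, 15.0, n)
b = 0.8 * a + rng.normal(20.0, 5.0, n)  # B covaries with A
gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0)
gmm.fit(np.column_stack([a, b]))

def synthesise_b_given_a(x_a):
    """Posterior-weighted conditional mean and variance of B given A = x_a."""
    mus, covs, w = gmm.means_, gmm.covariances_, gmm.weights_
    K = len(w)
    # Responsibility of each mixture component for the observed A value.
    lik = np.array([w[k] * norm.pdf(x_a, mus[k, 0], np.sqrt(covs[k, 0, 0]))
                    for k in range(K)])
    resp = lik / lik.sum(axis=0)
    # Per-component conditional Gaussian of B | A (standard Gaussian identity).
    c_mu = np.array([mus[k, 1] + covs[k, 1, 0] / covs[k, 0, 0] * (x_a - mus[k, 0])
                     for k in range(K)])
    c_var = np.array([covs[k, 1, 1] - covs[k, 1, 0] ** 2 / covs[k, 0, 0]
                      for k in range(K)])
    mu = (resp * c_mu).sum(axis=0)
    # Law of total variance across components gives the predictive uncertainty.
    var = (resp * (c_var[:, None] + c_mu ** 2)).sum(axis=0) - mu ** 2
    return mu, var

mu, var = synthesise_b_given_a(np.array([80.0, 100.0, 120.0]))
print("synthesised B:", mu, "+/-", np.sqrt(var))

# Outlier detection: intensity pairs that are unlikely under the fitted model
# (e.g. lesions) fall below a log-likelihood threshold set on the training data.
thresh = np.quantile(gmm.score_samples(np.column_stack([a, b])), 0.01)
test = np.array([[100.0, 100.0], [100.0, 250.0]])  # second pair is abnormal
print("outlier:", gmm.score_samples(test) < thresh)

The conditional-mean step is what makes the same fitted density serve both synthesis and uncertainty estimation; the thresholded log-likelihood gives a simple stand-in for the inlier/outlier classification described in the abstract.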

Citation (APA)

Cardoso, M. J., Sudre, C. H., Modat, M., & Ourselin, S. (2015). Template-based multimodal joint generative model of brain data. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9123, pp. 17–29). Springer Verlag. https://doi.org/10.1007/978-3-319-19992-4_2
