A probabilistic, non-parametric framework for inter-modality label fusion


Abstract

Multi-atlas techniques are commonplace in medical image segmentation due to their high performance and ease of implementation. Locally weighting the contributions from the different atlases in the label fusion process can improve the quality of the segmentation. However, how to define these weights in a principled way in inter-modality scenarios remains an open problem. Here we propose a label fusion scheme that does not require voxel intensity consistency between the atlases and the target image to segment. The method is based on a generative model of image data in which each intensity in the atlases has an associated conditional distribution of corresponding intensities in the target. The segmentation is computed using variational expectation maximization (VEM) in a Bayesian framework. The method was evaluated with a dataset of eight proton density weighted brain MRI scans with nine labeled structures of interest. The results show that the algorithm outperforms majority voting and a recently published inter-modality label fusion algorithm. © 2013 Springer-Verlag.

Citation (APA)

Iglesias, J. E., Sabuncu, M. R., & Van Leemput, K. (2013). A probabilistic, non-parametric framework for inter-modality label fusion. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 8151 LNCS, pp. 576–583). https://doi.org/10.1007/978-3-642-40760-4_72
