Robust medical images segmentation using learned shape and appearance models

Abstract

We propose a novel parametric deformable model controlled by shape and visual appearance priors learned from a training subset of co-aligned medical images of goal objects. The shape prior is derived from a linear combination of vectors of distances between the training boundaries and their common centroid. The appearance prior treats gray levels within each training boundary as a sample of a Markov-Gibbs random field with pairwise interaction; spatially homogeneous interaction geometry and Gibbs potentials are estimated analytically from the training data. To accurately separate a goal object from an arbitrary background, the empirical marginal gray-level distributions inside and outside the boundary are modeled with adaptive linear combinations of discrete Gaussians (LCDG). Owing to the analytical shape and appearance priors and a simple Expectation-Maximization procedure for estimating the object and background LCDG, our segmentation is considerably faster than most known geometric and parametric models. Experiments with various goal images confirm the robustness, accuracy, and speed of our approach. © 2009 Springer-Verlag.
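The two learned priors in the abstract lend themselves to short illustrations. First, the shape prior: each co-aligned training boundary is encoded as a vector of distances to the common centroid, and new shapes are drawn from a linear combination of those vectors. The sketch below is a minimal interpretation of that idea, assuming angular resampling of the boundary and a PCA-style basis for the linear combination; the function names, the number of bins, and the number of modes are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def boundary_to_distance_vector(points, n_bins=64):
    """Encode a closed 2-D boundary as a fixed-length vector of
    centroid-to-boundary distances, one per angular bin.
    (Angular resampling is an assumption; the paper only states that
    boundaries are encoded as distances to the common centroid.)"""
    centroid = points.mean(axis=0)
    offsets = points - centroid
    angles = np.arctan2(offsets[:, 1], offsets[:, 0])   # in [-pi, pi]
    radii = np.linalg.norm(offsets, axis=1)
    bins = np.clip(((angles + np.pi) / (2 * np.pi) * n_bins).astype(int),
                   0, n_bins - 1)
    dist = np.full(n_bins, np.nan)
    for b in range(n_bins):
        hit = bins == b
        if hit.any():
            dist[b] = radii[hit].mean()
    # Interpolate across empty angular bins so every vector has full length.
    idx, good = np.arange(n_bins), ~np.isnan(dist)
    dist[~good] = np.interp(idx[~good], idx[good], dist[good], period=n_bins)
    return dist

def learn_shape_prior(training_boundaries, n_modes=5):
    """Mean distance vector plus the top principal modes of variation
    over the co-aligned training set; a prior-consistent shape is then
    mean + coefficients @ modes (one concrete form of the paper's
    linear combination of distance vectors)."""
    D = np.stack([boundary_to_distance_vector(b) for b in training_boundaries])
    mean = D.mean(axis=0)
    _, _, vt = np.linalg.svd(D - mean, full_matrices=False)
    return mean, vt[:n_modes]
```

Second, the appearance marginals: the paper fits the empirical gray-level distributions inside and outside the boundary with adaptive LCDGs, whose components may carry negative weights and require the authors' modified EM. The sketch below deliberately substitutes a standard positive-weight Gaussian mixture fitted by plain EM (scikit-learn's `GaussianMixture`) as a stand-in; the likelihood-ratio labeling only illustrates how two fitted marginals separate object from background.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_marginal(gray_values, n_components=4):
    """Approximate an empirical gray-level marginal with a Gaussian
    mixture fitted by standard EM. Simplification: the paper's LCDG
    also admits sign-alternating components and uses a modified EM."""
    gm = GaussianMixture(n_components=n_components)
    gm.fit(np.asarray(gray_values, dtype=float).reshape(-1, 1))
    return gm

def classify_pixels(image, inside_model, outside_model):
    """Label each pixel as object (True) or background (False) by the
    log-likelihood ratio of the two fitted marginals."""
    g = np.asarray(image, dtype=float).reshape(-1, 1)
    log_in = inside_model.score_samples(g)
    log_out = outside_model.score_samples(g)
    return (log_in > log_out).reshape(np.asarray(image).shape)
```

In the full method these per-pixel likelihoods, the MGRF appearance prior, and the shape-space coefficients jointly drive the parametric deformable model; the sketches above only isolate the two learned ingredients described in the abstract.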

Citation (APA)

El-Baz, A., & Gimel’farb, G. (2009). Robust medical images segmentation using learned shape and appearance models. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 5761 LNCS, pp. 281–288). https://doi.org/10.1007/978-3-642-04268-3_35
