Learning coupled prior shape and appearance models for segmentation

Abstract

We present a novel framework for learning a joint shape and appearance model from a large set of unlabelled training examples in arbitrary positions and orientations. The shape and intensity spaces are unified by implicitly representing shapes as "images" in the space of distance transforms. A stochastic chord-based matching algorithm is developed to align photo-realistic training examples under a common reference frame. Dense local deformation fields, represented using cubic B-spline-based Free Form Deformations (FFD), are then recovered to register the training examples in both shape and intensity spaces. Principal Component Analysis (PCA) is applied to the FFD control lattices to capture shape variation, and to the registered object-interior textures to capture appearance variation. We show examples where we have built coupled shape and appearance prior models for the left ventricle and the whole heart in short-axis cardiac tagged MR images, and used them to delineate the heart chambers in noisy, cluttered images. We also quantitatively validate the automatic segmentation results by comparing them to expert solutions. © Springer-Verlag Berlin Heidelberg 2004.
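Two of the steps summarized above lend themselves to a short illustration: embedding a binary shape as a signed distance "image", and applying PCA to vectorized FFD control lattices to extract the leading modes of deformation. The Python sketch below is not the authors' implementation; the helper names (signed_distance_transform, pca_on_control_lattices), the 8×8 toy lattice, and the array shapes are illustrative assumptions, shown only to make the pipeline concrete.

```python
# Minimal sketch, assuming numpy/scipy, of two ideas from the abstract:
# (1) implicit shape representation via a signed distance transform, and
# (2) PCA over flattened FFD control lattices to capture shape variation.
import numpy as np
from scipy.ndimage import distance_transform_edt


def signed_distance_transform(mask: np.ndarray) -> np.ndarray:
    """Embed a binary shape in the space of distance transforms:
    negative inside the object, positive outside, zero on the boundary."""
    inside = distance_transform_edt(mask)
    outside = distance_transform_edt(~mask)
    return outside - inside


def pca_on_control_lattices(lattices: np.ndarray, n_modes: int = 5):
    """PCA over a set of FFD control lattices.

    lattices: (n_examples, n_control_points * dim) array, each row the
    flattened B-spline control-point displacements of one training example.
    Returns the mean lattice, the leading modes, and their variances."""
    mean = lattices.mean(axis=0)
    centered = lattices - mean
    # SVD of the centered data yields the principal modes of deformation.
    _, singular_values, modes = np.linalg.svd(centered, full_matrices=False)
    variances = singular_values ** 2 / (len(lattices) - 1)
    return mean, modes[:n_modes], variances[:n_modes]


if __name__ == "__main__":
    # Toy shape: a disc on a 64x64 grid, embedded as a signed distance image.
    yy, xx = np.mgrid[:64, :64]
    mask = (xx - 32) ** 2 + (yy - 32) ** 2 < 15 ** 2
    phi = signed_distance_transform(mask)
    print("distance at centre:", phi[32, 32], " at corner:", phi[0, 0])

    # Toy FFD data: random control lattices standing in for recovered FFDs.
    rng = np.random.default_rng(0)
    lattices = rng.normal(size=(20, 8 * 8 * 2))  # 8x8 lattice, 2-D displacements
    mean, modes, var = pca_on_control_lattices(lattices, n_modes=3)
    print("leading mode variances:", var)
```

In the paper's setting, the rows of the lattice matrix would come from FFD registrations of the aligned training examples rather than random noise, and an analogous PCA would be run on the registered interior intensity patterns to build the appearance model.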

Citation (APA)
Huang, X., Li, Z., & Metaxas, D. (2004). Learning coupled prior shape and appearance models for segmentation. In Lecture Notes in Computer Science (Vol. 3216, pp. 60–69). Springer Verlag. https://doi.org/10.1007/978-3-540-30135-6_8
