3D image segmentation of deformable objects with shape-appearance joint prior models

Abstract

We propose a novel method for 3D image segmentation that employs a Bayesian formulation based on joint prior knowledge of the shape and the image gray levels, along with information derived from the input image. Our method is motivated by the observation that the shape of an object and the gray level variation in an image have consistent relations that provide configurations and context that aid segmentation. We define a Maximum A Posteriori (MAP) estimation model using the joint prior information of the shape and image gray levels to realize image segmentation. We introduce a representation for the joint density function of the object and the image gray level values, and define a joint probability distribution over the variations of object shape and gray levels contained in a set of training images. By estimating the MAP shape of the object, we formulate the shape-appearance model in terms of a level set function rather than landmark points of the shape. We found the algorithm to be robust to noise, able to handle multidimensional data, and free of the need for point correspondences during the training phase. Results and validation from various experiments on 2D/3D medical images are presented. © Springer-Verlag Berlin Heidelberg 2003.
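As a rough sketch of the formulation described in the abstract (the notation is illustrative and may differ in detail from the paper), the MAP estimate of the object shape, represented by a level set function \(\phi\), given an image \(I\) and a joint shape-appearance density \(p(\phi, I)\) learned from the training set, can be written as:

\[
\hat{\phi}_{\mathrm{MAP}} \;=\; \arg\max_{\phi}\; p(\phi \mid I) \;=\; \arg\max_{\phi}\; \frac{p(\phi, I)}{p(I)} \;\propto\; \arg\max_{\phi}\; p(\phi, I)
\]

Because \(p(I)\) does not depend on \(\phi\), maximizing the posterior reduces to maximizing the joint density of the level set shape and the image gray levels, which is the joint shape-appearance prior estimated from the training images.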

Cite

APA

Yang, J., & Duncan, J. S. (2003). 3D image segmentation of deformable objects with shape-appearance joint prior models. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2878, 573–580. https://doi.org/10.1007/978-3-540-39899-8_71
