We propose a novel method for 3D image segmentation that employs a Bayesian formulation based on joint prior knowledge of object shape and image gray levels, together with information derived from the input image. Our method is motivated by the observation that the shape of an object and the gray-level variation in an image are consistently related, providing configurations and context that aid segmentation. We define a Maximum A Posteriori (MAP) estimation model using the joint prior information of the shape and image gray levels to realize image segmentation. We introduce a representation for the joint density function of the object and the image gray-level values, and define a joint probability distribution over the variations of object shape and the gray levels contained in a set of training images. By estimating the MAP shape of the object, we formulate the shape-appearance model in terms of a level set function rather than landmark points on the shape. We found the algorithm to be robust to noise, capable of handling multidimensional data, and free of the need for point correspondences during the training phase. Results and validation from various experiments on 2D/3D medical images are presented. © Springer-Verlag Berlin Heidelberg 2003.
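To make the idea above concrete, the following is a minimal toy sketch (not the authors' implementation) of a MAP-style level-set update that combines a region-based gray-level term with a pull toward a prior shape encoded as a level set. All names here (`phi`, `phi_prior`, `lam`, `map_level_set_step`) are hypothetical, and the data term is a simple Chan-Vese-like two-region model standing in for the paper's learned joint shape-appearance density:

```python
import numpy as np

def heaviside(phi, eps=1.0):
    # Smooth Heaviside: splits the image into inside (phi > 0) / outside regions.
    return 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(phi / eps))

def map_level_set_step(phi, image, phi_prior, lam=0.5, dt=0.2):
    """One gradient-descent step on a toy MAP-style energy.

    Data term: Chan-Vese-like region means of the gray levels (stand-in for
    the image likelihood). Prior term: quadratic pull toward phi_prior,
    i.e. a Gaussian shape prior on the level set function (assumption).
    """
    H = heaviside(phi)
    c_in = (image * H).sum() / (H.sum() + 1e-8)              # mean gray level inside
    c_out = (image * (1 - H)).sum() / ((1 - H).sum() + 1e-8)  # mean gray level outside
    data_force = (image - c_out) ** 2 - (image - c_in) ** 2   # region competition
    prior_force = lam * (phi_prior - phi)                     # shape-prior pull
    return phi + dt * (data_force + prior_force)
```

As in the paper's level-set formulation, no point correspondences are needed: the prior acts directly on the implicit function, so the same update applies unchanged in 2D or 3D.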
Yang, J., & Duncan, J. S. (2003). 3D image segmentation of deformable objects with shape-appearance joint prior models. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2878, 573–580. https://doi.org/10.1007/978-3-540-39899-8_71