Multi-modal vertebra segmentation from MR Dixon for hybrid whole-body PET/MR


Abstract

In this paper, a novel model-based segmentation of the vertebrae is introduced that uses multi-modal image features from Dixon MR images (i.e., water/fat separated). Our primary application is the segmentation of the bony anatomy for the generation of attenuation maps in hybrid PET/MR imaging systems. The focus of this work is on the geometric accuracy of the segmentation from MR. From ground-truth structure delineations on training data sets, image features for a model-based segmentation are trained on both the water and fat images of the Dixon series. For the actual segmentation, both features are used simultaneously to improve robustness and accuracy compared to single-image segmentations. The method is validated on 25 patients by comparing the results to semi-automatically generated ground-truth annotations. A mean surface distance error of 1.69 mm over all vertebrae is achieved, an improvement of up to 41% compared to using a single image alone.
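The reported 1.69 mm figure is a mean surface distance between the automatic segmentation and the ground-truth annotation. As a rough illustration of how such a metric is typically computed (a generic symmetric nearest-neighbour sketch, not the authors' implementation; point sets and units are assumed), it can be evaluated over the surface points of the two vertebra meshes:

```python
from math import dist

def mean_surface_distance(pts_a, pts_b):
    """Symmetric mean surface distance between two surfaces,
    each given as a list of 3-D points (e.g. mesh vertices, in mm).
    Brute-force O(n*m) nearest-neighbour search; real pipelines
    would use a spatial index (k-d tree) instead."""
    # Mean distance from each point of A to its nearest point on B ...
    d_ab = sum(min(dist(p, q) for q in pts_b) for p in pts_a) / len(pts_a)
    # ... and from each point of B to its nearest point on A.
    d_ba = sum(min(dist(p, q) for q in pts_a) for p in pts_b) / len(pts_b)
    # Average the two directed means to make the metric symmetric.
    return 0.5 * (d_ab + d_ba)

# Toy example: surface B is surface A shifted by 1 mm along z.
a = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
b = [(0.0, 0.0, 1.0), (1.0, 0.0, 1.0)]
print(mean_surface_distance(a, b))  # → 1.0
```

In the paper's validation, this kind of error is averaged over all vertebrae of the 25 test patients.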

Citation (APA)

Buerger, C., Peters, J., Waechter-Stehle, I., Weber, F. M., Klinder, T., & Renisch, S. (2014). Multi-modal vertebra segmentation from MR Dixon for hybrid whole-body PET/MR. Lecture Notes in Computational Vision and Biomechanics, 17, 159–171. https://doi.org/10.1007/978-3-319-07269-2_14
