Atlas-guided multi-channel forest learning for human brain labeling


Abstract

Labeling MR brain images into anatomically meaningful regions is important in quantitative brain research. Previous work can be roughly categorized into two classes, multi-atlas based and learning based labeling methods, each with its own limitations. In multi-atlas based methods, the label fusion step is handcrafted around predefined similarity metrics between voxels in the target and atlas images. In learning based methods, the spatial correspondence information encoded in the atlases is lost, since classification typically uses only the target image appearance. In this paper, we propose a novel atlas-guided multi-channel forest learning method that effectively addresses both limitations. Instead of handcrafting the label fusion step, we learn a non-linear classification forest that automatically fuses the image appearance and label information of the atlas with the image appearance of the target image. Validated on the LONI LPBA40 dataset, our method outperforms several traditional labeling approaches.
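
The multi-channel idea described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes voxel-wise cubic patches, a standard random forest classifier (scikit-learn), and a feature vector built by concatenating the target-image appearance patch with the appearance and label patches of a registered atlas. The patch size, forest parameters, and helper names are hypothetical.

```python
# Sketch only: multi-channel features (target appearance + atlas appearance
# + atlas labels) feeding a non-linear classification forest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

PATCH_RADIUS = 3  # assumed cubic patch radius, in voxels


def patch_features(volume, center, radius=PATCH_RADIUS):
    """Flatten the cubic patch around `center` into a 1-D feature vector."""
    x, y, z = center
    return volume[x - radius:x + radius + 1,
                  y - radius:y + radius + 1,
                  z - radius:z + radius + 1].ravel()


def multi_channel_features(target_img, atlas_img, atlas_labels, center):
    """Concatenate the three channels at one voxel of the target image."""
    return np.concatenate([
        patch_features(target_img, center),    # target image appearance
        patch_features(atlas_img, center),     # warped atlas appearance
        patch_features(atlas_labels, center),  # warped atlas label map
    ])


def train_forest(feature_vectors, voxel_labels):
    """Fit a classification forest on the multi-channel training samples."""
    forest = RandomForestClassifier(n_estimators=100, max_depth=20)
    forest.fit(np.vstack(feature_vectors), np.asarray(voxel_labels))
    return forest
```

At test time, the learned forest predicts a label for each target voxel from the same three-channel feature vector, so the atlas's spatial label information enters the classifier directly rather than through a handcrafted similarity metric.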

Citation (APA)

Ma, G., Gao, Y., Wu, G., Wu, L., & Shen, D. (2014). Atlas-guided multi-channel forest learning for human brain labeling. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 8848, pp. 97–104). Springer Verlag. https://doi.org/10.1007/978-3-319-13972-2_9
