Labeling MR brain images into anatomically meaningful regions is important in many quantitative brain studies. Many existing label fusion methods rely heavily on appearance information. Meanwhile, recent progress in computer vision suggests that context features are very useful for identifying an object in a complex scene. In light of this, we propose a novel learning-based label fusion method that uses both low-level appearance features (computed from the target image) and high-level context features (computed from warped atlases or tentative labeling maps of the target image). In particular, we employ a multi-channel random forest to learn the nonlinear relationship between these hybrid features and the target labels (i.e., corresponding to certain anatomical structures). Moreover, to accommodate high inter-subject variation, we extend our learning-based label fusion to a multi-atlas scenario: we train a random forest for each atlas and then obtain the final labeling result from the consensus of all atlases. We have comprehensively evaluated our method on both the LONI LPBA40 and IXI datasets and achieved higher labeling accuracy than state-of-the-art methods in the literature.
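To make the per-atlas training and consensus fusion concrete, below is a minimal sketch in Python with scikit-learn. It is not the authors' implementation: the hybrid feature is simplified to a single intensity value plus a single context label per voxel, a standard RandomForestClassifier stands in for the multi-channel forest, and all data, helper names (hybrid_features, noisy), and parameters are hypothetical.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_voxels, n_atlases = 2000, 3

def hybrid_features(intensity, context_labels):
    # Per-voxel hybrid feature: a low-level appearance value (intensity)
    # concatenated with a high-level context value (a label taken from a
    # warped atlas or tentative labeling map). The paper's features are
    # far richer; this toy version only keeps the structure visible.
    return np.column_stack([intensity, context_labels])

def noisy(labels, p=0.2):
    # Flip each binary label with probability p, mimicking registration
    # error in a warped atlas or an imperfect tentative labeling.
    return np.where(rng.random(labels.shape) < p, 1 - labels, labels)

# Hypothetical atlases: each has a binary label map and an intensity
# image correlated with it.
atlas_labels = [rng.integers(0, 2, n_voxels) for _ in range(n_atlases)]
atlas_images = [lab + rng.normal(scale=0.5, size=n_voxels) for lab in atlas_labels]

# Train one forest per atlas on hybrid features; the context channel is a
# perturbed copy of the atlas's own labels, standing in for a tentative map.
forests = [
    RandomForestClassifier(n_estimators=50, random_state=0)
    .fit(hybrid_features(img, noisy(lab)), lab)
    for img, lab in zip(atlas_images, atlas_labels)
]

# Unseen target: its true labels are used only to score the result.
target_labels = rng.integers(0, 2, n_voxels)
target_image = target_labels + rng.normal(scale=0.5, size=n_voxels)
warped = [noisy(target_labels) for _ in range(n_atlases)]  # per-atlas warps

# Consensus fusion: average the per-atlas class posteriors, then argmax.
posteriors = np.mean(
    [f.predict_proba(hybrid_features(target_image, w))
     for f, w in zip(forests, warped)], axis=0)
fused = posteriors.argmax(axis=1)
print("fused labeling accuracy:", (fused == target_labels).mean())

Averaging the per-atlas class posteriors before the argmax is one simple consensus rule, standing in here for the consensus step described above.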
Ma, G., Gao, Y., Wu, G., Wu, L., & Shen, D. (2015). Non-local atlas-guided multi-channel forest learning for human brain labeling. In Lecture Notes in Computer Science (Vol. 9351, pp. 719–726). Springer. https://doi.org/10.1007/978-3-319-24574-4_86