Cross-modality anatomical landmark detection using histograms of unsigned gradient orientations and atlas location autocontext


Abstract

A proof of concept is presented for cross-modality anatomical landmark detection using histograms of unsigned gradient orientations (HUGO) as machine learning image features. This has utility since an existing algorithm trained on data from one modality may be applied to data of a different modality, or data from multiple modalities may be pooled to train one modality-independent algorithm. Landmark detection is performed using a random forest trained on HUGO features and atlas location autocontext features. Three-way cross-modality detection of 20 landmarks is demonstrated in diverse cohorts of CT, MRI T1 and MRI T2 scans of the head. Each cohort is made up of 40 training and 20 test scans, making 180 scans in total. A cross-modality mean landmark error of 5.27 mm is achieved, compared to a single-modality mean error of 4.07 mm.
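The "unsigned" part of the feature can be illustrated with a short sketch. Folding gradient orientations into [0, 180) degrees means an edge falls in the same histogram bin regardless of which side is brighter, which is why such a feature can survive the contrast inversions that occur between CT, T1 and T2 images. The function below is a minimal HOG-style illustration of that idea in Python; the bin count, magnitude weighting and normalization are assumptions made for exposition, not the configuration used in the paper.

    import numpy as np

    def hugo_patch_descriptor(patch, n_bins=9):
        """Illustrative histogram of unsigned gradient orientations.

        'patch' is a 2D array of intensities. This is a sketch of the
        general idea, not the authors' implementation; the function name
        and parameters are hypothetical.
        """
        gy, gx = np.gradient(patch.astype(float))
        magnitude = np.hypot(gx, gy)
        # arctan2 spans (-180, 180]; folding modulo 180 makes opposite
        # gradient polarities (bright-to-dark vs dark-to-bright) share a bin.
        orientation = np.degrees(np.arctan2(gy, gx)) % 180.0
        # Quantize orientations into n_bins bins over [0, 180).
        bin_idx = np.minimum((orientation * n_bins / 180.0).astype(int), n_bins - 1)
        # Magnitude-weighted histogram, L2-normalized.
        hist = np.bincount(bin_idx.ravel(), weights=magnitude.ravel(), minlength=n_bins)
        norm = np.linalg.norm(hist)
        return hist / norm if norm > 0 else hist

In the paper's pipeline such histograms feed a random forest, and the autocontext stage then, roughly, feeds estimated atlas locations from a first pass back in as additional features for a second pass; the sketch above covers only the appearance-feature side.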

Citation (APA)

O’Neil, A., Dabbah, M., & Poole, I. (2016). Cross-modality anatomical landmark detection using histograms of unsigned gradient orientations and atlas location autocontext. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10019 LNCS, pp. 139–146). Springer Verlag. https://doi.org/10.1007/978-3-319-47157-0_17
