Learning-based local-to-global landmark annotation for automatic 3D cephalometry

Abstract

The annotation of three-dimensional (3D) cephalometric landmarks in 3D computed tomography (CT) has become an essential part of cephalometric analysis, which is used for diagnosis, surgical planning, and treatment evaluation. The automation of 3D landmarking with high precision remains challenging due to the limited availability of training data and the high computational burden. This paper addresses these challenges by proposing a hierarchical deep-learning method consisting of four stages: 1) a basic landmark annotator for 3D skull pose normalization, 2) a deep-learning-based coarse-to-fine landmark annotator on the midsagittal plane, 3) a low-dimensional representation of the full set of landmarks using a variational autoencoder (VAE), and 4) a local-to-global landmark annotator. The VAE enables two-dimensional-image-based learning of 3D morphological features and similarity/dissimilarity representation learning on the concatenated vectors of cephalometric landmarks. The proposed method achieves an average 3D point-to-point error of 3.63 mm for 93 cephalometric landmarks using only a small number of training CT datasets. Notably, the VAE captures variations in craniofacial structural characteristics.
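
As a rough illustration of stage 3, the sketch below shows a VAE that compresses the concatenated landmark vector (93 landmarks × 3 coordinates = 279 dimensions) into a low-dimensional latent code. This is a minimal sketch assuming a PyTorch implementation; the class name LandmarkVAE, the layer widths, the latent dimension, and the KL weight are illustrative assumptions, not the paper's reported architecture or training settings.

```python
# Minimal sketch (assumed PyTorch implementation) of stage 3: a VAE that
# compresses the concatenated 3D landmark vector (93 landmarks x 3
# coordinates = 279-D) into a low-dimensional latent code. Layer widths,
# latent size, and the KL weight are illustrative, not the paper's values.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LandmarkVAE(nn.Module):
    def __init__(self, n_landmarks=93, latent_dim=16):
        super().__init__()
        d = n_landmarks * 3  # flattened (x, y, z) coordinates
        self.encoder = nn.Sequential(nn.Linear(d, 128), nn.ReLU())
        self.fc_mu = nn.Linear(128, latent_dim)      # posterior mean
        self.fc_logvar = nn.Linear(128, latent_dim)  # posterior log-variance
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, d)
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterize
        return self.decoder(z), mu, logvar


def vae_loss(recon, x, mu, logvar, beta=1.0):
    """Reconstruction error on landmark coordinates plus KL divergence
    to the standard-normal prior."""
    recon_err = F.mse_loss(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_err + beta * kl


# Hypothetical usage: encode a batch of 8 landmark configurations.
x = torch.randn(8, 93 * 3)
model = LandmarkVAE()
recon, mu, logvar = model(x)
loss = vae_loss(recon, x, mu, logvar)
```

Because similar craniofacial configurations map to nearby latent codes, distances in this latent space can serve as the similarity/dissimilarity representation described above.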

Citation (APA)

Yun, H. S., Jang, T. J., Lee, S. M., Lee, S. H., & Seo, J. K. (2020). Learning-based local-to-global landmark annotation for automatic 3D cephalometry. Physics in Medicine and Biology, 65(8). https://doi.org/10.1088/1361-6560/ab7a71
