This paper presents a new approach to automatic three-dimensional (3D) cephalometric annotation for diagnosis, surgical planning, and treatment evaluation. Automated cephalometric landmarking has long been in demand, since manual landmarking is time-consuming, requires substantial experience, and demands objectivity and scrupulous error avoidance. Owing to the inherent limitations of two-dimensional (2D) cephalometry and the 3D nature of surgical simulation, the field is shifting from 2D to 3D cephalometry. Deep learning approaches to cephalometric landmarking appear highly promising, but handling high-dimensional 3D CT data, where the dimension refers to the number of voxels, remains a serious difficulty. To address this dimensionality problem, this paper proposes a shadowed 2D image-based machine learning method that uses multiple shadowed 2D images, rendered with various lighting and view directions, to capture 3D geometric cues. The proposed method, built on VGG-Net, was trained and tested on 2700 shadowed 2D images and the corresponding manual landmark annotations. Evaluation on test data shows that the method achieved an average point-to-point error of 1.5 mm for the seven major landmarks.
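The sketch below illustrates the two ideas the abstract describes, not the authors' actual pipeline: (1) rendering a "shadowed" 2D image of a CT volume for a chosen light direction, here approximated with a depth map and simple Lambertian shading, and (2) regressing landmark coordinates from that image with a VGG-16 backbone. The volume shape, bone threshold, shading model, and regression head are all illustrative assumptions.

```python
# A minimal, hedged sketch under the assumptions stated above.
import numpy as np
import torch
import torch.nn as nn
from torchvision.models import vgg16


def shadowed_projection(volume, light=(1.0, 1.0, 1.0), bone_hu=300.0):
    """Depth-shaded projection of a (D, H, W) HU volume along axis 0.

    Surface normals are approximated from the depth-map gradient, a common
    shading trick; `bone_hu` is an assumed bone threshold, not the paper's.
    """
    mask = volume > bone_hu
    # Depth of the first bone voxel along each ray; background rays get D.
    depth = np.where(mask.any(axis=0), mask.argmax(axis=0), volume.shape[0])
    gy, gx = np.gradient(depth.astype(np.float32))
    normals = np.stack([np.ones_like(gy), -gy, -gx], axis=-1)
    normals /= np.linalg.norm(normals, axis=-1, keepdims=True) + 1e-8
    l = np.asarray(light, dtype=np.float32)
    l /= np.linalg.norm(l)
    shade = np.clip(normals @ l, 0.0, 1.0)   # Lambertian term in [0, 1]
    shade[depth == volume.shape[0]] = 0.0    # background stays dark
    return shade.astype(np.float32)


class VGGLandmarkNet(nn.Module):
    """VGG-16 feature extractor plus a small (x, y) regression head."""

    def __init__(self, n_landmarks=7):
        super().__init__()
        self.n_landmarks = n_landmarks
        self.backbone = vgg16().features
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(512, 256), nn.ReLU(),
            nn.Linear(256, n_landmarks * 2),
        )

    def forward(self, x):  # x: (B, 3, H, W) shadowed images
        return self.head(self.backbone(x)).view(-1, self.n_landmarks, 2)


if __name__ == "__main__":
    vol = np.random.uniform(-1000, 1500, size=(64, 128, 128))  # fake CT
    img = shadowed_projection(vol)
    x = torch.from_numpy(np.stack([img] * 3)).unsqueeze(0)     # 3-channel
    print(VGGLandmarkNet()(x).shape)  # torch.Size([1, 7, 2])
```

In the paper's setting, multiple such renderings with varying view and light directions would feed the network so that 2D predictions carry 3D geometric cues; the single-view regression above is only the per-image building block.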
Lee, S. M., Kim, H. P., Jeon, K., Lee, S. H., & Seo, J. K. (2019). Automatic 3D cephalometric annotation system using shadowed 2D image-based machine learning. Physics in Medicine and Biology, 64(5). https://doi.org/10.1088/1361-6560/ab00c9