Sex estimation from maxillofacial radiographs using a deep learning approach


Abstract

The purpose of this study was to construct deep learning models for more efficient and reliable sex estimation. Two deep learning models, VGG16 and DenseNet-121, were used in this retrospective study. In total, 600 lateral cephalograms were analyzed. A saliency map was generated by gradient-weighted class activation mapping (Grad-CAM) for each output. Both deep learning models achieved high values on every performance metric: accuracy, sensitivity (recall), precision, F1 score, and area under the receiver operating characteristic curve. Both models showed substantial differences between male and female images in the positions highlighted in the saliency maps. The highlighted positions also differed between VGG16 and DenseNet-121, regardless of sex. These results suggest that sex estimation from lateral cephalograms can be achieved with high accuracy using deep learning.

Citation (APA)

Hase, H., Mine, Y., Okazaki, S., Yoshimi, Y., Ito, S., Peng, T. Y., … Murayama, T. (2024). Sex estimation from maxillofacial radiographs using a deep learning approach. Dental Materials Journal, 43(3), 394–399. https://doi.org/10.4012/dmj.2023-253
