Age estimation from a single human face image is an important yet challenging task in computer vision and multimedia. Because of the large individual differences in human faces, including differences in race and gender, the performance of a learning model depends largely on its training data. Existing learning methods are challenged by datasets with too few or poor-quality images, as well as by new, low-precision data that differ from the existing training data. In this paper, we propose a learning method called the cross-dataset training convolutional neural network (CDCNN), which provides a general framework for cross-dataset training in age estimation. We adopt a convolutional neural network (CNN) with the VGG-16 architecture pretrained on ImageNet and treat age estimation as a classification problem. A softmax layer maps the classification outputs to probabilities, which are then used to refine the predicted age value. We conducted a series of experiments on the Craniofacial Longitudinal Morphological Face Database (MORPH), the Cross-Age Celebrity Dataset (CACD), and the Asian Face Age Dataset (AFAD). The results show that training simultaneously on multiple datasets with the additional labeled data achieves better performance than training on a single dataset alone. Our proposed cross-dataset training model achieves state-of-the-art results on both the AFAD and CACD age estimation benchmarks with strong generalizability.
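As a rough illustration of the pipeline the abstract describes (ImageNet-pretrained VGG-16, age treated as a class label, softmax probabilities refined into a real-valued age), the sketch below shows one common way such a model is set up in PyTorch. It is not the authors' released code; the class count `NUM_AGES`, the helper `predict_age`, and the expected-value interpretation of "value refinement" are assumptions made for the example.

```python
# Minimal sketch (assumed details, not the CDCNN reference implementation):
# VGG-16 pretrained on ImageNet, final layer replaced with one output per age
# class; the predicted age is taken as the softmax expected value.
import torch
import torch.nn as nn
from torchvision import models

NUM_AGES = 101  # assumed age label range 0..100 years

# Backbone: ImageNet-pretrained VGG-16, as described in the abstract.
backbone = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
backbone.classifier[6] = nn.Linear(4096, NUM_AGES)  # swap the 1000-way ImageNet head

def predict_age(images: torch.Tensor) -> torch.Tensor:
    """Map softmax class probabilities to a refined (expected-value) age."""
    logits = backbone(images)                          # (batch, NUM_AGES)
    probs = torch.softmax(logits, dim=1)
    ages = torch.arange(NUM_AGES, dtype=probs.dtype, device=probs.device)
    return (probs * ages).sum(dim=1)                   # real-valued age estimate

# Training would minimize a classification loss over the age labels, e.g.:
# loss = nn.CrossEntropyLoss()(backbone(images), age_labels)
```

Under this reading, cross-dataset training would simply pool labeled images from MORPH, CACD, and AFAD into a single training set for the shared network, which is consistent with the comparison reported in the abstract.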
Zhang, B., & Bao, Y. (2022). Cross-Dataset Learning for Age Estimation. IEEE Access, 10, 24048–24055. https://doi.org/10.1109/ACCESS.2022.3154403