Multiple network fusion with low-rank representation for image-based age estimation

Abstract

Image-based age estimation is a challenging task because of the ambiguity between the apparent age of a face image and the person's actual age, so data-driven methods are popular. To improve data utilization and estimation performance, we propose an image-based age estimation method whose key idea is to integrate multi-modal features of face images. To this end, we propose a multi-modal learning framework called Multiple Network Fusion with Low-Rank Representation (MNF-LRR). Different deep neural network (DNN) structures, such as autoencoders, Convolutional Neural Networks (CNNs), and Recursive Neural Networks (RNNs), are used to extract semantic information from facial images. The outputs of these networks are then represented in a low-rank feature space, where feature fusion is performed and robust multi-modal image features are computed. An experimental evaluation is conducted on two challenging face datasets for image-based age estimation, extracted from the Internet Movie Database (IMDB) and Wikipedia (WIKI). The results show the effectiveness of the proposed MNF-LRR.
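The following is a minimal sketch, not the authors' implementation, of the fusion idea described in the abstract: per-network features are stacked into one matrix and fused through a Low-Rank Representation (LRR), solved here with the standard inexact-ALM scheme for min ||Z||_* + lambda*||E||_{2,1} s.t. X = XZ + E. The random "extractor" matrices stand in for the CNN/autoencoder/RNN backbones, and the helper names (svt, l21_shrink, lrr) and parameter values (lam, rho, mu) are illustrative assumptions, not values reported in the paper.

    # Sketch of multi-network feature fusion via Low-Rank Representation (LRR).
    # NOT the authors' code; backbones are replaced by random linear stand-ins.
    import numpy as np

    def svt(M, tau):
        # Singular value thresholding: proximal operator of tau * nuclear norm.
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        return (U * np.maximum(s - tau, 0.0)) @ Vt

    def l21_shrink(M, tau):
        # Column-wise shrinkage: proximal operator of tau * ||.||_{2,1}.
        norms = np.linalg.norm(M, axis=0)
        scale = np.maximum(norms - tau, 0.0) / np.maximum(norms, 1e-12)
        return M * scale

    def lrr(X, lam=0.1, rho=1.1, mu=1e-2, mu_max=1e6, tol=1e-6, max_iter=500):
        # Inexact-ALM solver for: min ||Z||_* + lam*||E||_{2,1}, s.t. X = XZ + E.
        d, n = X.shape
        Z = np.zeros((n, n)); J = np.zeros((n, n)); E = np.zeros((d, n))
        Y1 = np.zeros((d, n)); Y2 = np.zeros((n, n))
        XtX = X.T @ X
        I = np.eye(n)
        for _ in range(max_iter):
            # Auxiliary low-rank variable J (nuclear-norm prox).
            J = svt(Z + Y2 / mu, 1.0 / mu)
            # Z update: closed-form solution of the quadratic subproblem.
            Z = np.linalg.solve(I + XtX,
                                XtX - X.T @ E + J + (X.T @ Y1 - Y2) / mu)
            # Column-sparse error term E.
            E = l21_shrink(X - X @ Z + Y1 / mu, lam / mu)
            # Dual updates and stopping test on both constraint residuals.
            R1 = X - X @ Z - E
            R2 = Z - J
            Y1 += mu * R1
            Y2 += mu * R2
            mu = min(rho * mu, mu_max)
            if max(np.abs(R1).max(), np.abs(R2).max()) < tol:
                break
        return Z, E

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        n_faces = 60
        raw = rng.normal(size=(n_faces, 512))  # stand-in face descriptors

        # Stand-ins for the multiple networks (e.g. CNN / autoencoder / RNN heads).
        extractors = [rng.normal(size=(512, k)) / np.sqrt(512) for k in (64, 48, 32)]
        feats = [raw @ W for W in extractors]   # one feature block per network

        # Stack per-network features column-wise per sample and fuse with LRR.
        X = np.hstack(feats).T                  # shape: (total_dim, n_faces)
        Z, E = lrr(X, lam=0.1)

        # Z is a low-rank affinity over samples; a symmetrized version of it
        # could feed a downstream regressor or graph-based age estimator.
        W_fused = 0.5 * (np.abs(Z) + np.abs(Z.T))
        print("fused representation:", W_fused.shape)

In this sketch the fused representation is the low-rank coefficient matrix Z over samples; how such a representation is fed into the final age regressor is a design choice of the full MNF-LRR framework and is not reproduced here.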

Cite

APA

Hong, C., Zeng, Z., Wang, X., & Zhuang, W. (2018). Multiple network fusion with low-rank representation for image-based age estimation. Applied Sciences (Switzerland), 8(9). https://doi.org/10.3390/app8091601
