Robust face recognition with deeply normalized depth images


Abstract

Depth information has been proven useful for face recognition. However, existing depth-image-based face recognition methods still suffer from noisy depth values and varying poses and expressions. In this paper, we propose a novel method for normalizing facial depth images to frontal pose and neutral expression and for extracting robust features from the normalized depth images. The method is implemented via two deep convolutional neural networks (DCNNs): a normalization network (NetN) and a feature extraction network (NetF). Given a facial depth image, NetN first converts it to an HHA image, from which the 3D face is reconstructed via a DCNN. NetN then generates a pose-and-expression-normalized (PEN) depth image from the reconstructed 3D face. The PEN depth image is finally passed to NetF, which extracts a robust feature representation via another DCNN for face recognition. Our preliminary evaluation results demonstrate the superiority of the proposed method in recognizing faces of arbitrary poses and expressions with depth images.
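
The abstract describes a two-stage data flow: raw depth image → HHA encoding → NetN (normalization to a PEN depth image) → NetF (feature extraction). The PyTorch sketch below illustrates only this data flow; the layer configurations, feature dimension, and the hha_encode stub are assumptions for illustration and are not the architectures used in the paper.

```python
# Illustrative sketch of the two-network pipeline (assumed architectures).
import torch
import torch.nn as nn

def hha_encode(depth: torch.Tensor) -> torch.Tensor:
    """Placeholder HHA encoding: stacks the raw depth into three channels.
    A real HHA encoding derives horizontal disparity, height above ground,
    and angle with gravity from the depth map and camera parameters."""
    return depth.repeat(1, 3, 1, 1)

class NetN(nn.Module):
    """Normalization network (assumed layers): maps an HHA image to a
    pose-and-expression-normalized (PEN) depth image."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),  # single-channel PEN depth map
        )
    def forward(self, hha):
        return self.body(hha)

class NetF(nn.Module):
    """Feature extraction network (assumed layers): maps a PEN depth image
    to a fixed-length feature vector used for face matching."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
    def forward(self, pen_depth):
        return self.body(pen_depth)

# Usage: raw facial depth map -> HHA image -> PEN depth image -> feature vector.
depth = torch.rand(1, 1, 128, 128)   # dummy depth image
pen = NetN()(hha_encode(depth))      # normalized to frontal pose, neutral expression
feature = NetF()(pen)                # robust representation for recognition
print(feature.shape)                 # torch.Size([1, 256])
```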

Citation (APA)

Feng, Z., & Zhao, Q. (2018). Robust face recognition with deeply normalized depth images. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10996 LNCS, pp. 418–427). Springer Verlag. https://doi.org/10.1007/978-3-319-97909-0_45
