In this work we describe a novel one-shot face recognition setup. Instead of using a 3D scanner to reconstruct the face, we acquire a single photograph of a person's face while a rectangular pattern is being projected onto it. From this single image, 3D low-level geometric features can be extracted without explicit 3D reconstruction. To handle expression variations and occlusions that may occur (e.g., wearing a scarf or a bonnet), we extract information only from the eyes-forehead and nose regions, which tend to be less affected by facial expressions. Once features are extracted, SVM hyperplanes are obtained for each subject in the database (one-vs-all approach); new instances can then be classified according to their distances to these hyperplanes. The advantage of our method over others published in the literature is that we do not need an explicit 3D reconstruction. Experiments with the Texas 3D Database and with newly acquired data are presented, showing the potential of the proposed framework to handle different illumination conditions, poses, and facial expressions.
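The one-vs-all classification step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature vectors are random stand-ins for the 3D low-level geometric features of the paper, and scikit-learn's `LinearSVC` (whose default multi-class mode is one-vs-rest) is assumed in place of whatever SVM tooling the authors used. Dividing the decision value by the norm of each hyperplane's weight vector turns it into a signed distance.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Hypothetical gallery: 3 subjects, 20 feature vectors each
# (stand-ins for the 3D geometric features extracted in the paper).
n_subjects, n_per, n_feat = 3, 20, 16
X = np.vstack([rng.normal(loc=s, scale=0.5, size=(n_per, n_feat))
               for s in range(n_subjects)])
y = np.repeat(np.arange(n_subjects), n_per)

# One-vs-all: LinearSVC fits one hyperplane per subject.
clf = LinearSVC(C=1.0).fit(X, y)

# decision_function gives w.x + b per subject; dividing by ||w||
# converts it to a signed distance to each hyperplane.
probe = rng.normal(loc=1, scale=0.5, size=(1, n_feat))
scores = clf.decision_function(probe)[0]
dists = scores / np.linalg.norm(clf.coef_, axis=1)

# Identify the probe as the subject with the largest signed distance.
identity = int(np.argmax(dists))
print(identity)
```

A new instance is thus assigned to the subject whose hyperplane places it deepest on the positive side, mirroring the distance-based decision rule in the abstract.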
Matías di Martino, J., Fernández, A., & Ferrari, J. (2015). One-shot 3D-gradient method applied to face recognition. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9423, pp. 176–183). Springer Verlag. https://doi.org/10.1007/978-3-319-25751-8_22