Image level fusion method for multimodal 2D + 3D face recognition

Abstract

Most existing multimodal 2D + 3D face recognition approaches do not account for the dependency between the 2D and 3D representations of a face. This dependency reduces the benefit of fusion at the late feature or metric level, whereas fusing at an early stage can exploit it. We propose an image-level fusion method that exploits the dependency between modalities for face recognition. Facial cues from the 2D and 3D images are fused into more independent and discriminating data by finding fusion axes that pass through the most uncorrelated information in the images. Experimental results on our face database of 1280 2D + 3D facial samples from 80 adults show that the image-level fusion approach outperforms pixel- and metric-level fusion. © 2008 Springer-Verlag Berlin Heidelberg.
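The "fusion axes" idea described in the abstract can be illustrated with a minimal sketch (this is not the authors' exact algorithm, and all names, dimensions, and data below are hypothetical): concatenate each subject's 2D and 3D images at the image level, then compute directions along which the combined data are mutually uncorrelated, here via PCA whitening as a stand-in for the paper's fusion-axis computation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: n samples of flattened 2D intensity and 3D depth images.
n, d = 100, 64
intensity = rng.normal(size=(n, d))                 # stand-in 2D texture vectors
depth = 0.6 * intensity + rng.normal(size=(n, d))   # correlated 3D shape vectors

# Image-level fusion: stack the two modalities into one combined image vector,
# then find axes that decorrelate the combined data (PCA whitening here,
# purely as an illustration of "fusion axes").
X = np.hstack([intensity, depth])
X = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)

k = 20                               # keep the k leading fusion axes
fused = (X @ Vt[:k].T) / S[:k]       # projections are mutually uncorrelated

# The fused features are decorrelated: their Gram matrix is the identity.
print(np.allclose(fused.T @ fused, np.eye(k)))  # True
```

In this toy setup the depth vectors are deliberately correlated with the intensity vectors, mimicking the 2D/3D dependency the paper targets; projecting onto the whitening axes removes that correlation before any classifier is applied.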

Citation (APA)

Kusuma, G. P., & Chua, C. S. (2008). Image level fusion method for multimodal 2D + 3D face recognition. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 5112 LNCS, pp. 984–992). https://doi.org/10.1007/978-3-540-69812-8_98
