Blur-Robust Face Recognition via Transformation Learning

Abstract

This paper introduces a new method for recognizing faces degraded by blur using transformation learning on image features. The basic idea is to transform both sharp and blurred images into the same feature subspace via multidimensional scaling. Unlike methods that search for blur-invariant descriptors, our method learns a transformation that preserves the manifold structure of the original sharp images while simultaneously enhancing class separability, making it applicable to a wide range of descriptors. Furthermore, we combine our method with a subspace-based point spread function (PSF) estimation method to handle unknown blur degrees: the feature transformation corresponding to the best-matched PSF is applied at test time, where the transformation for each candidate PSF is learned in the training stage. Experimental results on the FERET database show that the proposed method achieves performance comparable to state-of-the-art blur-invariant face recognition methods such as LPQ and FADEIN.
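
As a rough illustration of the transformation-learning idea, the sketch below learns a single linear projection that pulls sharp and blurred features of the same identity close together while preserving overall feature variance. This is only a hedged approximation of the paper's MDS-based objective, not the authors' implementation: the helper name `learn_blur_robust_transform`, the generalized-eigenvalue formulation, and the random stand-in features are all assumptions made for illustration.

```python
import numpy as np
from scipy.linalg import eigh


def learn_blur_robust_transform(X_sharp, X_blur, n_dims=32, reg=1e-3):
    """Learn a linear map W that brings paired sharp/blurred features of the
    same identity together while keeping overall feature variance.

    NOTE: a simplified linear stand-in for the paper's MDS-based
    transformation learning, not the authors' exact objective.
    """
    # Within-pair scatter: differences between each sharp/blurred feature pair.
    D = X_sharp - X_blur                      # shape (n_pairs, d)
    S_w = D.T @ D / len(D)

    # Total scatter of all features, a proxy for preserving the original structure.
    X_all = np.vstack([X_sharp, X_blur])
    X_all = X_all - X_all.mean(axis=0)
    S_t = X_all.T @ X_all / len(X_all)

    # Maximize total variance relative to within-pair (blur-induced) variance:
    # generalized eigenproblem  S_t w = lambda (S_w + reg * I) w.
    evals, evecs = eigh(S_t, S_w + reg * np.eye(S_w.shape[0]))
    order = np.argsort(evals)[::-1][:n_dims]  # keep top eigenvectors
    return evecs[:, order]                    # project features with X @ W


# Toy usage with random stand-in features (illustration only, not FERET data).
rng = np.random.default_rng(0)
X_sharp = rng.normal(size=(200, 128))
X_blur = X_sharp + 0.3 * rng.normal(size=(200, 128))  # simulated blur distortion
W = learn_blur_robust_transform(X_sharp, X_blur, n_dims=32)
Z_sharp, Z_blur = X_sharp @ W, X_blur @ W
```

In the unknown-blur setting described above, one such projection would be learned per candidate PSF during training; at test time, the projection corresponding to the best-matched PSF (as selected by the subspace-based PSF estimator) would be applied to the probe feature before matching.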

Citation (APA)

Li, J., Zhang, C., Hu, J., & Deng, W. (2015). Blur-Robust Face Recognition via Transformation Learning. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9010, pp. 15–29). Springer Verlag. https://doi.org/10.1007/978-3-319-16634-6_2
