Illumination normalization for robust face recognition using discrete wavelet transform


Abstract

In this paper, we introduce an illumination normalization approach in the frequency domain that uses the Discrete Wavelet Transform (DWT) as the transformation function to suppress illumination variations and simultaneously amplify facial features such as the eyeballs, eyebrows, nose, and mouth. The basic ideas are: 1) transform a face image from the spatial domain into the frequency domain to obtain its two major components, the approximation coefficients (low frequency) and the detail coefficients (high frequency), separately; 2) remove the total variation in the image by adopting the Total Variation Quotient Image (TVQI) or Logarithmic Total Variation (LTV) model; 3) amplify the facial features, which are the significant key for face classification, by adopting Gaussian derivatives and morphological operators, respectively. The efficiency of the proposed approach is evaluated on a public face database, the Yale Face Database B, and its extended version, the Extended Yale Face Database B. Our experimental results demonstrate that the proposed approach achieves a high recognition rate even when only a single image per person is used as the training set. © 2010 Springer-Verlag.
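The three-step pipeline described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it hand-rolls a one-level Haar DWT, flattens the approximation (low-frequency) band as a crude stand-in for the TVQI/LTV illumination estimate, and boosts the detail bands with a simple gain in place of the Gaussian-derivative and morphological enhancement. The function names and the `detail_gain` parameter are illustrative, not from the paper.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar DWT: approximation (LL) plus detail bands (LH, HL, HH)."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row-pair averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row-pair differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, (lh, hl, hh)

def haar_idwt2(ll, details):
    """Inverse of haar_dwt2: exact reconstruction from the four bands."""
    lh, hl, hh = details
    a = np.zeros((ll.shape[0], ll.shape[1] * 2))
    d = np.zeros_like(a)
    a[:, 0::2] = ll + lh
    a[:, 1::2] = ll - lh
    d[:, 0::2] = hl + hh
    d[:, 1::2] = hl - hh
    out = np.zeros((a.shape[0] * 2, a.shape[1]))
    out[0::2, :] = a + d
    out[1::2, :] = a - d
    return out

def normalize_illumination(img, detail_gain=2.0):
    """Suppress slowly varying illumination (LL band) and boost facial detail bands."""
    log_img = np.log1p(img.astype(float))        # log domain: illumination becomes additive
    ll, (lh, hl, hh) = haar_dwt2(log_img)
    ll_flat = np.full_like(ll, ll.mean())        # crude stand-in for the TVQI/LTV estimate
    boosted = (lh * detail_gain, hl * detail_gain, hh * detail_gain)
    return np.expm1(haar_idwt2(ll_flat, boosted))
```

Because the Haar pair average/difference is invertible, `haar_idwt2(*haar_dwt2(x))` reproduces `x` exactly for even-sized inputs; the normalization step then deliberately discards the low-frequency content before inverting.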

Citation (APA)

Petpon, A., & Srisuk, S. (2010). Illumination normalization for robust face recognition using discrete wavelet transform. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6455 LNCS, pp. 69–78). https://doi.org/10.1007/978-3-642-17277-9_8
