Measuring External Face Appearance for Face Classification

  • Masip D
  • Lapedriza A
  • Vitrià J

Abstract

Face classification can be defined as the problem of assigning a predefined label to an image, or to a subpart of an image, that contains one or more faces. This definition comprises several subdisciplines of the visual pattern recognition field: (i) face detection, where the goal is to detect the presence of a face in an image; (ii) face recognition, where we assign an identity label to the detected face; (iii) face verification, where the identity of the subject is given and its truthfulness must be assured; and (iv) gender recognition, where the label male or female is assigned to each face image. The information source of a facial image can be divided into two sets, depending on the zone of the face. The internal information is composed of the eyes, nose and mouth, while the external features are the regions of the hair, forehead, the lateral sides of the face, ears, jaw line and chin. Traditionally, face recognition algorithms have used only the internal information of face images for classification purposes, since these features can be easily extracted. In fact, most of these algorithms use the aligned thumbnails as input to some feature extraction process that yields a final feature set used to train the classifier. Classic examples of this approach are the eigenfaces technique (Turk & Pentland, 1991) and Fisher Linear Discriminant Analysis (Belhumeur, Hespanha & Kriegman, 1997).

Moreover, in the face classification field there are many security-related applications where the reliability obtained from the internal features is essential: notice that the external information is more variable and easier to imitate. For this reason, the use of external features for these security-related tasks has often been ignored, given their changing nature. However, with the advances of technology in chip integration, small embedded computers are increasingly integrated into our everyday life, favouring the appearance of new face classification applications not directly related to security, where the users do not make specific efforts to mislead the classifier. Typical examples are embedded camera devices for user-friendly human interfaces, user profiling, or reactive marketing. In these cases we consider the external features as an extra source of information for improving the accuracies obtained using only internal features. Furthermore, notice that this consideration can be especially beneficial in natural and uncontrolled environments, where artefacts such as strong local illumination changes or partial occlusions usually hinder the classification task.

The use of external features has seldom been explored in computational face classification. Although there exists a plethora of methods for finding the center pixel of each eye in order to put face images in correspondence, the external regions are more difficult to align, given that:

  • External information does not have the same size in different persons. The hair volume can differ considerably between subjects, so pixel values at a given position do not carry the same meaning across samples.
  • There is a lack of alignment between the features, since there are no reference points between samples from different subjects, or even between samples of the same subject with a different hairstyle.

In this context, the main motivation of this chapter is to provide a set of techniques that allow an efficient extraction of the external features of facial images. Commonly, the extraction of internal information is addressed using bottom-up techniques. In the case of external features, this strategy is not suitable due to the problems mentioned above. We propose a new algorithm that follows a top-down procedure to extract the external information of facial images, obtaining an aligned feature vector that can be directly used to train any standard pattern recognition classifier.
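To make the internal-feature baseline described in the abstract concrete (aligned thumbnails fed through a feature extraction step such as eigenfaces, then a standard classifier), the following is a minimal sketch under assumed settings: 32x32 aligned grayscale thumbnails, 40 principal components, a nearest-neighbour classifier, and synthetic placeholder data. It is not the chapter's external-feature algorithm, only the conventional internal-feature pipeline the chapter contrasts with.

# Minimal sketch of the classic internal-feature pipeline: aligned face
# thumbnails are flattened into vectors, projected onto a PCA subspace
# (eigenfaces, Turk & Pentland, 1991), and the resulting coefficients
# train a standard classifier. Image size, component count, and the
# random data are illustrative assumptions, not values from the chapter.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Stand-in for a set of aligned grayscale thumbnails (e.g. 32x32 crops
# around the eyes/nose/mouth region) and their identity labels.
n_samples, height, width = 200, 32, 32
thumbnails = rng.random((n_samples, height, width))
labels = rng.integers(0, 10, size=n_samples)

# Flatten each thumbnail into a single feature vector.
X = thumbnails.reshape(n_samples, -1)

# Eigenfaces: learn a low-dimensional linear subspace of the face images.
pca = PCA(n_components=40, whiten=True)
X_eigen = pca.fit_transform(X)

# Any standard classifier can be trained on the projected coefficients;
# a simple nearest-neighbour rule is in the spirit of the original
# eigenfaces work.
clf = KNeighborsClassifier(n_neighbors=1)
clf.fit(X_eigen, labels)

# A new aligned thumbnail is classified by projecting it into the same
# subspace and querying the classifier.
query = rng.random((1, height * width))
predicted_label = clf.predict(pca.transform(query))

Note that this pipeline relies entirely on the alignment of the internal face region; it is precisely this assumption that breaks down for the external features (hair, ears, jaw line), which is what motivates the top-down extraction procedure proposed in the chapter.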

Cite

APA

Masip, D., Lapedriza, A., & Vitrià, J. (2007). Measuring External Face Appearance for Face Classification. In Face Recognition. I-Tech Education and Publishing. https://doi.org/10.5772/4842
