Recognizing people from dynamic and static faces and bodies: Dissecting identity with a fusion approach

The goal of this study was to evaluate human accuracy at identifying people from static and dynamic presentations of faces and bodies. Participants matched identity in pairs of videos depicting people in motion (walking or conversing) and in "best" static images extracted from the videos. The type of information presented to observers was varied to include the face and body, the face only, and the body only. Identification performance was best when people viewed the face and body in motion. There was an advantage for dynamic over static stimuli, but only for conditions that included the body. Control experiments with multiple static images indicated that some of the motion advantages we obtained were due to seeing multiple images of the person, rather than to the motion per se. To computationally assess the contribution of different types of information for identification, we fused the identity judgments from observers in different conditions using a statistical learning algorithm trained to optimize identification accuracy. This fusion achieved perfect performance. The resulting condition weights suggest that static displays encourage reliance on the face for recognition, whereas dynamic displays seem to direct attention more equitably across the body and face. © 2010 Elsevier Ltd.
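The fusion step described above can be sketched as a weighted combination of per-condition identity judgments, with weights learned to maximize matching accuracy. The sketch below uses logistic regression fit by gradient descent as a stand-in for the paper's statistical learning algorithm; the similarity scores, condition count, and labels are illustrative assumptions, not the study's data.

```python
import numpy as np

# Hypothetical similarity judgments (0-1) from three viewing conditions
# (e.g., face+body, face-only, body-only) for 8 stimulus pairs.
# Labels: 1 = same identity, 0 = different identity. Toy data only.
X = np.array([
    [0.90, 0.80, 0.70],
    [0.80, 0.90, 0.60],
    [0.70, 0.60, 0.80],
    [0.85, 0.70, 0.75],
    [0.20, 0.30, 0.40],
    [0.30, 0.20, 0.35],
    [0.40, 0.35, 0.30],
    [0.25, 0.40, 0.20],
])
y = np.array([1, 1, 1, 1, 0, 0, 0, 0])

def fit_fusion(X, y, lr=0.5, steps=2000):
    """Learn one fusion weight per condition via logistic regression."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # fused same/different probability
        w -= lr * (X.T @ (p - y)) / len(y)        # gradient step on weights
        b -= lr * np.mean(p - y)                  # gradient step on bias
    return w, b

w, b = fit_fusion(X, y)
pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
accuracy = float(np.mean(pred == y))
```

On cleanly separable toy data like this, the fused classifier reaches perfect accuracy; the learned weights `w` play the role of the condition weights the authors inspect to see which displays carry identity information.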

O’Toole, A. J., Phillips, P. J., Weimer, S., Roark, D. A., Ayyad, J., Barwick, R., & Dunlop, J. (2011). Recognizing people from dynamic and static faces and bodies: Dissecting identity with a fusion approach. Vision Research, 51(1), 74–83.
