When we look at a face, we readily perceive that person's gender, expression, identity, age, and attractiveness. Perceivers, as well as scientists, have hitherto had little success in articulating just what information they are employing to achieve these subjectively immediate and effortless classifications. We describe here a method that estimates this information. Observers classified faces embedded in high levels of visual noise as male or female (in a gender task), happy or unhappy (in an expression task), or Tom Cruise or John Travolta (in an individuation task). They were unaware that the underlying face (which was midway between the two classes) was identical throughout a task, with only the noise rendering the image more like one category or the other. The difference between the averages of the noise patterns associated with each classification decision provided a linear estimate of the information mediating these classifications. When this noise estimate was combined with the underlying face, the resultant images appeared to be excellent prototypes of their respective classes. Other methods of estimating the information employed in complex classification have relied on judgments of exemplars of a class or on tests of experimenter-defined hypotheses about the class information. Our method allows an estimate, however subtle, of what is in the subject's (rather than the experimenter's) head.
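The averaging step the abstract describes, the difference between the mean noise field on trials given one response and the mean on trials given the other (a classification image), is simple enough to sketch. The following Python snippet is a minimal illustration with synthetic data; the trial count, image size, and names (noise, responses, base_face) are assumptions made for the sketch, not the authors' materials or code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: 2,000 trials of 64x64 Gaussian noise fields
# and one binary response per trial (e.g., 0 = "male", 1 = "female").
n_trials, h, w = 2000, 64, 64
noise = rng.standard_normal((n_trials, h, w))
responses = rng.integers(0, 2, size=n_trials)

# Classification image: the mean noise field on trials classified one way
# minus the mean noise field on trials classified the other way.
ci = noise[responses == 1].mean(axis=0) - noise[responses == 0].mean(axis=0)

# Per the abstract, adding this estimate back onto the ambiguous base face
# should yield a prototype-like image of the corresponding category
# (k is an arbitrary display-scaling constant):
# prototype = base_face + k * ci
```

With real data the responses are systematically correlated with the noise, so the two conditional means differ precisely in the pixels that drove the observer's decisions; with the random responses above, ci is just residual noise.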
Mangini, M. C., & Biederman, I. (2004). Making the ineffable explicit: Estimating the information employed for face classifications. Cognitive Science, 28(2), 209–226. https://doi.org/10.1207/s15516709cog2802_4