A stable biologically motivated learning mechanism for visual feature extraction to handle facial categorization

Abstract

The brain mechanism of extracting visual features for recognizing various objects has consistently been a controversial issue in computational models of object recognition. To extract visual features, we introduce a new, biologically motivated model for facial categorization, which is an extension of the Hubel and Wiesel simple-to-complex cell hierarchy. To address the synaptic stability versus plasticity dilemma, we apply the Adaptive Resonance Theory (ART) for extracting informative intermediate level visual features during the learning process, which also makes this model stable against the destruction of previously learned information while learning new information. Such a mechanism has been suggested to be embedded within known laminar microcircuits of the cerebral cortex. To reveal the strength of the proposed visual feature learning mechanism, we show that when we use this mechanism in the training process of a well-known biologically motivated object recognition model (the HMAX model), it performs better than the HMAX model in face/non-face classification tasks. Furthermore, we demonstrate that our proposed mechanism is capable of following similar trends in performance as humans in a psychophysical experiment using a face versus non-face rapid categorization task. © 2012 Rajaei et al.
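To make the stability-versus-plasticity idea above concrete, the sketch below shows a generic ART-style prototype learner in Python. It is a minimal illustration under assumptions, not the authors' exact learning rule: the names (patches, vigilance, beta), the cosine-similarity match, and the incremental update are all simplifications chosen for clarity. The key property it demonstrates is that a good match (resonance) only refines an existing prototype, while a mismatch recruits a new prototype and leaves previously learned ones untouched.

```python
import numpy as np

def art_learn_prototypes(patches, vigilance=0.75, beta=0.5, max_protos=200):
    """Generic ART-style learning of intermediate feature prototypes (illustrative sketch).

    patches   : array (n_samples, d); each row an intermediate-level feature patch
                (e.g. pooled complex-cell responses in an HMAX-like hierarchy).
    vigilance : match threshold in [0, 1]; higher values yield more, finer prototypes.
    beta      : learning rate for refining the winning prototype on resonance.
    """
    prototypes = []
    for x in patches:
        if not prototypes:
            prototypes.append(x.copy())
            continue
        P = np.stack(prototypes)
        # Match step: cosine similarity between the input and each stored prototype.
        sims = P @ x / (np.linalg.norm(P, axis=1) * np.linalg.norm(x) + 1e-12)
        j = int(np.argmax(sims))
        if sims[j] >= vigilance:
            # Resonance: gradually refine the matched prototype (plasticity without
            # overwriting unrelated, previously learned prototypes).
            prototypes[j] = (1 - beta) * prototypes[j] + beta * x
        elif len(prototypes) < max_protos:
            # Mismatch reset: commit the input as a new prototype (stability of old codes).
            prototypes.append(x.copy())
    return np.stack(prototypes)
```

In such a scheme the vigilance parameter controls the granularity of the learned feature dictionary: raising it produces many specific prototypes, lowering it produces fewer, broader ones. The prototypes returned here would play the role of the informative intermediate-level features described in the abstract.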

Citation (APA)

Rajaei, K., Khaligh-Razavi, S. M., Ghodrati, M., Ebrahimpour, R., & Abadi, M. E. S. A. (2012). A stable biologically motivated learning mechanism for visual feature extraction to handle facial categorization. PLoS ONE, 7(6). https://doi.org/10.1371/journal.pone.0038478
