This paper investigates the head pose estimation problem, which serves as front-end preprocessing for improving multi-view human face recognition. We propose a computational model for perceiving head pose based on a neurophysiologically plausible invariance representation. To obtain the invariance representation bases, or facial multi-view bases, a learning algorithm is derived for training the linear representation model. The facial multi-view bases are then used to construct the computational model for head pose perception. A measure for head pose perception is introduced: the winner neuron in the final layer gives the resulting head pose if its connected pre-layer has the most firing neurons. Computer simulation results and comparisons show that the proposed model achieves satisfactory accuracy for head pose estimation on facial multi-view images from the CAS-PEAL face database.
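The abstract's decision rule (the final-layer winner neuron reports the pose whose connected pre-layer has the most firing neurons) can be illustrated with a minimal sketch. The names `pose_bases` and `firing_threshold`, the least-squares projection onto each basis set, and the magnitude-threshold firing criterion are assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np

def estimate_head_pose(x, pose_bases, firing_threshold=0.5):
    """Hypothetical winner-take-all pose decision over learned multi-view bases.

    x          : 1-D face image vector (flattened grayscale pixels).
    pose_bases : dict {pose_label: basis matrix of shape (dim, n_bases)} holding
                 the learned facial multi-view bases for each discrete pose.
    """
    firing_counts = {}
    for pose, B in pose_bases.items():
        # Linear representation: coefficients that best reconstruct x from B.
        coeffs, *_ = np.linalg.lstsq(B, x, rcond=None)
        # Assumed firing criterion: a pre-layer neuron fires when its
        # coefficient magnitude exceeds the threshold.
        firing_counts[pose] = int(np.sum(np.abs(coeffs) > firing_threshold))
    # Final-layer winner: the pose whose connected pre-layer fired the most.
    return max(firing_counts, key=firing_counts.get)
```

Under these assumptions, each discrete pose class owns its own pool of basis neurons, and classification reduces to counting which pool responds most strongly to the input face.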
Yang, W., & Zhang, L. (2008). Head Pose Perception Based on Invariance Representation. In Autonomous Systems – Self-Organization, Management, and Control (pp. 1–10). Springer Netherlands. https://doi.org/10.1007/978-1-4020-8889-6_1