Facial landmark detection in images obtained under varying acquisition conditions is a challenging problem. In this paper, we present a personalized landmark localization method that leverages information available from 2D/3D gallery data. To realize a robust correspondence between gallery and probe key points, we present several innovative solutions, including: (i) a hierarchical DAISY descriptor that encodes larger contextual information, (ii) a Data-Driven Sample Consensus (DDSAC) algorithm that leverages the image information to reduce the number of iterations required for robust transform estimation, and (iii) a 2D/3D gallery pre-processing step that builds personalized landmark metadata (i.e., local descriptors and a 3D landmark model). We validate our approach on the Multi-PIE and UHDB14 databases, comparing our results with those obtained using two existing methods. © 2011 Springer-Verlag Berlin Heidelberg.
Citation
Zeng, Z., Fang, T., Shah, S. K., & Kakadiaris, I. A. (2011). Personalized 3D-aided 2D facial landmark localization. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6493 LNCS, pp. 633–646). https://doi.org/10.1007/978-3-642-19309-5_49