Personalized 3D-aided 2D facial landmark localization

Abstract

Facial landmark detection in images obtained under varying acquisition conditions is a challenging problem. In this paper, we present a personalized landmark localization method that leverages information available from 2D/3D gallery data. To realize a robust correspondence between gallery and probe key points, we present several innovative solutions, including: (i) a hierarchical DAISY descriptor that encodes larger contextual information, (ii) a Data-Driven Sample Consensus (DDSAC) algorithm that leverages image information to reduce the number of iterations required for robust transform estimation, and (iii) a 2D/3D gallery pre-processing step that builds personalized landmark metadata (i.e., local descriptors and a 3D landmark model). We validate our approach on the Multi-PIE and UHDB14 databases, comparing our results with those obtained using two existing methods. © 2011 Springer-Verlag Berlin Heidelberg.
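The abstract does not detail the DDSAC algorithm, but its stated idea — using image information (e.g. descriptor match confidence) to bias the sampling of a RANSAC-style consensus loop so fewer iterations are needed — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the 2D similarity-transform model, and the confidence-weighted sampling scheme are assumptions for the sketch.

```python
import numpy as np

def estimate_similarity(src2, dst2):
    # A 2D similarity transform (scale, rotation, translation) has 4 DOF,
    # so two point correspondences determine it exactly. Using complex
    # coordinates, the map is z -> a*z + b.
    s = src2[:, 0] + 1j * src2[:, 1]
    d = dst2[:, 0] + 1j * dst2[:, 1]
    if abs(s[1] - s[0]) < 1e-9:          # degenerate (coincident) sample
        return None
    a = (d[1] - d[0]) / (s[1] - s[0])
    b = d[0] - a * s[0]
    return a, b

def apply_similarity(T, pts):
    a, b = T
    z = pts[:, 0] + 1j * pts[:, 1]
    w = a * z + b
    return np.column_stack([w.real, w.imag])

def ddsac_similarity(src, dst, weights, n_iters=200, inlier_tol=3.0, rng=None):
    """RANSAC-style loop with data-driven (confidence-weighted) sampling.

    Unlike plain RANSAC, minimal sets are drawn with probability
    proportional to per-correspondence confidence `weights`, so an
    all-inlier sample tends to be hit in fewer iterations.
    """
    rng = np.random.default_rng(rng)
    p = np.asarray(weights, dtype=float)
    p = p / p.sum()
    best_T, best_inliers = None, np.zeros(len(src), dtype=bool)
    for _ in range(n_iters):
        idx = rng.choice(len(src), size=2, replace=False, p=p)
        T = estimate_similarity(src[idx], dst[idx])
        if T is None:
            continue
        residuals = np.linalg.norm(apply_similarity(T, src) - dst, axis=1)
        inliers = residuals < inlier_tol
        if inliers.sum() > best_inliers.sum():
            best_T, best_inliers = T, inliers
    return best_T, best_inliers
```

In this sketch the confidence weights would come from the descriptor matching stage (e.g. hierarchical-DAISY match scores), so poorly matched gallery/probe key-point pairs are rarely chosen for the minimal sample.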

Citation (APA)

Zeng, Z., Fang, T., Shah, S. K., & Kakadiaris, I. A. (2011). Personalized 3D-aided 2D facial landmark localization. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6493 LNCS, pp. 633–646). https://doi.org/10.1007/978-3-642-19309-5_49
