Unconstrained face alignment often involves extreme deformations and severe occlusions, which can give rise to biased shape predictions. Most existing methods exploit shape structure simply by concatenating all landmarks, which loses facial detail in regions of extreme deformation. In this paper, we propose a relational-structural network (RSN) approach to learn both local and global feature representations for robust face alignment. To this end, we build a structural branch network that disentangles the local geometric relationships among neighboring facial sub-regions. Moreover, we develop a reinforcement learning approach to reason about the iterative refinement process. Our RSN generates three candidate shapes; a Q-net then evaluates the candidates with a reward function and selects the best shape to re-initialize the network input, alleviating the local-optimum problem of cascaded regression methods. Extensive experimental results show that our approach consistently outperforms state-of-the-art methods on widely evaluated challenging datasets.
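The candidate-selection step described above can be sketched in a few lines. This is a hedged illustration only: the function and variable names are ours, and the stand-in reward (negative mean landmark error) replaces the paper's learned Q-net, which we do not have access to.

```python
import numpy as np

def q_net_score(shape, target):
    # Stand-in reward: negative mean per-landmark Euclidean error
    # (higher is better). The paper's actual Q-net is a learned
    # network, not this heuristic.
    return -np.mean(np.linalg.norm(shape - target, axis=1))

def select_best_shape(candidates, target):
    """Pick the candidate shape with the highest reward; the winner
    re-initializes the next regression iteration."""
    scores = [q_net_score(c, target) for c in candidates]
    return candidates[int(np.argmax(scores))]

# Toy example: three candidate shapes of 68 (x, y) landmarks, with
# increasing noise; the least-perturbed candidate should win.
rng = np.random.default_rng(0)
target = rng.normal(size=(68, 2))
candidates = [target + rng.normal(scale=s, size=(68, 2))
              for s in (0.5, 0.1, 0.9)]
best = select_best_shape(candidates, target)
```

In the paper's pipeline this selection feeds back into the cascade, so a poor initialization at one stage can be corrected at the next rather than compounding.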
CITATION STYLE
Zhu, C., Wang, X., Wu, S., & Yu, Z. (2019). Learning relational-structural networks for robust face alignment. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11729 LNCS, pp. 306–316). Springer Verlag. https://doi.org/10.1007/978-3-030-30508-6_25