2D alignment of face images works well provided images are frontal or nearly so and pitch and yaw remain modest. In spontaneous facial behavior, these constraints are often violated by moderate to large head rotation. 3D alignment from 2D video has been proposed as a solution. A number of approaches have been explored, but comparisons among them have been hampered by the lack of common test data. To enable comparisons among alternative methods, the 3D Face Alignment in the Wild (3DFAW) Challenge, presented for the first time, created an annotated corpus of over 23,000 multi-view images from four sources together with 3D annotation, made training and validation sets available to investigators, and invited them to test their algorithms on an independent test set. Eight teams accepted the challenge and submitted test results. We report results for the four that provided the necessary technical descriptions of their methods. The leading approach achieved a prediction consistency error of 3.48%; the corresponding result for the lowest-ranked approach was 5.9%. The results suggest that 3D alignment from 2D video is feasible over a wide range of face orientations. Differences among the methods are considered and suggest directions for further research.
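The abstract reports performance as a prediction consistency error expressed as a percentage; the official metric definitions are given in the full paper. As an illustration only, the sketch below computes a cross-view, consistency-style error: two sets of 3D landmark predictions for the same face (e.g. from different views) are brought into a common frame with a least-squares similarity transform and the mean per-landmark distance is reported as a percentage of a normalization scale. The function names and the choice of normalization (e.g. inter-ocular distance) are assumptions for this sketch, not the challenge's official evaluation protocol.

```python
import numpy as np


def similarity_align(src, dst):
    """Least-squares similarity transform (scale, rotation, translation)
    mapping src landmarks onto dst; both are (N, 3) arrays of 3D points.
    Umeyama-style estimate via SVD of the cross-covariance matrix."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(dst_c.T @ src_c)
    D = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:  # guard against reflections
        D[2, 2] = -1.0
    R = U @ D @ Vt
    scale = np.trace(np.diag(S) @ D) / (src_c ** 2).sum()
    t = mu_d - scale * (R @ mu_s)
    return scale * (src @ R.T) + t


def consistency_error(pred_a, pred_b, norm_scale):
    """Mean per-landmark Euclidean distance between two aligned predictions
    of the same face, as a percentage of norm_scale (here assumed to be a
    face-size measure such as inter-ocular distance)."""
    aligned = similarity_align(pred_a, pred_b)
    per_landmark = np.linalg.norm(aligned - np.asarray(pred_b, float), axis=1)
    return 100.0 * per_landmark.mean() / norm_scale
```

In this toy formulation, a value of 3.48 would mean the average landmark disagreement between views is about 3.48% of the chosen face-size scale; the paper's actual metrics (and their normalization) should be taken from the published definition.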
Jeni, L. A., Tulyakov, S., Yin, L., Sebe, N., & Cohn, J. F. (2016). The first 3D face alignment in the wild (3DFAW) challenge. In Lecture Notes in Computer Science (Vol. 9914 LNCS, pp. 511–520). Springer. https://doi.org/10.1007/978-3-319-48881-3_35