Residual Attention Regression for 3D Hand Pose Estimation


Abstract

3D hand pose estimation is an important and challenging task for virtual reality and human-computer interaction. In this paper, we propose a simple and effective residual attention regression model for accurate 3D hand pose estimation from a single depth image. The model is trained in an end-to-end fashion. Specifically, we stack attention modules of different types to capture complementary attention-aware features, and then enforce the physical constraints of the hand by projecting the pose parameters into a lower-dimensional space. In this way, the 3D coordinates of the hand joints are regressed directly. Experimental results demonstrate that the proposed residual attention network achieves superior or comparable performance on three challenging benchmark datasets, with an average 3D error of 9.7 mm on MSRA, 7.8 mm on ICVL, and 17.6 mm on NYU.
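The low-dimensional projection described above can be sketched as a linear bottleneck: backbone features are regressed to a small set of pose parameters, which a fixed linear basis (e.g., learned from training poses) maps back to full 3D joint coordinates, so every output pose lies on the hand prior's subspace. The sketch below is a minimal illustration, not the paper's implementation; the joint count (21), feature size (128), and bottleneck dimension (30) are assumptions, and random matrices stand in for the learned weights and basis.

```python
import numpy as np

rng = np.random.default_rng(0)

J = 21           # assumed number of hand joints (dataset-dependent)
FEAT = 128       # assumed feature size from the attention backbone
BOTTLENECK = 30  # assumed dimensionality of the pose-parameter space

# Fixed linear basis (e.g., principal components of training poses)
# mapping low-dimensional pose parameters back to 3*J coordinates.
basis = rng.standard_normal((BOTTLENECK, 3 * J))
mean_pose = rng.standard_normal(3 * J)

# Learnable regression head: features -> low-dimensional pose parameters.
W = rng.standard_normal((FEAT, BOTTLENECK)) * 0.01

def regress_joints(features: np.ndarray) -> np.ndarray:
    """Map backbone features to 3D joint coordinates through the
    low-dimensional pose space, so outputs respect the linear hand prior."""
    pose_params = features @ W                 # (batch, BOTTLENECK)
    joints = pose_params @ basis + mean_pose   # (batch, 3*J)
    return joints.reshape(-1, J, 3)

feats = rng.standard_normal((4, FEAT))
out = regress_joints(feats)
print(out.shape)  # (4, 21, 3)
```

Because the basis is fixed and low-rank, implausible joint configurations outside its span cannot be produced, which is the constraint the abstract refers to.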

Citation (APA)

Li, J., Zhang, L., & Ju, Z. (2019). Residual Attention Regression for 3D Hand Pose Estimation. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11743 LNAI, pp. 605–614). Springer Verlag. https://doi.org/10.1007/978-3-030-27538-9_52
