FastV2C-HandNet: Fast Voxel to Coordinate Hand Pose Estimation with 3D Convolutional Neural Networks

Abstract

Hand pose estimation from monocular depth images has been an important and challenging problem in the Computer Vision community. In this paper, we present a novel approach to estimating 3D hand joint locations from 2D depth images. Unlike most previous methods, our model uses a voxel-to-coordinate (V2C) approach that captures the 3D spatial information in a depth image with 3D CNNs, giving it a richer understanding of the input. We voxelize the input depth map to capture its 3D structure and apply 3D data augmentations to make the network robust to real-world scenarios. The network is trained end-to-end, which significantly reduces time and space complexity compared to other methods. Comprehensive experiments show that our model outperforms state-of-the-art methods in the time required to train and to predict 3D hand joint locations, making it more suitable for real-world hand pose estimation.
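The voxelization step described above can be sketched as follows. This is a minimal, hypothetical illustration (not the paper's implementation): each valid depth pixel is back-projected into 3D with a pinhole camera model, the resulting point cloud is normalized to a unit cube, and occupied cells of a fixed-size grid are marked. The intrinsics `fx`/`fy` and the grid size of 32 are assumed values, not taken from the paper.

```python
import numpy as np

def voxelize_depth_map(depth, grid_size=32, fx=588.0, fy=587.0):
    """Convert a 2D depth map (in mm) into a binary occupancy voxel grid.

    Hypothetical sketch of a voxel-to-coordinate pipeline's input stage.
    fx/fy and grid_size are illustrative assumptions.
    """
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0  # ignore missing/zero depth pixels

    # Back-project valid pixels to 3D camera-space points.
    z = depth[valid]
    x = (us[valid] - w / 2.0) * z / fx
    y = (vs[valid] - h / 2.0) * z / fy
    pts = np.stack([x, y, z], axis=1)

    # Normalize the point cloud to the unit cube [0, 1)^3.
    mins = pts.min(axis=0)
    spans = np.maximum(pts.max(axis=0) - mins, 1e-6)
    norm = (pts - mins) / spans

    # Mark each voxel that contains at least one point as occupied.
    idx = np.minimum((norm * grid_size).astype(int), grid_size - 1)
    grid = np.zeros((grid_size,) * 3, dtype=np.float32)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    return grid
```

A grid built this way would be fed to the 3D CNN, which regresses the joint coordinates directly rather than producing intermediate 2D heatmaps.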

Citation (APA)

Lekhwani, R., & Singh, B. (2021). FastV2C-HandNet: Fast Voxel to Coordinate Hand Pose Estimation with 3D Convolutional Neural Networks. In Advances in Intelligent Systems and Computing (Vol. 1165, pp. 413–426). Springer. https://doi.org/10.1007/978-981-15-5113-0_31
