On calibration and alignment of point clouds in a network of RGB-D sensors for tracking


Abstract

This paper investigates the integration of multiple time-of-flight (ToF) depth sensors for general 3D tracking, and hand tracking in particular. A network of multiple sensors offers increased viewing coverage and can capture a more complete 3D point cloud representation of the object. Given an ideal point cloud representation, tracking can be accomplished without having to first reconstruct a mesh representation of the object. In a network of depth sensors, calibration between the sensors and the subsequent alignment of their point clouds pose key challenges. While there has been research on merging and aligning scenes containing larger objects such as the human body, little research has focused on a smaller and more complicated object such as the human hand. This paper presents a study on ways to merge and align the point clouds from a network of sensors for object and feature tracking from the combined point clouds.
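The pairwise alignment step the abstract refers to is commonly posed as finding the rigid transform (rotation R, translation t) that maps one sensor's points onto another's. The paper itself does not specify its estimator; the sketch below uses the standard Kabsch/SVD solution from known point correspondences (e.g., calibration-target markers seen by both sensors) purely as an illustrative assumption.

```python
import numpy as np

def rigid_align(src, dst):
    """Estimate (R, t) minimizing ||R @ p + t - q|| over corresponding
    3D points p in src, q in dst (Kabsch algorithm via SVD)."""
    src_c = src.mean(axis=0)          # centroid of source cloud
    dst_c = dst.mean(axis=0)          # centroid of destination cloud
    H = (src - src_c).T @ (dst - dst_c)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) in the recovered rotation.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t
```

Once `(R, t)` is estimated for each sensor pair against a reference sensor, the clouds can be merged into a common frame by applying `R @ p + t` to every point before tracking.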

Citation (APA)

Xu, G., & Payandeh, S. (2015). On calibration and alignment of point clouds in a network of RGB-D sensors for tracking. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9475, pp. 563–573). Springer Verlag. https://doi.org/10.1007/978-3-319-27863-6_52
