Time-of-Flight Depth Datasets for Indoor Semantic SLAM


Abstract

This paper introduces a medium-scale point cloud dataset for semantic SLAM (Simultaneous Localization and Mapping), acquired with a SwissRanger time-of-flight camera. An indoor environment with relatively stable lighting conditions is considered for mapping and localization. The camera is mounted on a mobile tripod and captures images at prearranged locations in the environment. These prearranged locations serve as ground truth for estimating the deviation of the poses computed by SLAM, and also as initial pose estimates for the ICP (Iterative Closest Point) algorithm. Notably, no inertial measurement units or visual odometry techniques are used, since data from time-of-flight cameras are noisy and sensitive to external conditions (such as lighting, transparent surfaces, parallel overlapping surfaces, etc.). Furthermore, a large collection of household object scans is provided in order to label the scenes with semantic information. Beyond mapping and plane detection using a publicly available toolkit, the main contributions of this paper are the complete SLAM dataset with pose files and the point clouds of household objects. In addition, a novel metric for evaluating SLAM algorithms, a context-based similarity score, is presented.
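To illustrate the registration step described above, here is a minimal sketch (not the authors' code) of seeding ICP with a pose taken from one of the prearranged tripod locations. It assumes the Open3D library; the file names, the 5 cm correspondence threshold, and the identity placeholder pose are illustrative assumptions only.

```python
# Hedged sketch: ICP registration seeded with a prearranged tripod pose.
# Assumes Open3D; file names and parameters below are hypothetical.
import numpy as np
import open3d as o3d

def register_scan(source_path, target_path, init_pose):
    """Align a new scan to a reference scan using point-to-point ICP.

    init_pose: 4x4 homogeneous transform from the prearranged tripod
    location, used here as the initial estimate (as the paper does).
    """
    source = o3d.io.read_point_cloud(source_path)
    target = o3d.io.read_point_cloud(target_path)
    result = o3d.pipelines.registration.registration_icp(
        source, target,
        max_correspondence_distance=0.05,  # 5 cm; tune for ToF noise
        init=init_pose,
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
    )
    return result.transformation

# Usage: in practice the initial pose would be read from the dataset's
# pose files; an identity matrix stands in here as a placeholder.
init = np.eye(4)
T = register_scan("scan_001.pcd", "scan_000.pcd", init)
print(T)
```

A reasonable starting pose matters here because ToF point clouds are noisy; seeding ICP with the prearranged location keeps the optimization from converging to a wrong local minimum.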

Citation (APA)

Ghorpade, V. K., Borrmann, D., Checchin, P., Malaterre, L., & Trassoudaine, L. (2020). Time-of-Flight Depth Datasets for Indoor Semantic SLAM. In Springer Proceedings in Advanced Robotics (Vol. 10, pp. 679–693). Springer Science and Business Media B.V. https://doi.org/10.1007/978-3-030-28619-4_48
