Label propagation for large scale 3D indoor scenes

Abstract

RGB-D mapping, or semantic mapping, is becoming increasingly important for computer vision and robotics. However, manually segmenting and labeling an RGB-D image sequence or a global point cloud requires a great deal of human labor, which is why a satisfactory indoor dataset for testing semantic mapping systems is still lacking. Automatic label propagation can help, but almost all existing methods were designed for 2D video and ignore the 3D characteristics of RGB-D images. In this paper, we first build a global map from the RGB-D image sequence and then propagate labels on that map. This enforces label consistency over the global scene and requires fewer frames to be manually labeled. We also model the overlap between images and use a greedy algorithm to automatically select the frames to be labeled manually, as sketched below. Experiments demonstrate that our method greatly reduces manual effort: for a scene containing 1831 images, only 22 labeled images achieve 93% label propagation accuracy.
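The greedy frame-selection step can be read as a maximum-coverage problem over the global map: each frame covers the set of map points it observes, and frames are picked one at a time to maximize the number of newly covered points. The sketch below is a minimal illustration of that idea, not the authors' implementation; the `frame_points` input and the stopping ratio are hypothetical, and the paper's actual overlap model may differ.

```python
# Hypothetical sketch of greedy frame selection by map coverage.
# `frame_points` maps each frame id to the set of global-map point ids
# that frame observes; this interface is an assumption for illustration.

def select_frames(frame_points, target_coverage=0.93):
    all_points = set().union(*frame_points.values())
    covered, selected = set(), []
    while len(covered) / len(all_points) < target_coverage:
        # Pick the frame that adds the most not-yet-covered map points.
        best = max(frame_points, key=lambda f: len(frame_points[f] - covered))
        gain = frame_points[best] - covered
        if not gain:  # no remaining frame adds new coverage; stop early
            break
        covered |= gain
        selected.append(best)
    return selected

# Example: three overlapping frames covering ten map points.
frames = {
    "f0": {0, 1, 2, 3, 4},
    "f1": {3, 4, 5, 6, 7},
    "f2": {7, 8, 9},
}
print(select_frames(frames, target_coverage=1.0))  # ['f0', 'f1', 'f2']
```

Because each selection step favors the frame with the largest marginal coverage, heavily overlapping frames are rarely chosen together, which is consistent with the paper's result that a small fraction of frames (22 out of 1831) can suffice for accurate propagation.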

Citation (APA)

Tang, K., Zhao, Z., & Chen, X. (2015). Label propagation for large scale 3D indoor scenes. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9474, pp. 253–264). Springer Verlag. https://doi.org/10.1007/978-3-319-27857-5_23
