Towards semantic scene analysis with time-of-flight cameras

Abstract

For planning grasps and other object manipulation actions in complex environments, 3D semantic information becomes crucial. This paper focuses on the application of recent 3D Time-of-Flight (ToF) cameras in the context of semantic scene analysis. To acquire semantic information from ToF camera data, we a) pre-process the data, including outlier removal, filtering, and phase unwrapping to correct erroneous distance measurements, and b) apply a randomized algorithm for detecting shapes such as planes, spheres, and cylinders. We present experimental results showing that the robustness of the underlying RANSAC paradigm against noise and outliers allows objects to be segmented and classified in 3D ToF camera data captured in natural mobile manipulation setups. © 2011 Springer-Verlag.
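The shape-detection step rests on the RANSAC paradigm: repeatedly sample a minimal set of points, hypothesize a geometric primitive from it, and keep the hypothesis supported by the most inliers. The sketch below illustrates this idea for the simplest primitive, a plane fitted to a noisy 3D point cloud. It is a minimal illustration only, not the authors' implementation (which builds on the efficient randomized shape detection of Schnabel et al. and also handles spheres and cylinders); the function and parameter names (ransac_plane, n_iters, inlier_thresh) are ours, chosen for illustration.

import numpy as np

def ransac_plane(points, n_iters=500, inlier_thresh=0.02, rng=None):
    """Fit a single plane to a 3D point cloud with basic RANSAC.

    points: (N, 3) array of 3D points (e.g. from a ToF camera).
    inlier_thresh: maximum point-to-plane distance (metres) for inliers.
    Returns (normal, d, inlier_mask) for the plane n . x + d = 0,
    or (None, None, None) if no valid model was found.
    """
    rng = np.random.default_rng() if rng is None else rng
    best_inliers = None
    best_model = (None, None)

    for _ in range(n_iters):
        # Sample three distinct points and form a candidate plane.
        idx = rng.choice(len(points), size=3, replace=False)
        p0, p1, p2 = points[idx]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:            # degenerate (nearly collinear) sample
            continue
        normal /= norm
        d = -normal.dot(p0)

        # Score the candidate by counting points close to the plane.
        dist = np.abs(points @ normal + d)
        inliers = dist < inlier_thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
            best_model = (normal, d)

    if best_inliers is None:
        return None, None, None
    return best_model[0], best_model[1], best_inliers

if __name__ == "__main__":
    # Synthetic test: a noisy horizontal plane plus uniform outliers.
    rng = np.random.default_rng(0)
    plane_pts = rng.uniform(-1, 1, size=(800, 3))
    plane_pts[:, 2] = 0.5 + rng.normal(0, 0.005, size=800)   # z approx. 0.5
    outliers = rng.uniform(-1, 1, size=(200, 3))
    cloud = np.vstack([plane_pts, outliers])

    normal, d, inliers = ransac_plane(cloud, rng=rng)
    print("normal:", normal, "d:", d, "inliers:", int(inliers.sum()))

Because each hypothesis needs only three points and is scored over the whole cloud, a large fraction of outliers can be tolerated, which is the property the paper exploits for noisy ToF data; detected inliers would then be removed and the search repeated for further shapes.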

Citation (APA)

Holz, D., Schnabel, R., Droeschel, D., Stückler, J., & Behnke, S. (2011). Towards semantic scene analysis with time-of-flight cameras. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6556 LNAI, pp. 121–132). https://doi.org/10.1007/978-3-642-20217-9_11
