Object-level Segmentation of RGBD Data

  • Huang H
  • Jiang H
  • Brenner C
  • et al.
Citations: N/A
Readers: 14 (Mendeley users who have this article in their library)

Abstract

We propose a novel method to segment Microsoft™ Kinect data of indoor scenes, with an emphasis on freeform objects. We use the full 3D information for scene parsing and the segmentation of potential objects, instead of treating the depth values as an additional channel of the 2D image. The raw RGBD image is first converted to a colored 3D point cloud. We then group the points into patches derived from a 2D superpixel segmentation. Under the assumption that every patch in the point cloud represents (a part of) the surface of an underlying solid body, a hypothetical quasi-3D model, the "synthetic volume primitive" (SVP), is constructed by extending the patch with a synthetic extrusion in 3D. The SVPs vote for a common object via intersection. In this way, a freeform object can be "assembled" from an unknown number of SVPs seen from arbitrary angles. Besides the intersection, two further criteria, coplanarity and color coherence, are integrated into the global optimization to improve the segmentation. Experiments demonstrate the potential of the proposed method.
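The pipeline described above (back-projecting the RGBD image to 3D, extruding patches into SVPs, and letting intersecting SVPs vote for a common object) can be illustrated with a minimal sketch. This is not the authors' implementation: the pinhole intrinsics, the axis-aligned-box stand-in for the SVP extrusion, and the union-find grouping are all simplifying assumptions for illustration.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters) to a 3D point cloud
    via a pinhole camera model (intrinsics are assumed values)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

def extrude_patch(points, depth_extent=0.1):
    """Approximate a patch's synthetic volume primitive (SVP) by its
    axis-aligned bounding box grown along +z -- a crude stand-in for
    the paper's extrusion along the patch surface."""
    lo = points.min(axis=0)
    hi = points.max(axis=0).copy()
    hi[2] += depth_extent
    return lo, hi

def boxes_intersect(a, b):
    """True if two axis-aligned boxes (lo, hi) overlap."""
    return bool(np.all(a[0] <= b[1]) and np.all(b[0] <= a[1]))

def vote_objects(patches):
    """Group patches whose SVP boxes intersect (union-find),
    mimicking the 'SVPs vote for a common object' step."""
    boxes = [extrude_patch(np.asarray(p, dtype=float)) for p in patches]
    parent = list(range(len(patches)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(len(boxes)):
        for j in range(i + 1, len(boxes)):
            if boxes_intersect(boxes[i], boxes[j]):
                parent[find(i)] = find(j)
    return [find(i) for i in range(len(patches))]
```

For example, two patches whose extruded volumes overlap receive the same object label, while a distant patch stays separate; the paper additionally weighs coplanarity and color coherence in a global optimization, which this sketch omits.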

Cite

Citation style: APA

Huang, H., Jiang, H., Brenner, C., & Mayer, H. (2014). Object-level Segmentation of RGBD Data. ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, II-3, 73–78. https://doi.org/10.5194/isprsannals-ii-3-73-2014
