Integrating data- and model-driven analysis of RGB-D images


Abstract

RGB-D sensors are increasingly used in vision-based robot perception. Reliable 3D object recognition requires the integration of image-driven and model-based analysis; only then can the low-level, image-like representation be transformed into a symbolic description whose semantics match the ontology-level representation of an autonomous robot system. An RGB-D image analysis approach is proposed that consists of a data-driven hypothesis-generation step and a generic model-based object recognition step. First, point clusters assumed to represent 3D object hypotheses are created. In parallel, 3D surface patches are estimated and 2D image textures and shapes are classified, yielding multi-modal image segmentation data. In the model-driven step, built-in knowledge about basic solids, shapes and textures is used to verify the point clusters as meaningful volume-like aggregates, and to create (or recognize) generic 3D object models.
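The two-step pipeline described above (data-driven point clustering followed by model-driven verification) could be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the Euclidean region-growing clustering, the thresholds, and the "volume-like" verification rule (requiring nonzero extent along all three axes, so flat patches are rejected) are all assumptions made for the example.

```python
import numpy as np

def euclidean_clusters(points, radius=0.1, min_size=5):
    """Data-driven step (toy): greedy region growing that groups points
    whose nearest neighbours lie within `radius` of each other."""
    n = len(points)
    labels = -np.ones(n, dtype=int)  # -1 means "not yet assigned"
    current = 0
    for seed in range(n):
        if labels[seed] != -1:
            continue
        labels[seed] = current
        frontier = [seed]
        while frontier:
            i = frontier.pop()
            dist = np.linalg.norm(points - points[i], axis=1)
            for j in np.where((dist < radius) & (labels == -1))[0]:
                labels[j] = current
                frontier.append(j)
        current += 1
    # Keep only clusters large enough to serve as 3D object hypotheses.
    return [np.where(labels == c)[0] for c in range(current)
            if (labels == c).sum() >= min_size]

def is_volume_like(points, cluster, min_extent=0.05):
    """Model-driven step (toy): accept a hypothesis only if its bounding
    box has non-trivial extent along every axis, i.e. it resembles a
    volume-like aggregate rather than a planar surface patch."""
    extent = points[cluster].max(axis=0) - points[cluster].min(axis=0)
    return bool(np.all(extent >= min_extent))
```

In a usage scenario, each cluster returned by `euclidean_clusters` would be passed through `is_volume_like` (or richer fits against basic solids) so that only volume-like hypotheses survive to the generic 3D model-building stage.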

Citation (APA)

Kasprzak, W., Pietruch, R., Bojar, K., Wilkowski, A., & Kornuta, T. (2015). Integrating data- and model-driven analysis of RGB-D images. Advances in Intelligent Systems and Computing, 323. https://doi.org/10.1007/978-3-319-11310-4_52
