Object recognition and localization from 3D point clouds by maximum-likelihood estimation

Abstract

We present an algorithm based on maximum-likelihood analysis for the automated recognition of objects, and estimation of their pose, from 3D point clouds. Surfaces segmented from depth images are used as the features, unlike ‘interest point’-based algorithms, which normally discard such data. The algorithm's memory requirements are negligible compared to those of the 6D Hough transform, and it is computationally efficient compared to iterative closest point algorithms. The same method is applicable both to the initial recognition/pose-estimation problem and to subsequent pose refinement, through appropriate choice of the dispersion of the probability density functions. This single unified approach therefore avoids the usual requirement for different algorithms for the two tasks. In addition to the theoretical description, a simple 2 degrees of freedom (d.f.) example is given, followed by a full 6 d.f. analysis of 3D point cloud data from a cluttered scene acquired by a projected fringe-based scanner, which demonstrated an RMS alignment error as low as 0.3 mm.
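The unifying idea, in which a single likelihood objective serves both coarse recognition and fine pose refinement by varying the dispersion of the probability density functions, can be illustrated with a short sketch. The Python example below is a hypothetical, simplified illustration for a planar 2 d.f. (translation-only) case, using a Gaussian likelihood over nearest-point distances; it is not the authors' algorithm, which uses segmented surfaces as features rather than raw points, and all function and variable names (`negative_log_likelihood`, `estimate_translation`, `scene_points`, `model_points`, `coarse_sigma`, `fine_sigma`) are assumptions introduced here for illustration.

```python
# Minimal sketch (not the paper's implementation): a Gaussian likelihood over
# point-to-model distances whose dispersion (sigma) is broad for the initial
# recognition/pose search and narrow for subsequent refinement.
import numpy as np
from scipy.optimize import minimize


def negative_log_likelihood(translation, scene_points, model_points, sigma):
    """Negative log-likelihood of the scene under a candidate 2D translation.

    Each scene point is associated with its nearest model point and scored
    under an isotropic Gaussian of standard deviation `sigma`.
    """
    shifted = scene_points - translation                      # apply candidate pose
    # Nearest-neighbour squared distances (brute force, for clarity only).
    d2 = ((shifted[:, None, :] - model_points[None, :, :]) ** 2).sum(-1).min(axis=1)
    return 0.5 * np.sum(d2) / sigma**2 + len(scene_points) * np.log(sigma)


def estimate_translation(scene_points, model_points,
                         coarse_sigma=5.0, fine_sigma=0.5,
                         grid=np.linspace(-10, 10, 41)):
    # Coarse stage: broad dispersion, exhaustive grid search (recognition).
    candidates = [(tx, ty) for tx in grid for ty in grid]
    best = min(candidates, key=lambda t: negative_log_likelihood(
        np.array(t), scene_points, model_points, coarse_sigma))

    # Fine stage: same objective, smaller dispersion, local refinement.
    result = minimize(negative_log_likelihood, x0=np.array(best),
                      args=(scene_points, model_points, fine_sigma),
                      method="Nelder-Mead")
    return result.x


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    model = rng.uniform(-5, 5, size=(50, 2))     # synthetic model samples
    true_shift = np.array([3.2, -1.7])
    scene = model + true_shift + rng.normal(0, 0.1, size=model.shape)
    print("estimated translation:", estimate_translation(scene, model))
```

In this sketch, a broad `coarse_sigma` keeps the objective smooth enough for a global search, while the narrow `fine_sigma` sharpens it around the optimum for local refinement, mirroring the abstract's point that recognition and refinement can share one method and differ only in the chosen dispersion.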

Citation (APA)

Dantanarayana, H. G., & Huntley, J. M. (2017). Object recognition and localization from 3D point clouds by maximum-likelihood estimation. Royal Society Open Science, 4(8). https://doi.org/10.1098/rsos.160693
