Object recognition in 3D point cloud of urban street scene


Abstract

In this paper we present a novel street scene semantic recognition framework that takes advantage of 3D point clouds captured by a high-definition LiDAR laser scanner. An important problem in object recognition is the need for sufficient labeled training data to learn robust classifiers. We show how to significantly reduce the need for manually labeled training data by reducing scene complexity through unsupervised ground and building segmentation. Our system first segments the ground point cloud automatically, since the ground connects almost all other objects; a connected-component-based algorithm is then used to over-segment the remaining point cloud. Next, building facades are detected using binary range image processing. The remaining points are grouped into voxels, which are then merged into supervoxels. Local 3D features extracted from the supervoxels are classified by trained boosted decision trees and labeled with semantic classes, e.g. tree, pedestrian, car. The proposed method is evaluated both quantitatively and qualitatively on a challenging fixed-position Terrestrial Laser Scanning (TLS) Velodyne data set and two Mobile Laser Scanning (MLS) databases, Paris-rue-Madame and NAVTEQ True. Robust scene parsing results are reported.
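The over-segmentation step described above groups nearby points by connectivity on an occupancy grid. A minimal, illustrative sketch of that idea in pure Python follows; the voxel size and 26-neighbour connectivity are assumptions for illustration, not the paper's actual parameters.

```python
from collections import defaultdict, deque

def voxelize(points, voxel_size=0.2):
    """Map each 3D point to the integer index of the voxel containing it."""
    voxels = defaultdict(list)
    for i, (x, y, z) in enumerate(points):
        key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        voxels[key].append(i)
    return voxels

def connected_components(voxels):
    """Group occupied voxels into components via BFS over 26-connectivity."""
    seen, components = set(), []
    for start in voxels:
        if start in seen:
            continue
        comp, queue = [], deque([start])
        seen.add(start)
        while queue:
            vx, vy, vz = queue.popleft()
            comp.append((vx, vy, vz))
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    for dz in (-1, 0, 1):
                        n = (vx + dx, vy + dy, vz + dz)
                        if n in voxels and n not in seen:
                            seen.add(n)
                            queue.append(n)
        components.append(comp)
    return components

# Two well-separated point clusters should yield two components.
points = [(0.0, 0.0, 0.0), (0.1, 0.1, 0.0), (5.0, 5.0, 0.0), (5.1, 5.0, 0.1)]
comps = connected_components(voxelize(points))
print(len(comps))  # prints 2
```

In the full pipeline, each resulting segment (supervoxel) would then be described by local 3D features and passed to the boosted decision tree classifier.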

Citation (APA)

Babahajiani, P., Fan, L., & Gabbouj, M. (2015). Object recognition in 3D point cloud of urban street scene. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9008, pp. 177–190). Springer Verlag. https://doi.org/10.1007/978-3-319-16628-5_13
