Depth representation of lidar point cloud with adaptive surface patching for object classification

Abstract

Object segmentation and classification from light detection and ranging (LiDAR) point clouds are increasingly important in 3D mapping and autonomous mobile systems. Although distance measurement and object localization from laser pulses are more accurate and more robust to environmental variations than an image, the reflected points in each frame are sparse and lack semantic information. An appropriate representation that can extract object characteristics from a single-frame point cloud is important for segmenting a moving object before it leaves a trail in the reconstruction. We propose depth projection and an adaptive surface patch to extract and emphasize the shape, curvature, and some texture of the object point cloud for classification. The projection plane is based on the sensor position to ensure that the projected image contains fine details of the object surface. An adaptive surface patch is used to construct an object surface from a sparse point cloud at any distance. The experimental results indicate that the object representation can be used to classify an object by means of an existing image classification method [1].
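The abstract only outlines the pipeline; the sketch below illustrates how a sensor-centered depth projection with distance-adaptive patches might be implemented, assuming a spherical projection around the sensor and a patch size that grows linearly with range. The function name, image resolution, field-of-view limits, and patch-growth rule are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def depth_image_with_adaptive_patches(points, h=64, w=512,
                                       fov_up=np.deg2rad(2.0),
                                       fov_down=np.deg2rad(-24.8),
                                       base_patch=1, ref_range=10.0):
    """Project sensor-centered LiDAR points (N, 3) onto a spherical depth
    image, splatting each point as a square patch whose size grows with
    range so that sparse, distant returns still form a connected surface.
    All parameter values here are illustrative, not the paper's settings.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1) + 1e-8            # range from the sensor
    yaw = np.arctan2(y, x)                               # azimuth angle
    pitch = np.arcsin(z / r)                             # elevation angle

    # Map angles to pixel coordinates of the projection image.
    u = np.clip(((yaw + np.pi) / (2.0 * np.pi) * w).astype(int), 0, w - 1)
    v = np.clip(((fov_up - pitch) / (fov_up - fov_down) * h).astype(int), 0, h - 1)

    depth = np.full((h, w), np.inf, dtype=np.float32)
    for i in range(points.shape[0]):
        # Adaptive patch: farther (sparser) points get a larger footprint.
        half = base_patch + int(r[i] / ref_range)
        v0, v1 = max(v[i] - half, 0), min(v[i] + half + 1, h)
        u0, u1 = max(u[i] - half, 0), min(u[i] + half + 1, w)
        # Keep the nearest return where patches overlap.
        depth[v0:v1, u0:u1] = np.minimum(depth[v0:v1, u0:u1], r[i])

    depth[np.isinf(depth)] = 0.0                          # pixels with no return
    return depth
```

The resulting depth image can then be fed to an off-the-shelf image classifier, as the abstract describes.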

Citation (APA)

Lertniphonphan, K., Komorita, S., Tasaka, K., & Yanagihara, H. (2018). Depth representation of lidar point cloud with adaptive surface patching for object classification. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10705 LNCS, pp. 367–371). Springer Verlag. https://doi.org/10.1007/978-3-319-73600-6_34
