Using Simulation Data from Gaming Environments for Training a Deep Learning Algorithm on 3D Point Clouds

Abstract

Deep neural networks (DNNs) and convolutional neural networks (CNNs) have demonstrated greater robustness and accuracy in classifying two-dimensional images and three-dimensional point clouds than more traditional machine learning approaches. Their main drawback, however, is the need for large quantities of semantically labeled training data, which are often out of reach for those with limited resources. In this study, we evaluated the use of simulated 3D point clouds for training a CNN to segment and classify 3D point clouds of real-world urban environments. The simulation involved collecting light detection and ranging (LiDAR) data with a simulated 16-channel laser scanner in the CARLA (Car Learning to Act) autonomous-driving gaming environment. We used this labeled data to train the Kernel Point Convolution (KPConv) network and its fully convolutional segmentation variant (KP-FCNN), which we then tested on real-world LiDAR data from the NPM3D benchmark data set. Our results show that high accuracy can be achieved using data collected in a simulator.
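The data-collection step described in the abstract can be reproduced with CARLA's Python API. The snippet below is a minimal sketch of attaching a semantic LiDAR sensor to a vehicle and saving labeled sweeps; only the 16-channel setting comes from the paper, while all other parameter values, variable names, and the output path are illustrative assumptions rather than the authors' actual configuration.

```python
# Hypothetical sketch: collect semantically labeled LiDAR sweeps in CARLA.
# Assumes a CARLA server is running on localhost:2000; all parameter values
# except the 16-channel setting are illustrative, not taken from the paper.
import os
import numpy as np
import carla

client = carla.Client('localhost', 2000)
client.set_timeout(10.0)
world = client.get_world()
blueprint_library = world.get_blueprint_library()

# Spawn an ego vehicle to carry the sensor (blueprint choice is arbitrary).
vehicle_bp = blueprint_library.filter('vehicle.*')[0]
spawn_point = world.get_map().get_spawn_points()[0]
vehicle = world.spawn_actor(vehicle_bp, spawn_point)
vehicle.set_autopilot(True)

# CARLA's semantic LiDAR returns a per-point object tag, which serves as the label.
lidar_bp = blueprint_library.find('sensor.lidar.ray_cast_semantic')
lidar_bp.set_attribute('channels', '16')             # simulated 16-channel scanner (from the paper)
lidar_bp.set_attribute('range', '100.0')             # assumed range in metres
lidar_bp.set_attribute('rotation_frequency', '10.0') # assumed scan rate
lidar_bp.set_attribute('points_per_second', '300000')
lidar_transform = carla.Transform(carla.Location(x=0.0, z=1.8))  # roof-mounted
lidar = world.spawn_actor(lidar_bp, lidar_transform, attach_to=vehicle)

os.makedirs('sweeps', exist_ok=True)

def save_sweep(measurement):
    """Convert one semantic LiDAR sweep into an (N, 4) array of x, y, z, label."""
    data = np.frombuffer(measurement.raw_data, dtype=np.dtype([
        ('x', np.float32), ('y', np.float32), ('z', np.float32),
        ('cos_angle', np.float32), ('obj_idx', np.uint32), ('obj_tag', np.uint32)]))
    points = np.stack([data['x'], data['y'], data['z'],
                       data['obj_tag'].astype(np.float32)], axis=-1)
    np.save(f'sweeps/frame_{measurement.frame:06d}.npy', points)

lidar.listen(save_sweep)
```

Sweeps saved this way can then be converted into whatever point-cloud format the chosen KPConv/KP-FCNN training pipeline expects; the conversion details depend on that pipeline and are not covered here.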

Citation (APA)

Spiegel, S., & Chen, J. (2021). Using simulation data from gaming environments for training a deep learning algorithm on 3D point clouds. In ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences (Vol. 8, pp. 67–74). Copernicus GmbH. https://doi.org/10.5194/isprs-annals-VIII-4-W2-2021-67-2021
