EPGNet: Enhanced point cloud generation for 3D object detection


Abstract

Three-dimensional object detection from point cloud data is increasingly important, especially for autonomous driving. However, because of its scanning characteristics, lidar rarely captures the complete structure of an object in a real scene. Although existing methods have made great progress, most of them ignore prior information about object structure, such as symmetry. In this paper, we therefore use the symmetry of an object to complete the missing parts of its point cloud before detection. Specifically, we propose a two-stage detection framework. In the first stage, an encoder–decoder network generates a symmetry point for each foreground point; these symmetry points, together with the non-empty voxel centers, form an enhanced point cloud. In the second stage, the enhanced point cloud is fed into the baseline, an anchor-based region proposal network, to produce the detection results. Extensive experiments on the challenging KITTI benchmark show the effectiveness of our method, which outperforms several previous state-of-the-art methods on both 3D and BEV (bird's eye view) object detection.
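
The minimal Python sketch below illustrates the two-stage idea described in the abstract, assuming a PyTorch-style implementation. The module names, layer sizes, and the offset-regression formulation of the symmetry points are illustrative assumptions for exposition, not the authors' code.

# Sketch of an EPGNet-style pipeline (illustrative assumptions throughout).
import torch
import torch.nn as nn

class SymmetryPointGenerator(nn.Module):
    """Stage 1 (sketch): an encoder-decoder that predicts, for each
    foreground point, a symmetric counterpart that completes the object."""

    def __init__(self, in_dim=3, hidden_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        # The decoder regresses a 3D offset from each foreground point
        # to its predicted symmetry point (an assumed formulation).
        self.decoder = nn.Linear(hidden_dim, 3)

    def forward(self, foreground_pts):            # (N, 3)
        feats = self.encoder(foreground_pts)
        offsets = self.decoder(feats)
        return foreground_pts + offsets           # predicted symmetry points

def build_enhanced_point_cloud(foreground_pts, voxel_centers, generator):
    """Concatenate predicted symmetry points with the non-empty voxel
    centers, forming the enhanced point cloud consumed by stage 2."""
    sym_pts = generator(foreground_pts)
    return torch.cat([sym_pts, voxel_centers], dim=0)

if __name__ == "__main__":
    gen = SymmetryPointGenerator()
    fg = torch.randn(128, 3)       # hypothetical foreground points
    centers = torch.randn(512, 3)  # hypothetical non-empty voxel centers
    enhanced = build_enhanced_point_cloud(fg, centers, gen)
    # Stage 2 (not sketched): feed `enhanced` into an anchor-based region
    # proposal network to produce the final 3D detections.
    print(enhanced.shape)          # torch.Size([640, 3])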

Citation (APA)

Chen, Q., Fan, C., Jin, W., Zou, L., Li, F., Li, X., … Liu, Y. (2020). EPGNet: Enhanced point cloud generation for 3D object detection. Sensors (Switzerland), 20(23), 1–17. https://doi.org/10.3390/s20236927
