3D object detection is a key ingredient of many autonomous systems. Numerous 3D object detection methods rely on LiDAR because it is robust to illumination conditions and provides accurate distance measurements. Applying LiDAR-based 3D object detection networks to new object categories, however, requires new training datasets, and because labeling target objects with 3D bounding boxes in LiDAR point clouds demands significant resources and open datasets annotate only car-related classes, it is challenging to deploy LiDAR-based 3D object detectors for objects not related to cars. We propose a system that automatically generates annotated pseudo-LiDAR (APL) data, requiring only stereo images to synthesize 3D bounding box annotations and pseudo-LiDAR points. The proposed method dramatically reduces the effort and time needed to build a LiDAR-based 3D object detection dataset. By leveraging the classes available in 2D image datasets, the framework can annotate diverse objects beyond the limited classes of existing LiDAR-based 3D object detection datasets. To verify the capability of the synthesized training data, we train 3D object detection networks on APL data for new classes. The experiments show that the networks trained on the APL data can detect objects of the new classes in LiDAR point clouds, demonstrating that the proposed method enables LiDAR-based 3D object detectors to handle objects not covered by existing LiDAR-based 3D object detection datasets.
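The pseudo-LiDAR idea referenced in the abstract rests on the standard back-projection of a stereo-derived depth map into 3D points via the pinhole camera model. The sketch below is a minimal illustration of that generic conversion, not the authors' pipeline; the function name, the synthetic depth map, and the KITTI-like intrinsic values are hypothetical placeholders.

```python
import numpy as np

def depth_to_pseudo_lidar(depth, fx, fy, cx, cy):
    """Back-project a dense depth map (H x W, metres) into a pseudo-LiDAR
    point cloud in the camera frame using the pinhole camera model.

    depth  : per-pixel depth, e.g. from stereo matching
             (depth = fx * baseline / disparity).
    fx, fy : focal lengths in pixels; cx, cy : principal point.
    Returns an (N, 3) array of [x, y, z] points.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]                 # keep valid depths only

if __name__ == "__main__":
    # Placeholder: a flat 10 m depth plane with hypothetical intrinsics.
    depth = np.full((375, 1242), 10.0)
    cloud = depth_to_pseudo_lidar(depth, fx=721.5, fy=721.5, cx=609.6, cy=172.9)
    print(cloud.shape)  # (465750, 3)
```

A point cloud produced this way can then be formatted like real LiDAR sweeps and, together with 3D box annotations, used as training input for LiDAR-based detectors, which is the role the APL data plays in the paper.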
Oh, C., Jang, Y., Shim, D., Kim, C., Kim, J., & Kim, H. J. (2024). Automatic Pseudo-LiDAR Annotation: Generation of Training Data for 3D Object Detection Networks. IEEE Access, 12, 14227–14237. https://doi.org/10.1109/ACCESS.2024.3355137