RoIFusion: 3D Object Detection from LiDAR and Vision

Abstract

When localizing and detecting 3D objects in autonomous driving scenes, information from multiple sensors (e.g., camera, LiDAR) provides mutually complementary cues that enhance the robustness of 3D detectors. In this paper, a deep neural network architecture, named RoIFusion, is proposed to efficiently fuse multi-modality features for 3D object detection by leveraging the advantages of the LiDAR and camera sensors. To achieve this, instead of densely combining point-wise features of the point cloud with the related pixel features, our fusion method aggregates a small set of 3D Regions of Interest (RoIs) in the point cloud with the corresponding 2D RoIs in the image, which reduces the computation cost and avoids viewpoint misalignment during feature aggregation across sensors. Finally, extensive experiments on the challenging KITTI 3D object detection benchmark show the effectiveness of our fusion method and demonstrate that our deep fusion approach achieves state-of-the-art performance.
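To make the RoI pairing concrete, below is a minimal Python/NumPy sketch of the geometric step the abstract describes: projecting a 3D RoI into the image plane to obtain the corresponding 2D RoI, then aggregating per-RoI features from the two modalities. The projection matrix, box dimensions, feature sizes, and the concatenation-based fusion are all illustrative assumptions, not details taken from the paper, which learns its fusion end to end.

```python
import numpy as np

# Hypothetical 3x4 camera projection matrix (KITTI-style); the values
# are illustrative placeholders, not the paper's calibration.
P = np.array([[720.0,   0.0, 620.0, 0.0],
              [  0.0, 720.0, 190.0, 0.0],
              [  0.0,   0.0,   1.0, 0.0]])

def project_3d_roi_to_2d(corners_3d, P):
    """Project the 8 corners of a 3D RoI (camera coordinates, shape (8, 3))
    onto the image plane and return the tight axis-aligned 2D box."""
    hom = np.hstack([corners_3d, np.ones((8, 1))])  # (8, 4) homogeneous
    pts = (P @ hom.T).T                             # (8, 3) image-plane points
    pts = pts[:, :2] / pts[:, 2:3]                  # perspective divide -> pixels
    x1, y1 = pts.min(axis=0)
    x2, y2 = pts.max(axis=0)
    return np.array([x1, y1, x2, y2])

def fuse_roi_features(feat_3d, feat_2d):
    """Aggregate the per-RoI point-cloud and image features; plain
    concatenation stands in here for the paper's learned fusion."""
    return np.concatenate([feat_3d, feat_2d], axis=-1)

# A 3D RoI roughly 4 m x 1.5 m x 4 m, centered ~11 m in front of the camera.
corners = np.array([[x, y, z]
                    for x in (-2.0, 2.0)
                    for y in (-0.8, 0.7)
                    for z in (9.0, 13.0)])
box_2d = project_3d_roi_to_2d(corners, P)
fused = fuse_roi_features(np.random.rand(128), np.random.rand(128))
print(box_2d, fused.shape)
```

Because only a small set of RoIs is projected and fused, this per-RoI aggregation is far cheaper than pairing every LiDAR point with its pixel feature, and it sidesteps the viewpoint misalignment of dense point-to-pixel fusion.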

Cite

Chen, C., Fragonara, L. Z., & Tsourdos, A. (2021). RoIFusion: 3D Object Detection from LiDAR and Vision. IEEE Access, 9, 51710–51721. https://doi.org/10.1109/ACCESS.2021.3070379
