LiDAR and Camera Sensor Fusion for 2D and 3D Object Detection

Abstract

Perception of the surrounding world is key for autonomous driving applications. To enable better perception across many different scenarios, vehicles can rely on camera and LiDAR sensors. LiDAR and camera each provide different kinds of information about the world, yet often about the same features. In this research, two feature-based fusion methods are proposed that combine camera and LiDAR information to improve knowledge of the surrounding world and increase confidence in what is detected. Both methods work by proposing a region of interest (ROI) and inferring the properties of the object within that ROI. The system outputs fused sensor data alongside additional object properties inferred from that fused data.
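The abstract does not specify the exact fusion pipeline, but a minimal sketch of the general ROI-based pattern it describes could look like the following: LiDAR points are projected into the image, the points falling inside a camera-proposed 2D ROI are selected, and object properties such as distance are inferred from them. All calibration values, the synthetic point cloud, the ROI, and the helper names (project_points, fuse_roi) are hypothetical illustrations, not the authors' implementation.

```python
# Illustrative sketch of ROI-based LiDAR-camera fusion (not the paper's code).
# A 2D detection (ROI) from the camera is combined with LiDAR points projected
# into the image; object properties are inferred from the points in the ROI.
import numpy as np

def project_points(points_lidar, K, R, t):
    """Project Nx3 LiDAR points into image pixels via a pinhole model.

    Returns (uv, depth): Nx2 pixel coordinates and per-point depth in the
    camera frame. Points behind the camera get depth <= 0.
    """
    points_cam = points_lidar @ R.T + t           # LiDAR frame -> camera frame
    depth = points_cam[:, 2]
    uv_h = points_cam @ K.T                       # homogeneous pixel coords
    uv = uv_h[:, :2] / np.clip(uv_h[:, 2:3], 1e-6, None)
    return uv, depth

def fuse_roi(points_lidar, roi, K, R, t):
    """Infer object properties for one ROI given as (x_min, y_min, x_max, y_max)."""
    uv, depth = project_points(points_lidar, K, R, t)
    x0, y0, x1, y1 = roi
    in_roi = (
        (depth > 0)
        & (uv[:, 0] >= x0) & (uv[:, 0] <= x1)
        & (uv[:, 1] >= y0) & (uv[:, 1] <= y1)
    )
    pts = points_lidar[in_roi]
    if len(pts) == 0:
        return None  # no LiDAR support for this camera detection
    return {
        "distance_m": float(np.median(depth[in_roi])),  # robust depth estimate
        "centroid_lidar": pts.mean(axis=0),             # rough 3D position
        "num_points": int(len(pts)),                    # crude confidence cue
    }

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic cloud: an object cluster ~10 m ahead plus scattered background.
    # For simplicity the points are expressed directly in a camera-like frame,
    # so the hypothetical extrinsics below are identity.
    cluster = rng.normal([0.0, 0.0, 10.0], 0.2, size=(200, 3))
    background = rng.uniform([-20, -2, 5], [20, 2, 60], size=(2000, 3))
    cloud = np.vstack([cluster, background])
    K = np.array([[700.0, 0.0, 320.0],
                  [0.0, 700.0, 240.0],
                  [0.0, 0.0, 1.0]])                # hypothetical intrinsics
    R, t = np.eye(3), np.zeros(3)                  # hypothetical extrinsics
    # Hypothetical 2D detection centered on the cluster's projection.
    print(fuse_roi(cloud, roi=(280, 200, 360, 280), K=K, R=R, t=t))
```

Using the median depth of the in-ROI points, rather than the mean, is one simple way to stay robust to stray background points that project into the same box.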

Citation (APA)

Balemans, D., Vanneste, S., de Hoog, J., Mercelis, S., & Hellinckx, P. (2020). LiDAR and Camera Sensor Fusion for 2D and 3D Object Detection. In Lecture Notes in Networks and Systems (Vol. 96, pp. 798–807). Springer. https://doi.org/10.1007/978-3-030-33509-0_75
