Real-time 3D object detection and SLAM fusion in a low-cost LiDAR test vehicle setup


Abstract

Recently released research on deep learning for autonomous driving perception focuses heavily on LiDAR point cloud data as input to neural networks, highlighting the importance of LiDAR technology in the field of Autonomous Driving (AD). Indeed, a large share of the vehicle platforms used to create the datasets behind these networks, as well as some commercial AD solutions available on the market, invest heavily in extensive sensor arrays, comprising both a large number of sensors and several sensor modalities. However, these costs create a barrier to entry for low-cost solutions to critical perception tasks such as Object Detection and SLAM. This paper surveys current vehicle platforms and proposes a low-cost, LiDAR-based test vehicle platform capable of running critical perception tasks (Object Detection and SLAM) in real time. Additionally, we propose a deep learning-based inference model for Object Detection deployed on a resource-constrained device, as well as a graph-based SLAM implementation, discussing the design considerations imposed by the real-time processing requirement and presenting results that demonstrate the usability of the developed work on the proposed low-cost platform.
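To give a flavor of the graph-based SLAM formulation the abstract refers to, the following is a minimal, illustrative sketch (not the paper's actual implementation, which is 3D and runs on real LiDAR data): poses are graph nodes, relative measurements are edges, and the trajectory is estimated by least squares. Here a 1D pose graph with three odometry edges and one loop-closure edge is solved with NumPy; the numbers are made up for illustration.

```python
import numpy as np

# Toy 1D pose graph: nodes x0..x3 (x0 anchored at 0).
# Edges: noisy odometry between consecutive poses, plus one
# loop-closure measurement from x3 back to x0.
odometry = [1.1, 0.9, 1.05]   # measured displacements x1-x0, x2-x1, x3-x2
loop_closure = -3.0           # measured displacement x0 - x3

# Stack each edge as one row of a linear system A @ x = b over the
# free poses [x1, x2, x3] (x0 = 0 is substituted in directly).
A = np.array([
    [ 1.0,  0.0,  0.0],   # x1 - x0 = odometry[0]
    [-1.0,  1.0,  0.0],   # x2 - x1 = odometry[1]
    [ 0.0, -1.0,  1.0],   # x3 - x2 = odometry[2]
    [ 0.0,  0.0, -1.0],   # x0 - x3 = loop_closure
])
b = np.array([odometry[0], odometry[1], odometry[2], loop_closure])

# The least-squares solution spreads the loop-closure discrepancy
# (raw odometry sums to 3.05, the loop edge says 3.0) over all edges.
x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x)
```

In a real system the poses are 6-DoF transforms and the optimization is nonlinear (typically iterated Gauss-Newton or Levenberg-Marquardt, as in libraries such as g2o or GTSAM), but the structure is the same: one residual per graph edge, minimized jointly over all poses.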

CITATION STYLE

APA

Fernandes, D., Afonso, T., Girão, P., Gonzalez, D., Silva, A., Névoa, R., … Melo-Pinto, P. (2021). Real-time 3D object detection and SLAM fusion in a low-cost LiDAR test vehicle setup. Sensors, 21(24). https://doi.org/10.3390/s21248381
