End-to-End Multimodal Sensor Dataset Collection Framework for Autonomous Vehicles


Abstract

Autonomous driving vehicles rely on sensors for robust perception of their surroundings. Such vehicles are equipped with multiple perceptive sensors with a high level of redundancy to ensure safety and reliability in any driving condition. However, multi-sensor systems combining cameras, LiDAR, and radar raise requirements related to sensor calibration and synchronization, which are the fundamental building blocks of any autonomous system. At the same time, sensor fusion and integration have become important aspects of autonomous driving research and directly determine the efficiency and accuracy of advanced functions such as object detection and path planning. Classical model-based estimation and data-driven models are the two mainstream approaches to achieving such integration. Most recent research is shifting to the latter, which shows high robustness in real-world applications but requires large quantities of data to be collected, synchronized, and properly categorized. However, there are two major research gaps in existing works: (i) they lack fusion (and synchronization) of multiple sensors, namely camera, LiDAR, and radar; and (ii) they lack a generic, scalable, and user-friendly end-to-end implementation. To generalize the implementation of the multi-sensor perceptive system, we introduce an end-to-end generic sensor dataset collection framework that includes both hardware deployment solutions and sensor fusion algorithms. The framework prototype integrates a diverse set of sensors: camera, LiDAR, and radar. Furthermore, we present a universal toolbox to calibrate and synchronize the three types of sensors based on their characteristics. The framework also includes fusion algorithms that exploit the complementary merits of the three sensors and fuse their sensory information in a manner useful for object detection and tracking research. The generality of this framework makes it applicable to any robotic or autonomous application and suitable for quick and large-scale practical deployment.
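The abstract does not detail the synchronization mechanism, but a common approach for sensors running at different rates is nearest-timestamp matching within a tolerance window. The sketch below is an illustrative assumption, not the paper's actual implementation: it pairs each camera frame with the closest LiDAR and radar measurements, dropping frames without a sufficiently close match. The function names (`nearest_index`, `synchronize`) and the 50 ms default tolerance are hypothetical.

```python
from bisect import bisect_left


def nearest_index(timestamps, t):
    """Index of the value in a sorted timestamp list closest to t."""
    i = bisect_left(timestamps, t)
    if i == 0:
        return 0
    if i == len(timestamps):
        return len(timestamps) - 1
    # Pick whichever neighbor is nearer to t.
    return i if timestamps[i] - t < t - timestamps[i - 1] else i - 1


def synchronize(camera_ts, lidar_ts, radar_ts, tolerance=0.05):
    """Pair each camera frame with the nearest LiDAR and radar samples.

    All inputs are sorted timestamp lists (seconds). Frames with no
    match within `tolerance` seconds in *both* other streams are dropped.
    Returns a list of (camera_idx, lidar_idx, radar_idx) triplets.
    """
    triplets = []
    for i, t in enumerate(camera_ts):
        j = nearest_index(lidar_ts, t)
        k = nearest_index(radar_ts, t)
        if abs(lidar_ts[j] - t) <= tolerance and abs(radar_ts[k] - t) <= tolerance:
            triplets.append((i, j, k))
    return triplets
```

In practice the tolerance would be chosen from the slowest sensor's period (e.g. half a frame interval), and hardware triggering or PTP clock sync would be used upstream so that timestamps are comparable in the first place.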


Citation (APA)

Gu, J., Lind, A., Chhetri, T. R., Bellone, M., & Sell, R. (2023). End-to-End Multimodal Sensor Dataset Collection Framework for Autonomous Vehicles. Sensors, 23(15). https://doi.org/10.3390/s23156783
