Camera and LiDAR Fusion for Point Cloud Semantic Segmentation


Abstract

Perception is a fundamental component of any autonomous driving system, and semantic segmentation is the perception task of assigning semantic class labels to sensor inputs. Although autonomous vehicles carry a suite of sensors, most work in the literature has focused on semantic segmentation of camera images alone; the fusion of different sensor modalities for this task remains comparatively under-explored. Deep learning models based on transformer architectures have proven successful in many computer vision and natural language processing tasks. This work explores the use of transformers to fuse information from LiDAR and camera sensors in order to improve the segmentation of LiDAR point clouds. It also investigates which fusion level within this deep learning framework yields better performance.
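To make the fusion idea concrete, below is a minimal, hypothetical sketch of one common early-fusion scheme: projecting LiDAR points onto the camera image with the intrinsic matrix and attaching the sampled RGB values to each point's geometric features. This is a generic illustration, not the method proposed in the paper; the function names and the simple pinhole model are assumptions for exposition.

```python
import numpy as np

def project_points_to_image(points, K):
    """Project Nx3 LiDAR points (already in the camera frame) onto the
    image plane using a 3x3 intrinsic matrix K. Returns Nx2 pixel
    coordinates and a mask of points in front of the camera."""
    in_front = points[:, 2] > 0
    uvw = (K @ points.T).T           # Nx3 homogeneous pixel coordinates
    uv = uvw[:, :2] / uvw[:, 2:3]    # perspective divide
    return uv, in_front

def fuse_point_colors(points, image, K):
    """Early fusion sketch: attach each point's sampled RGB value,
    yielding Nx6 (x, y, z, r, g, b) features that a downstream point
    cloud segmentation network could consume."""
    h, w, _ = image.shape
    uv, in_front = project_points_to_image(points, K)
    u = np.clip(np.round(uv[:, 0]).astype(int), 0, w - 1)
    v = np.clip(np.round(uv[:, 1]).astype(int), 0, h - 1)
    colors = image[v, u].astype(np.float32)
    colors[~in_front] = 0.0          # zero out points behind the camera
    return np.hstack([points, colors])
```

A late-fusion variant would instead run separate image and point cloud branches and combine their learned feature maps (for example, via cross-attention in a transformer), which is the kind of design choice the fusion-level question in the abstract refers to.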

Citation (APA)

Abdelkader, A., & Moustafa, M. (2023). Camera and LiDAR Fusion for Point Cloud Semantic Segmentation. In Lecture Notes in Networks and Systems (Vol. 464, pp. 499–508). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-981-19-2394-4_45
