msLPCC: A Multimodal-Driven Scalable Framework for Deep LiDAR Point Cloud Compression

Abstract

LiDAR sensors are widely used in autonomous driving, and growing storage and transmission demands have made LiDAR point cloud compression (LPCC) an active research topic. To address the challenges posed by the large scale and uneven distribution (spatial and categorical) of LiDAR point data, this paper presents a new multimodal-driven scalable LPCC framework. For the large-scale challenge, we decouple the original LiDAR data into multi-layer point subsets and compress and transmit each layer separately, so that reconstruction quality requirements can be met under different scenarios. For the uneven-distribution challenge, we extract, align, and fuse heterogeneous feature representations, including a point modality carrying position information, a depth modality carrying spatial distance information, and a segmentation modality carrying category information. Extensive experimental results on the benchmark SemanticKITTI database show that our method outperforms 14 recent representative LPCC methods.
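
As a rough illustration of the scalable layering idea described above, the sketch below splits a point cloud into disjoint subsets that could be coded and transmitted independently, with reconstruction quality improving as more layers are received. This is a minimal Python sketch under assumed conventions: the layering rule (a random strided split), the function names, and the array shapes are illustrative assumptions, not the paper's actual layer construction or compression pipeline.

```python
# Illustrative sketch only (not the authors' code): decouple a LiDAR point cloud
# into multiple layers that can be compressed and transmitted separately, so the
# reconstruction quality scales with the number of layers received.
# Assumption: a simple random strided split stands in for msLPCC's layering rule.
import numpy as np

def split_into_layers(points: np.ndarray, num_layers: int = 3, seed: int = 0):
    """Partition an (N, 3) point cloud into `num_layers` disjoint subsets."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(points))
    return [points[order[i::num_layers]] for i in range(num_layers)]

def reconstruct(layers, num_received: int) -> np.ndarray:
    """Concatenate the first `num_received` layers into a partial reconstruction."""
    return np.concatenate(layers[:num_received], axis=0)

if __name__ == "__main__":
    cloud = np.random.rand(120_000, 3).astype(np.float32)  # stand-in for one LiDAR scan
    layers = split_into_layers(cloud, num_layers=3)
    coarse = reconstruct(layers, 1)  # fewer layers -> lower bitrate, coarser quality
    full = reconstruct(layers, 3)    # all layers -> full point set
    print(coarse.shape, full.shape)  # (40000, 3) (120000, 3)
```

In the actual framework, each layer would be passed through a learned encoder and entropy coder rather than stored raw; the sketch only conveys the decouple-compress-per-layer structure.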

Citation (APA)

Wang, M., Huang, R., Dong, H., Lin, D., Song, Y., & Xie, W. (2024). msLPCC: A Multimodal-Driven Scalable Framework for Deep LiDAR Point Cloud Compression. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 38, pp. 5526–5534). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v38i6.28362
