ContrastMotion: Self-supervised Scene Motion Learning for Large-Scale LiDAR Point Clouds


Abstract

In this paper, we propose a novel self-supervised motion estimator for LiDAR-based autonomous driving via BEV representation. Unlike commonly adopted self-supervised strategies based on data-level structure consistency, we predict scene motion via feature-level consistency between pillars in consecutive frames, which eliminates the effect of noise points and view-changing point clouds in dynamic scenes. Specifically, we propose a Soft Discriminative Loss that provides the network with more pseudo-supervised signals to learn discriminative and robust features in a contrastive learning manner. We also propose a Gated Multi-frame Fusion block that automatically learns valid compensation between point cloud frames to enhance feature extraction. Finally, pillar association is proposed to predict pillar correspondence probabilities based on feature distance, from which scene motion is further predicted. Extensive experiments show the effectiveness and superiority of our ContrastMotion on both scene flow and motion prediction tasks.
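The pillar-association step described above turns feature distances into soft correspondence probabilities and then into a motion estimate. A minimal sketch of that idea follows; the function name, the temperature parameter `tau`, and the use of negative squared L2 distance with a softmax are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def pillar_association(feat_t, feat_t1, centers_t, centers_t1, tau=0.01):
    """Illustrative sketch of soft pillar association (hypothetical API).

    feat_t:     (N, C) pillar features at frame t
    feat_t1:    (M, C) pillar features at frame t+1
    centers_t:  (N, 2) BEV pillar centers at frame t
    centers_t1: (M, 2) BEV pillar centers at frame t+1
    """
    # Pairwise squared L2 feature distance between pillars of the two frames.
    d = ((feat_t[:, None, :] - feat_t1[None, :, :]) ** 2).sum(-1)
    # Softmax over frame-(t+1) pillars yields correspondence probabilities;
    # tau controls how sharply the closest feature match dominates.
    p = np.exp(-d / tau)
    p /= p.sum(axis=1, keepdims=True)
    # Expected displacement (soft-argmax): probability-weighted target
    # centers minus the source centers gives a per-pillar BEV motion vector.
    flow = p @ centers_t1 - centers_t
    return p, flow
```

With well-separated features, the probabilities concentrate on the true match and the recovered flow equals the pillar-center offset; in the actual model the features would come from the learned BEV backbone rather than being hand-crafted.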

Citation (APA)

Jia, X., Zhou, H., Zhu, X., Guo, Y., Zhang, J., & Ma, Y. (2023). ContrastMotion: Self-supervised Scene Motion Learning for Large-Scale LiDAR Point Clouds. In IJCAI International Joint Conference on Artificial Intelligence (Vol. 2023-August, pp. 929–937). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2023/103
