Modeling Continuous Motion for 3D Point Cloud Object Tracking

Abstract

The task of 3D single object tracking (SOT) with LiDAR point clouds is crucial for various applications, such as autonomous driving and robotics. However, existing approaches have primarily relied on appearance matching or motion modeling within only two successive frames, thereby overlooking the long-range continuous motion property of objects in 3D space. To address this issue, this paper presents a novel approach that views each tracklet as a continuous stream: at each timestamp, only the current frame is fed into the network to interact with multi-frame historical features stored in a memory bank, enabling efficient exploitation of sequential information. To achieve effective cross-frame message passing, a hybrid attention mechanism is designed to account for both long-range relation modeling and local geometric feature extraction. Furthermore, to enhance the utilization of multi-frame features for robust tracking, a contrastive sequence enhancement strategy is proposed, which uses ground truth tracklets to augment training sequences and promote discrimination against false positives in a contrastive manner. Extensive experiments demonstrate that the proposed method outperforms the state-of-the-art method by significant margins on multiple benchmarks.
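The abstract describes a streaming design in which only the current frame is encoded per timestamp and then fused, through a hybrid attention mechanism, with multi-frame historical features held in a memory bank. The following is a minimal PyTorch sketch, based only on that description, of how such a memory-bank tracker might be wired together; the class names, feature dimensions, the point-wise MLP standing in for local geometric feature extraction, and the regression head are all illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: NOT the authors' released code. All names, shapes,
# and the simplified local branch are assumptions made to illustrate the
# memory-bank / hybrid-attention idea described in the abstract.
from collections import deque

import torch
import torch.nn as nn


class HybridAttentionBlock(nn.Module):
    """Long-range cross-attention over memory features, combined with a
    point-wise MLP as a simplified stand-in for local geometric features."""

    def __init__(self, dim: int = 128, num_heads: int = 4):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.local_mlp = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(inplace=True), nn.Linear(dim, dim)
        )
        self.norm = nn.LayerNorm(dim)

    def forward(self, curr_feats: torch.Tensor, mem_feats: torch.Tensor) -> torch.Tensor:
        # curr_feats: (B, N, C) features of the current frame
        # mem_feats:  (B, M, C) concatenated historical features from the memory bank
        global_out, _ = self.cross_attn(curr_feats, mem_feats, mem_feats)
        local_out = self.local_mlp(curr_feats)
        return self.norm(curr_feats + global_out + local_out)


class StreamingTracker(nn.Module):
    """Treats a tracklet as a stream: only the current frame is encoded, then
    fused with multi-frame features kept in a fixed-size memory bank."""

    def __init__(self, in_dim: int = 3, dim: int = 128, bank_size: int = 4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, dim), nn.ReLU(inplace=True))
        self.fusion = HybridAttentionBlock(dim)
        self.head = nn.Linear(dim, 4)        # e.g. (x, y, z, yaw) offsets, for illustration
        self.bank = deque(maxlen=bank_size)  # stores features of the most recent frames

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # points: (B, N, 3) LiDAR points of the current search region
        feats = self.encoder(points)                    # (B, N, C)
        if self.bank:
            mem = torch.cat(list(self.bank), dim=1)     # (B, M, C) historical features
            feats = self.fusion(feats, mem)
        self.bank.append(feats.detach())                # update memory with current frame
        return self.head(feats.mean(dim=1))             # pooled features -> box regression


if __name__ == "__main__":
    tracker = StreamingTracker()
    stream = [torch.randn(1, 256, 3) for _ in range(5)]  # a toy 5-frame tracklet
    for frame in stream:
        print(tracker(frame).shape)  # torch.Size([1, 4]) at every timestamp
```

In this sketch, stored features are detached before being appended so the memory bank does not extend the autograd graph across frames, which mirrors the efficiency motivation of feeding only the current frame into the network at each timestamp. The contrastive sequence enhancement described in the abstract (augmenting training sequences with ground-truth tracklets to suppress false positives) would sit in the training loop rather than in this forward pass and is not sketched here.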

Cite

Citation style: APA

Luo, Z., Zhang, G., Zhou, C., Wu, Z., Tao, Q., Lu, L., & Lu, S. (2024). Modeling Continuous Motion for 3D Point Cloud Object Tracking. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 38, pp. 4026–4034). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v38i5.28196
