V2X-ViT: Vehicle-to-Everything Cooperative Perception with Vision Transformer

Abstract

In this paper, we investigate the application of Vehicle-to-Everything (V2X) communication to improve the perception performance of autonomous vehicles. We present a robust cooperative perception framework with V2X communication using a novel vision Transformer. Specifically, we build a holistic attention model, namely V2X-ViT, to effectively fuse information across on-road agents (i.e., vehicles and infrastructure). V2X-ViT consists of alternating layers of heterogeneous multi-agent self-attention and multi-scale window self-attention, which capture inter-agent interactions and per-agent spatial relationships, respectively. These key modules are designed in a unified Transformer architecture to handle common V2X challenges, including asynchronous information sharing, pose errors, and heterogeneity of V2X components. To validate our approach, we create a large-scale V2X perception dataset using CARLA and OpenCDA. Extensive experimental results demonstrate that V2X-ViT sets new state-of-the-art performance for 3D object detection and achieves robust performance even under harsh, noisy environments. The code is available at https://github.com/DerrickXuNu/v2x-vit.
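For intuition, below is a minimal PyTorch sketch of the alternating-attention structure the abstract describes: one sub-layer mixes features across agents at each spatial location, the next mixes across spatial locations within each agent. The module names (AgentAttention, SpatialWindowAttention, V2XViTBlock), the tensor layout, and the use of plain multi-head attention in place of the paper's heterogeneous and multi-scale variants are assumptions made for illustration only; the authors' actual implementation is in the linked repository.

```python
import torch
import torch.nn as nn

class AgentAttention(nn.Module):
    """Stand-in for heterogeneous multi-agent self-attention (HMSA):
    each spatial location attends across the N agents' features."""
    def __init__(self, dim, heads=8):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                    # x: (B, N_agents, L_tokens, C)
        B, N, L, C = x.shape
        t = x.permute(0, 2, 1, 3).reshape(B * L, N, C)  # sequences over agents
        q = self.norm(t)
        h, _ = self.attn(q, q, q)
        t = t + h                            # pre-norm residual connection
        return t.reshape(B, L, N, C).permute(0, 2, 1, 3)

class SpatialWindowAttention(nn.Module):
    """Stand-in for multi-scale window self-attention (MSwin):
    each agent's feature map attends over its own spatial tokens
    (a single global window here, for brevity)."""
    def __init__(self, dim, heads=8):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                    # x: (B, N_agents, L_tokens, C)
        B, N, L, C = x.shape
        t = x.reshape(B * N, L, C)           # sequences over spatial tokens
        q = self.norm(t)
        h, _ = self.attn(q, q, q)
        return (t + h).reshape(B, N, L, C)

class V2XViTBlock(nn.Module):
    """One alternating layer: inter-agent fusion, then per-agent spatial mixing."""
    def __init__(self, dim):
        super().__init__()
        self.agent_attn = AgentAttention(dim)
        self.spatial_attn = SpatialWindowAttention(dim)

    def forward(self, x):
        return self.spatial_attn(self.agent_attn(x))

# Toy usage: 2 samples, 3 agents, a 16x16 BEV grid flattened to 256 tokens, 64 channels.
feats = torch.randn(2, 3, 256, 64)
model = nn.Sequential(*[V2XViTBlock(64) for _ in range(2)])
print(model(feats).shape)  # torch.Size([2, 3, 256, 64])
```

The point the sketch makes is the factorization: attention alternates between the agent axis and the spatial axis of the shared feature tensor, which is how the paper's design captures both inter-agent interactions and per-agent spatial relationships within one Transformer.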

Cite (APA)
Xu, R., Xiang, H., Tu, Z., Xia, X., Yang, M. H., & Ma, J. (2022). V2X-ViT: Vehicle-to-Everything Cooperative Perception with Vision Transformer. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13699 LNCS, pp. 107–124). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-19842-7_7
