Vision and Radar Multimodal Aided Beam Prediction: Facilitating Metaverse Development


Abstract

The metaverse requires higher communication rates and greater capacity, which can be attained by combining millimeter wave (mmWave) and terahertz (THz) communication systems with large-scale antenna arrays. However, these systems incur considerable beam training overhead. To address this challenge, this study proposes a novel multimodal deep learning framework based on 3D convolutional transformers for sensor-assisted beam prediction. Our approach fuses vision and radar data, enabling fast and accurate beam prediction. The proposed scheme achieves more than 78% top-3 beam prediction accuracy in four different communication scenarios. Furthermore, its overall prediction accuracy is 85.6%, nearly 10% higher than that obtained with single-sensor data alone. Our solution effectively reduces beam training overhead and provides reliable communication support for high-mobility environments.
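The fusion-and-ranking idea in the abstract can be illustrated with a minimal sketch. The paper's actual encoder is a 3D convolutional transformer trained on vision and radar streams; here, random projections stand in for the learned encoders, and the feature dimensions, codebook size, and function name are all hypothetical. The sketch only demonstrates the late-fusion and top-3 beam selection step, not the published model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, chosen for illustration only: the paper does
# not specify these, and its encoders are learned, not random.
VISION_DIM, RADAR_DIM, NUM_BEAMS = 64, 32, 16

def predict_top_k_beams(vision_feat, radar_feat, w_fused, k=3):
    """Fuse per-modality features and return the k most likely beam indices."""
    fused = np.concatenate([vision_feat, radar_feat])  # simple late fusion
    logits = fused @ w_fused                           # shape (NUM_BEAMS,)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                               # softmax over the beam codebook
    return np.argsort(probs)[::-1][:k]                 # indices of the k best beams

# Stand-ins for encoder outputs and the classification head.
vision_feat = rng.standard_normal(VISION_DIM)
radar_feat = rng.standard_normal(RADAR_DIM)
w_fused = rng.standard_normal((VISION_DIM + RADAR_DIM, NUM_BEAMS))

top3 = predict_top_k_beams(vision_feat, radar_feat, w_fused)
print(top3)  # the three highest-probability beam indices
```

A top-3 prediction is counted correct when the ground-truth beam appears anywhere in the returned indices, which is why top-3 accuracy (78%+) exceeds exact top-1 accuracy and why it is a useful metric for narrowing beam training.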

Citation (APA)

Nie, J., Zhou, Q., Mu, J., & Jing, X. (2023). Vision and Radar Multimodal Aided Beam Prediction: Facilitating Metaverse Development. In ISACom 2023 - Proceedings of the 2nd Workshop on Integrated Sensing and Communications for Metaverse, Part of MobiSys 2023 (pp. 13–18). Association for Computing Machinery, Inc. https://doi.org/10.1145/3597065.3597449
