ST2: Spatial-Temporal State Transformer for Crowd-Aware Autonomous Navigation


Abstract

Empowering an intelligent agent with the ability to navigate autonomously in complex and dynamic environments is an important and active research topic in embodied artificial intelligence. In this letter, we address this challenging task from the perspective of exploiting both the spatial and temporal states of a mobile robot interacting with a crowded environment. Specifically, we propose a Spatial-Temporal State Transformer (ST2) to encode these states, and leverage deep reinforcement learning to find the corresponding optimal navigation policy. Technically, the proposed ST2 model consists of a global spatial state encoder and a temporal state encoder, both built upon the Transformer structure. The spatial state encoder is devised to extract global spatial features and capture the spatial interactions between pedestrians and the robot. The temporal state encoder is designed to model the temporal correlation among consecutive frames and infer the dynamics of spatial position changes over time. Based on this comprehensive spatial-temporal state representation, a value-based reinforcement learning method is used to obtain the optimal navigation policy. Extensive experiments demonstrate the superiority of the proposed ST2 over representative state-of-the-art methods. The source code will be made publicly available.
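The abstract describes a two-stage encoding: attention across agents within each frame (spatial), then attention across frames (temporal), feeding a value function for policy selection. The sketch below illustrates that idea with plain scaled dot-product self-attention in NumPy; all dimensions, the mean-pooling step, and the linear value head are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def self_attention(x):
    # Scaled dot-product self-attention over the rows of x: (n, d) -> (n, d).
    # (Illustrative single-head attention without learned Q/K/V projections.)
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ x

# Toy crowd state: T frames, N agents (robot + pedestrians), d-dim features.
T, N, d = 4, 5, 8
rng = np.random.default_rng(0)
states = rng.standard_normal((T, N, d))

# Spatial encoding: attention across agents within each frame.
spatial = np.stack([self_attention(states[t]) for t in range(T)])  # (T, N, d)

# Pool over agents, then temporal encoding: attention across frames.
per_frame = spatial.mean(axis=1)          # (T, d)
temporal = self_attention(per_frame)      # (T, d)

# Hypothetical value head: linear map from the latest encoded frame to a
# scalar state value, as in value-based deep RL.
w = rng.standard_normal(d)
value = float(temporal[-1] @ w)
```

In a value-based setup such as the one the abstract names, a value like this would be computed for each candidate action's successor state, and the robot would pick the action maximizing it.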

Citation (APA)

Yang, Y., Jiang, J., Zhang, J., Huang, J., & Gao, M. (2023). ST2: Spatial-Temporal State Transformer for Crowd-Aware Autonomous Navigation. IEEE Robotics and Automation Letters, 8(2), 912–919. https://doi.org/10.1109/LRA.2023.3234815
