A Deep Coordination Graph Convolution Reinforcement Learning for Multi-Intelligent Vehicle Driving Policy

Abstract

With the development of Internet of Things (IoT) technology, IoT applications have become widespread in the field of intelligent vehicles, and artificial intelligence algorithms, especially deep reinforcement learning (DRL) methods, are increasingly used in autonomous driving. Early work applied a large number of deep reinforcement learning (RL) techniques to the behavior planning module of single-vehicle autonomous driving. However, autonomous driving takes place in an environment where multiple intelligent vehicles coexist, interact with each other, and change dynamically. In such an environment, multiagent RL is one of the most promising technologies for solving the coordinated behavior planning problem of multiple vehicles, yet research on this topic remains rare. This paper introduces a dynamic coordination graph (CG) convolution technique for the cooperative learning of multiple intelligent vehicles. The method dynamically constructs a CG model among the vehicles, effectively reducing the influence of unrelated vehicles and simplifying the learning process. The relationships between vehicles are refined with an attention mechanism, and graph convolution RL is used to emulate a message-passing aggregation algorithm that maximizes local utilities so as to obtain the maximum joint utility and guide coordinated learning. Driving samples are used as training data, and a reward-shaping-guided model is combined with the model-free graph convolution RL method, which enables the proposed method to achieve good asymptotic performance and improves its learning efficiency. In addition, because the graph convolutional RL algorithm shares parameters between agents, it scales easily to large multiagent systems such as traffic environments. Finally, the proposed algorithm is tested and verified on the multivehicle cooperative lane-changing problem in an autonomous driving simulation environment. Experimental results show that the proposed method has a better value function representation and learns better coordinated driving policies than traditional dynamic coordination algorithms.
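
The abstract names four concrete mechanisms: a coordination graph built dynamically among nearby vehicles, attention-refined message passing via graph convolution, parameter sharing across agents, and reward shaping combined with model-free graph convolution RL. The sketch below is not the authors' implementation; it is a minimal PyTorch illustration under assumed names (dynamic_coordination_graph, AttentionGraphConvLayer, CoordinationGraphQNet, shaped_td_loss), an assumed k-nearest-neighbor rule for building the graph, and the joint utility approximated as the sum of per-agent local utilities.

```python
# Minimal, illustrative sketch (not the paper's code) of the ideas named in the
# abstract: a dynamically built coordination graph, attention-based message
# passing, shared parameters across agents, and a shaped TD target.
import torch
import torch.nn as nn
import torch.nn.functional as F


def dynamic_coordination_graph(positions: torch.Tensor, k: int = 3) -> torch.Tensor:
    """Connect each vehicle to its k nearest neighbors (assumed rule), so that
    distant, unrelated vehicles do not exchange messages."""
    n = positions.size(0)
    dist = torch.cdist(positions, positions)                # (N, N) pairwise distances
    dist.fill_diagonal_(float("inf"))                       # exclude self from neighbor search
    knn = dist.topk(min(k, n - 1), largest=False).indices   # indices of nearest neighbors
    adj = torch.zeros(n, n)
    adj.scatter_(1, knn, 1.0)
    adj = torch.maximum(adj, adj.t())                       # undirected coordination graph
    return adj + torch.eye(n)                               # self-loops keep own features


class AttentionGraphConvLayer(nn.Module):
    """One round of attention-weighted message passing restricted to CG neighbors."""

    def __init__(self, dim: int):
        super().__init__()
        self.query = nn.Linear(dim, dim)
        self.key = nn.Linear(dim, dim)
        self.value = nn.Linear(dim, dim)

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # h: (N, dim) agent embeddings, adj: (N, N) coordination graph
        scores = self.query(h) @ self.key(h).t() / h.size(-1) ** 0.5
        scores = scores.masked_fill(adj == 0, float("-inf"))  # attend only to neighbors
        return F.relu(F.softmax(scores, dim=-1) @ self.value(h))


class CoordinationGraphQNet(nn.Module):
    """Local utilities Q_i(o, a_i) for every vehicle, with parameters shared by all agents."""

    def __init__(self, obs_dim: int, hidden: int, n_actions: int, n_layers: int = 2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.layers = nn.ModuleList(
            [AttentionGraphConvLayer(hidden) for _ in range(n_layers)]
        )
        self.q_head = nn.Linear(hidden, n_actions)

    def forward(self, obs: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        h = self.encoder(obs)             # (N, hidden)
        for layer in self.layers:
            h = h + layer(h, adj)         # residual message passing over the CG
        return self.q_head(h)             # (N, n_actions) local utilities


def shaped_td_loss(net, target_net, obs, adj, actions, env_reward, shaping_bonus,
                   next_obs, next_adj, gamma=0.99):
    """One-step TD loss where the joint utility is approximated by the sum of local
    utilities and the environment reward is shaped with an extra bonus
    (e.g., progress of a cooperative lane change); both rewards are plain floats."""
    q_joint = net(obs, adj).gather(1, actions.unsqueeze(1)).squeeze(1).sum()
    with torch.no_grad():
        next_q_joint = target_net(next_obs, next_adj).max(dim=1).values.sum()
    target = env_reward + shaping_bonus + gamma * next_q_joint
    return F.mse_loss(q_joint, target)
```

In this sketch the attention layer plays the role of the message-passing aggregation step described in the abstract, while the sum-of-local-maxima target is a common simplification used in graph-convolution RL with shared parameters; the parameter sharing is what lets the same network handle an arbitrary number of surrounding vehicles.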

Citation (APA)

Si, H., Tan, G., & Zuo, H. (2022). A Deep Coordination Graph Convolution Reinforcement Learning for Multi-Intelligent Vehicle Driving Policy. Wireless Communications and Mobile Computing, 2022. https://doi.org/10.1155/2022/9665421
