Learning Autonomous Driving Tasks via Human Feedbacks with Large Language Models


Abstract

Traditional autonomous driving systems have mainly focused on making driving decisions without human interaction, overlooking the human-like decision-making and human preferences required in complex traffic scenarios. To bridge this gap, we introduce a novel framework that leverages Large Language Models (LLMs) to learn human-centered driving decisions from diverse simulation scenarios and environments incorporating human feedback. Our contributions include a GPT-4-based programming planner that integrates seamlessly with the existing CARLA simulator to understand traffic scenes and react to human instructions. Specifically, we build a human-guided learning pipeline that incorporates human driver feedback directly into the learning process and stores optimal driving policy code using Retrieval-Augmented Generation (RAG). Impressively, our programming planner, with only 50 saved code snippets, can match the performance of extensively trained baseline reinforcement learning (RL) models. Our paper highlights the potential of an LLM-powered shared-autonomy system, pushing the frontier of autonomous driving system development toward greater interactivity and intuitiveness.
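The RAG component described above stores vetted driving code snippets and retrieves the one whose associated scene description best matches a new traffic scene. The paper does not publish its retrieval implementation; the sketch below is a minimal, self-contained illustration of the idea, using a toy bag-of-words embedding and cosine similarity in place of a learned text encoder. All class and method names (`SnippetStore`, `ego.yield_to_pedestrians`, etc.) are hypothetical.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; a real system would use a learned encoder.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class SnippetStore:
    """Stores (scene description, driving-code snippet) pairs and retrieves
    the snippets whose descriptions best match a new scene query."""

    def __init__(self):
        self.entries = []  # list of (embedding, description, snippet)

    def add(self, description, snippet):
        self.entries.append((embed(description), description, snippet))

    def retrieve(self, scene, k=1):
        q = embed(scene)
        ranked = sorted(self.entries,
                        key=lambda e: cosine(q, e[0]), reverse=True)
        return [(desc, snip) for _, desc, snip in ranked[:k]]

# Populate the store with human-approved policy snippets (illustrative only).
store = SnippetStore()
store.add("pedestrian crossing ahead in low speed zone",
          "ego.set_target_speed(10); ego.yield_to_pedestrians()")
store.add("highway merge with dense traffic",
          "ego.match_lane_speed(); ego.merge_when_gap(min_gap=15)")

# Retrieve the best-matching snippet for a newly observed scene.
best_desc, best_snippet = store.retrieve(
    "a pedestrian is crossing the road ahead")[0]
```

In the paper's pipeline, a retrieved snippet of this kind would then be passed to the GPT-4 planner as context, so that only about 50 stored snippets suffice to cover the evaluated scenarios.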

Citation (APA)

Ma, Y., Cao, X., Ye, W., Cui, C., Mei, K., & Wang, Z. (2024). Learning Autonomous Driving Tasks via Human Feedbacks with Large Language Models. In EMNLP 2024 - 2024 Conference on Empirical Methods in Natural Language Processing, Findings of EMNLP 2024 (pp. 4985–4995). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2024.findings-emnlp.287
