Inverse Reinforcement Learning of Pedestrian-Robot Coordination

Abstract

We apply inverse reinforcement learning (IRL) with a novel cost feature to the problem of robot navigation in human crowds. Consistent with prior empirical work on pedestrian behavior, the feature anticipates collisions between agents. We efficiently learn cost functions in continuous space from high-dimensional examples of public crowd motion data, assuming locally optimal examples. We evaluate the accuracy and predictive power of the learned models on test examples that we attempt to reproduce by optimizing the learned cost functions. We show that the predictions of our models outperform a recent related approach from the literature. The learned cost functions are incorporated into an optimal controller for a robotic wheelchair. We evaluate its performance in qualitative experiments where it autonomously travels between pedestrians, which it perceives through an on-board tracking system. The results show that our approach often generates appropriate motion plans that efficiently complement the pedestrians' motions.
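The abstract describes a cost feature that anticipates collisions between agents. The paper's exact feature definition is not given here; the following is a hypothetical sketch of one common way such a feature can be built, assuming constant-velocity extrapolation of both agents' motions: the anticipated minimum inter-agent distance over a prediction horizon is converted into a cost that grows as a close encounter is predicted. All function names, parameters, and the Gaussian cost shape are illustrative assumptions, not the authors' formulation.

```python
import numpy as np

def collision_anticipation_cost(p1, v1, p2, v2, horizon=5.0, sigma=0.5):
    """Hypothetical collision-anticipating cost feature (not the paper's exact form).

    Assumes both agents keep their current velocities, finds the time of
    closest approach within the horizon, and maps the anticipated minimum
    distance to a cost in (0, 1] that peaks when a collision is predicted.
    """
    # Relative position and velocity of agent 2 w.r.t. agent 1.
    p = np.asarray(p2, dtype=float) - np.asarray(p1, dtype=float)
    v = np.asarray(v2, dtype=float) - np.asarray(v1, dtype=float)
    vv = float(v @ v)
    # Time of closest approach under constant velocities, clamped to
    # [0, horizon]; zero relative velocity means the gap never changes.
    t_star = 0.0 if vv < 1e-12 else float(np.clip(-(p @ v) / vv, 0.0, horizon))
    d_min = float(np.linalg.norm(p + t_star * v))
    # Cost rises sharply as the anticipated minimum distance shrinks.
    return float(np.exp(-d_min**2 / (2.0 * sigma**2)))
```

For example, two agents approaching head-on yield a cost near 1, while agents on well-separated parallel paths yield a cost near 0; in an IRL setting, such features would be weighted and summed into the learned cost function.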

Citation (APA)
Gonon, D., & Billard, A. (2023). Inverse Reinforcement Learning of Pedestrian-Robot Coordination. IEEE Robotics and Automation Letters, 8(8), 4815–4822. https://doi.org/10.1109/LRA.2023.3289770
