Autonomous boat driving system using sample-efficient model predictive control-based reinforcement learning approach

30 Citations
45 Readers (Mendeley users with this article in their library)

This article is free to access.

Abstract

In this article, we propose a novel reinforcement learning (RL) approach specialized for autonomous boats, sample-efficient probabilistic model predictive control (SPMPC), which iteratively learns boat control policies in real ocean environments without prior human knowledge. SPMPC addresses the difficulties of this challenging application: large environmental uncertainties, the need for rapid adaptation to dynamic conditions, and the extremely high cost of exploring and sampling with a real vessel. Under a model-based RL framework, SPMPC combines a Gaussian process model with model predictive control to iteratively model and quickly respond to the uncertain ocean environment while maintaining sample efficiency. An SPMPC system is developed with features including a quadrant-based action search rule, bias compensation, and parallel computing that contribute to better control capability. It successfully learns to control a full-sized single-engine boat, equipped with sensors measuring GPS position, speed, direction, and wind, in a real-world position-holding task without models built from human demonstration.
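The abstract outlines the core loop: learn a Gaussian process dynamics model from logged transitions, then plan each action with model predictive control under a model-based RL framework. The Python sketch below illustrates that general structure on a toy 2-D position-holding problem. The dynamics function, kernel choice, random-shooting planner, and all hyperparameters here are illustrative assumptions, not the paper's actual system (which additionally uses a quadrant-based action search rule, bias compensation, and parallel computing).

```python
# Minimal sketch of a GP-model-based RL loop with sampling-based MPC,
# in the spirit of SPMPC. Everything below is an illustrative assumption.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

def true_dynamics(state, action):
    """Toy stand-in for the real boat: 2-D position drifts with 'wind'
    and moves with the commanded action (hypothetical, not the real vessel)."""
    wind = np.array([0.05, -0.02]) + 0.01 * rng.standard_normal(2)
    return state + 0.1 * action + wind

def mpc_action(gp, state, goal, horizon=5, n_candidates=256):
    """Random-shooting MPC: sample action sequences, roll them out through
    the GP mean prediction, return the first action of the cheapest sequence."""
    candidates = rng.uniform(-1.0, 1.0, size=(n_candidates, horizon, 2))
    costs = np.zeros(n_candidates)
    for i, seq in enumerate(candidates):
        s = state.copy()
        for a in seq:
            delta = gp.predict(np.hstack([s, a]).reshape(1, -1))[0]
            s = s + delta
            costs[i] += np.linalg.norm(s - goal)   # distance-to-goal cost
    return candidates[np.argmin(costs), 0]

# Iterative model-learning / control loop for position holding at the origin.
goal = np.zeros(2)
X, Y = [], []                      # (state, action) -> state change, GP training data
state = np.array([1.0, -1.0])
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)

for step in range(30):
    if len(X) < 5:                 # bootstrap with random exploration
        action = rng.uniform(-1.0, 1.0, 2)
    else:
        gp.fit(np.array(X), np.array(Y))   # refit the dynamics model on all data so far
        action = mpc_action(gp, state, goal)
    next_state = true_dynamics(state, action)
    X.append(np.hstack([state, action]))
    Y.append(next_state - state)   # learn the state *change*, a common modeling choice
    state = next_state
    print(f"step {step:02d}  distance to goal = {np.linalg.norm(state - goal):.3f}")
```

In this sketch the GP is refit on the full dataset at every step, which is feasible only because real-vessel data are scarce; that scarcity is exactly the sample-efficiency setting the paper targets.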


Citation (APA)

Cui, Y., Osaki, S., & Matsubara, T. (2021). Autonomous boat driving system using sample-efficient model predictive control-based reinforcement learning approach. Journal of Field Robotics, 38(3), 331–354. https://doi.org/10.1002/rob.21990
