In this article, we propose a novel reinforcement learning (RL) approach specialized for autonomous boats: sample-efficient probabilistic model predictive control (SPMPC), which iteratively learns control policies for boats in real ocean environments without human prior knowledge. SPMPC addresses the difficulties that arise in this challenging application: large environmental uncertainties, the need for rapid adaptation to dynamic conditions, and the extremely high cost of exploring and sampling with a real vessel. SPMPC combines a Gaussian process model with model predictive control under a model-based RL framework to iteratively model and quickly respond to uncertain ocean environments while maintaining sample efficiency. An SPMPC system is developed with features including a quadrant-based action search rule, bias compensation, and parallel computing, which together contribute to better control capability. It successfully learns to control a full-sized single-engine boat, equipped with sensors measuring GPS position, speed, direction, and wind, in a real-world position-holding task without models derived from human demonstration.
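To make the model-based RL cycle described above more concrete, the following is a minimal, hypothetical Python sketch of a Gaussian process dynamics model combined with sample-based model predictive control in an iterative learn-and-act loop. It is not the authors' implementation: the use of scikit-learn's GaussianProcessRegressor, the toy boat dynamics, the random-shooting planner, the horizon, candidate count, and position-holding cost are all illustrative assumptions.

```python
"""Hypothetical sketch of a GP-model + MPC loop for a position-holding task."""
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
STATE_DIM, ACTION_DIM, HORIZON, N_CANDIDATES = 4, 2, 8, 64
GOAL = np.zeros(2)  # target position to hold (x, y); assumed for illustration


def toy_boat_step(state, action):
    """Stand-in environment: position/velocity driven by the action plus a wind-like disturbance."""
    pos, vel = state[:2], state[2:]
    vel = 0.9 * vel + 0.1 * action + 0.02 * rng.standard_normal(2)
    return np.concatenate([pos + 0.1 * vel, vel])


def cost(state):
    """Position-holding cost: squared distance from the target position."""
    return float(np.sum((state[:2] - GOAL) ** 2))


def plan_action(gp, state):
    """Random-shooting MPC on the GP mean: roll candidate action sequences
    through the learned model and return the first action of the cheapest one."""
    best_action, best_cost = None, np.inf
    for _ in range(N_CANDIDATES):
        actions = rng.uniform(-1.0, 1.0, size=(HORIZON, ACTION_DIM))
        s, total = state.copy(), 0.0
        for a in actions:
            delta = gp.predict(np.concatenate([s, a]).reshape(1, -1))[0]
            s = s + delta  # the GP models the state change per step
            total += cost(s)
        if total < best_cost:
            best_cost, best_action = total, actions[0]
    return best_action


# Iterative model-based RL: act, store transitions, refit the GP each episode.
X, Y = [], []
state = rng.standard_normal(STATE_DIM)
gp = None
for episode in range(5):
    for t in range(20):
        if gp is None:
            a = rng.uniform(-1.0, 1.0, ACTION_DIM)  # random exploration before a model exists
        else:
            a = plan_action(gp, state)
        nxt = toy_boat_step(state, a)
        X.append(np.concatenate([state, a]))
        Y.append(nxt - state)
        state = nxt
    # Refit the GP dynamics model on all collected transitions.
    gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
    gp.fit(np.array(X), np.array(Y))
    print(f"episode {episode}: final position cost {cost(state):.3f}")
```

In this sketch the per-episode refit and the random-shooting planner stand in for the paper's sample-efficient learning and action-search components; the actual system additionally uses the quadrant-based action search rule, bias compensation, and parallel computing mentioned above, which are not reproduced here.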
Cui, Y., Osaki, S., & Matsubara, T. (2021). Autonomous boat driving system using sample-efficient model predictive control-based reinforcement learning approach. Journal of Field Robotics, 38(3), 331–354. https://doi.org/10.1002/rob.21990