Provably Safe Reinforcement Learning via Action Projection Using Reachability Analysis and Polynomial Zonotopes

  • Kochdumper N
  • Krasowski H
  • Wang X
  • Bak S
  • Althoff M
Abstract

While reinforcement learning produces very promising results for many applications, its main disadvantage is the lack of safety guarantees, which prevents its use in safety-critical systems. In this work, we address this issue with a safety shield for nonlinear continuous systems that solve reach-avoid tasks. Our safety shield prevents the application of potentially unsafe actions proposed by a reinforcement learning agent by projecting each proposed action to the closest safe action. This approach, called action projection, is implemented via mixed-integer optimization. The safety constraints for action projection are obtained by applying parameterized reachability analysis using polynomial zonotopes, which makes it possible to accurately capture the nonlinear effects of the actions on the system. In contrast to other state-of-the-art approaches for action projection, our safety shield efficiently handles input constraints and dynamic obstacles, eases the incorporation of the spatial robot dimensions into the safety constraints, guarantees robust safety despite process noise and measurement errors, and is well suited for high-dimensional systems, as we demonstrate on several challenging benchmark systems.
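For intuition, the sketch below illustrates the action-projection idea described in the abstract: given an action proposed by the RL agent, return the closest action that satisfies the safety constraints. This is a simplified stand-in, not the paper's method: it replaces the mixed-integer program over polynomial-zonotope reachable sets with a convex program over hypothetical linear safety constraints A_safe u <= b_safe and box input bounds; the names project_action, A_safe, and b_safe are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def project_action(u_rl, A_safe, b_safe, u_min, u_max):
    """Project a proposed RL action onto a (here linearly approximated)
    safe action set, returning the closest safe action."""
    # Objective: squared distance to the action proposed by the agent.
    obj = lambda u: float(np.sum((u - u_rl) ** 2))

    # Hypothetical linear safety constraints A_safe @ u <= b_safe, standing
    # in for constraints derived from a reachability-based over-approximation.
    cons = [{"type": "ineq", "fun": lambda u: b_safe - A_safe @ u}]

    # Input constraints handled as box bounds on the action.
    bounds = list(zip(u_min, u_max))

    res = minimize(obj, x0=np.clip(u_rl, u_min, u_max),
                   bounds=bounds, constraints=cons, method="SLSQP")
    return res.x

# Toy usage: 2-D action, one half-space safety constraint.
u_rl = np.array([0.9, -0.4])
A_safe = np.array([[1.0, 1.0]])
b_safe = np.array([0.5])
u_safe = project_action(u_rl, A_safe, b_safe,
                        u_min=np.array([-1.0, -1.0]),
                        u_max=np.array([1.0, 1.0]))
print(u_safe)
```

If the proposed action already satisfies the constraints, the projection returns it unchanged, so the shield only intervenes when safety would otherwise be violated.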

Citation (APA)

Kochdumper, N., Krasowski, H., Wang, X., Bak, S., & Althoff, M. (2023). Provably Safe Reinforcement Learning via Action Projection Using Reachability Analysis and Polynomial Zonotopes. IEEE Open Journal of Control Systems, 2, 79–92. https://doi.org/10.1109/ojcsys.2023.3256305
