Safeguarded Progress in Reinforcement Learning: Safe Bayesian Exploration for Control Policy Synthesis

Abstract

This paper addresses the problem of maintaining safety during training in Reinforcement Learning (RL), such that safety-constraint violations are bounded at any point during learning. Since enforcing safety during training might severely limit the agent's exploration, we propose a new architecture that handles the trade-off between efficient progress and safety during exploration. As exploration progresses, we use Bayesian inference to update Dirichlet-Categorical models of the transition probabilities of the Markov decision process that describes the environment dynamics. We then propose a way to approximate the moments of the belief about the risk associated with the action-selection policy. We demonstrate that this approach can be easily interleaved with RL, and we present experimental results that showcase the performance of the overall architecture.
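
To make the Bayesian component of the abstract concrete, the sketch below illustrates how Dirichlet-Categorical posteriors over the MDP transition probabilities can be updated from observed transitions, and how the first two moments of the belief about a policy's risk can be approximated by Monte-Carlo sampling from that posterior. This is an illustrative sketch only, not the authors' algorithm: the class and function names, the finite-horizon reach-avoid notion of risk, and all parameters are assumptions made for this example.

```python
# Illustrative sketch only, not the authors' algorithm. Names, the notion of
# risk (probability of hitting an unsafe state within a horizon), and all
# parameters are assumptions made for this example.
import numpy as np


class DirichletTransitionModel:
    """Dirichlet-Categorical belief over the transition kernel of a finite MDP."""

    def __init__(self, n_states, n_actions, prior=1.0):
        # alpha[s, a, s'] are Dirichlet concentration parameters; a symmetric
        # prior of 1.0 corresponds to a uniform prior over next states.
        self.alpha = np.full((n_states, n_actions, n_states), prior)

    def update(self, s, a, s_next):
        # Bayesian update: each observed transition (s, a, s') adds one count.
        self.alpha[s, a, s_next] += 1.0

    def sample_kernel(self, rng):
        # Draw one transition kernel, P[s, a, :] ~ Dirichlet(alpha[s, a, :]).
        return np.apply_along_axis(rng.dirichlet, -1, self.alpha)


def risk_belief_moments(model, policy, unsafe, s0, horizon,
                        n_kernels=50, n_rollouts=200, seed=0):
    # Approximate the mean and variance of the belief about the policy's risk
    # by sampling kernels from the posterior and rolling the policy out in each.
    rng = np.random.default_rng(seed)
    risks = []
    for _ in range(n_kernels):
        P = model.sample_kernel(rng)
        hits = 0
        for _ in range(n_rollouts):
            s = s0
            for _ in range(horizon):
                a = policy(s)
                s = rng.choice(P.shape[-1], p=P[s, a])
                if unsafe[s]:
                    hits += 1
                    break
        risks.append(hits / n_rollouts)
    risks = np.asarray(risks)
    return risks.mean(), risks.var()
```

In such a scheme, an agent would call update after every environment step and could use the estimated moments, for instance, to keep exploration within a prescribed risk bound.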

Citation (APA)

Mitta, R., Hasanbeig, H., Wang, J., Kroening, D., Kantaros, Y., & Abate, A. (2024). Safeguarded Progress in Reinforcement Learning: Safe Bayesian Exploration for Control Policy Synthesis. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 38, pp. 21412–21419). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v38i19.30137
