Combining Reinforcement Learning and Tensor Networks, with an Application to Dynamical Large Deviations

Abstract

We present a framework to integrate tensor network (TN) methods with reinforcement learning (RL) for solving dynamical optimization tasks. We consider the RL actor-critic method, a model-free approach for solving RL problems, and introduce TNs as the approximators for its policy and value functions. Our "actor-critic with tensor networks" (ACTeN) method is especially well suited to problems with large and factorizable state and action spaces. As an illustration of the applicability of ACTeN, we solve the exponentially hard task of sampling rare trajectories in two paradigmatic stochastic models, the East model of glasses and the asymmetric simple exclusion process, the latter being particularly challenging for other methods due to the absence of detailed balance. With substantial potential for further integration with the vast array of existing RL methods, the approach introduced here is promising both for applications in physics and for multi-agent RL problems more generally.
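To make the central idea concrete, the sketch below shows one way a matrix-product-state (MPS) tensor network could parameterize the value and policy functions of an actor-critic agent on a 1D lattice of binary sites, in the spirit of ACTeN. This is a minimal illustration, not the authors' implementation: the system size, bond dimension, boundary vectors, and the choice to score single-site flips with a softmax are all assumptions made here for brevity.

```python
# Minimal sketch (not the paper's code) of an MPS function approximator
# for an actor-critic agent on a chain of binary sites, e.g. East-model
# occupations. `bond_dim`, `mps_value`, and `softmax_policy` are
# illustrative names, not from the paper.
import numpy as np

rng = np.random.default_rng(0)

N = 10          # number of lattice sites (assumed system size)
phys_dim = 2    # binary occupation per site
bond_dim = 4    # MPS bond dimension (assumed hyperparameter)

# One tensor per site, indexed as (physical, left bond, right bond).
mps = [rng.normal(scale=0.1, size=(phys_dim, bond_dim, bond_dim))
       for _ in range(N)]
left_boundary = rng.normal(size=bond_dim)
right_boundary = rng.normal(size=bond_dim)

def mps_value(state):
    """Contract the MPS along the chain for a binary configuration s:
    V(s) = l^T A_1[s_1] A_2[s_2] ... A_N[s_N] r."""
    vec = left_boundary
    for site, s in enumerate(state):
        vec = vec @ mps[site][s]   # pick physical index, contract bond
    return float(vec @ right_boundary)

def softmax_policy(state):
    """Toy policy: score each single-site flip with the same MPS and
    normalize with a softmax. In ACTeN the policy has its own TN; the
    value MPS is reused here only to keep the sketch short."""
    scores = []
    for site in range(len(state)):
        flipped = list(state)
        flipped[site] ^= 1
        scores.append(mps_value(flipped))
    scores = np.array(scores)
    probs = np.exp(scores - scores.max())
    return probs / probs.sum()

state = rng.integers(0, 2, size=N)
print("V(s)    =", mps_value(state))
print("pi(.|s) =", softmax_policy(state))
```

Because both functions reduce to a chain of small matrix products, their cost grows only linearly in the number of sites for fixed bond dimension, which is what makes a TN ansatz attractive for the large, factorizable state and action spaces mentioned in the abstract.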

Citation (APA)

Gillman, E., Rose, D. C., & Garrahan, J. P. (2024). Combining Reinforcement Learning and Tensor Networks, with an Application to Dynamical Large Deviations. Physical Review Letters, 132(19). https://doi.org/10.1103/PhysRevLett.132.197301
