Application of reinforcement learning in the LHC tune feedback

Abstract

The Beam-Based Feedback System (BBFS) was primarily responsible for correcting the beam energy, orbit and tune in the CERN Large Hadron Collider (LHC). A major code renovation of the BBFS was planned and carried out during the LHC Long Shutdown 2 (LS2). This work presents an exploratory study of solving a beam-based control problem, the tune feedback (QFB), with state-of-the-art Reinforcement Learning (RL). A simulation environment was created to mimic the operation of the QFB, in which a series of RL agents were trained; the best-performing agents were then subjected to a set of dedicated test scenarios. The original feedback controller used in the QFB was reimplemented so that the classical approach could be compared with the selected RL agents in the same scenarios. Results from the simulated environment show that the performance of the RL agents can exceed that of the controller-based paradigm.
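
As a rough illustration of the kind of setup the abstract describes, the sketch below models a toy tune-correction loop as an environment with a reset/step interface and runs a simple proportional-integral baseline against it. Everything here is an assumption made for illustration: the linear response matrix, the noise and drift levels, and the names TuneFeedbackEnv and run_pi_baseline are hypothetical and are not taken from the paper or from the BBFS code base.

# Minimal sketch, assuming a linear tune response to two corrector knobs and
# Gaussian measurement noise. Not the authors' implementation.
import numpy as np

class TuneFeedbackEnv:
    """Toy environment mimicking a tune-correction loop.

    State  : measured tune errors (dQx, dQy) relative to the reference tunes.
    Action : trims applied to two quadrupole-corrector knobs.
    Reward : negative RMS tune error after the trim is applied.
    """

    def __init__(self, seed=0, noise_std=1e-4, drift_std=1e-4):
        self.rng = np.random.default_rng(seed)
        # Assumed 2x2 response matrix from corrector knobs to (Qx, Qy).
        self.response = np.array([[1.0, 0.2],
                                  [0.3, 1.0]]) * 1e-2
        self.noise_std = noise_std
        self.drift_std = drift_std
        self.error = np.zeros(2)

    def reset(self):
        # Start from a random initial tune error.
        self.error = self.rng.normal(0.0, 5e-3, size=2)
        return self._observe()

    def step(self, action):
        # Apply the trims through the linear response and add a slow drift.
        self.error = self.error + self.response @ np.asarray(action)
        self.error += self.rng.normal(0.0, self.drift_std, size=2)
        obs = self._observe()
        reward = -float(np.sqrt(np.mean(self.error ** 2)))
        done = bool(np.all(np.abs(self.error) < 1e-4))
        return obs, reward, done

    def _observe(self):
        # Noisy tune measurement.
        return self.error + self.rng.normal(0.0, self.noise_std, size=2)


def run_pi_baseline(env, steps=50, kp=0.5, ki=0.1):
    """Simple PI controller acting through the (assumed known) inverse response."""
    inv_response = np.linalg.inv(env.response)
    integral = np.zeros(2)
    obs = env.reset()
    reward = 0.0
    for _ in range(steps):
        integral += obs
        action = -inv_response @ (kp * obs + ki * integral)
        obs, reward, done = env.step(action)
        if done:
            break
    return reward


if __name__ == "__main__":
    print("final reward of PI baseline:", run_pi_baseline(TuneFeedbackEnv()))

An RL agent would take the place of run_pi_baseline, learning the mapping from measured tune errors to corrector trims directly from interaction with such an environment rather than relying on a known response matrix.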

Citation (APA)

Grech, L., Valentino, G., Alves, D., & Hirlaender, S. (2022). Application of reinforcement learning in the LHC tune feedback. Frontiers in Physics, 10. https://doi.org/10.3389/fphy.2022.929064
