Using Neural Networks with Data Quantization for Time Series Analysis in LHC Superconducting Magnets


Abstract

The aim of this paper is to present a model based on the recurrent neural network (RNN) architecture, the long short-term memory (LSTM) network in particular, for modeling the operating parameters of Large Hadron Collider (LHC) superconducting magnets. High-resolution data available in the post-mortem database were used to train a set of models and compare their performance for various hyper-parameters, such as the input data quantization and the number of cells. A novel approach to signal-level quantization made it possible to reduce the size of the model, simplify the tuning of the magnet monitoring system, and make the process scalable. The paper shows that an RNN such as the LSTM or a gated recurrent unit (GRU) can be used for modeling high-resolution signals with an accuracy of over 0.95 and a small number of parameters, ranging from 800 to 1200. This makes the solution suitable for hardware implementation, which is essential for monitoring the performance-critical, high-speed signals of LHC superconducting magnets.
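The two ingredients the abstract combines can be illustrated with a minimal sketch: uniform quantization of a real-valued signal into a small number of discrete levels, and a parameter count for a standard LSTM layer showing why a quantized (e.g. one-hot) input keeps the model small. The level counts and layer sizes below are illustrative assumptions, not the configurations reported in the paper.

```python
def quantize(signal, levels):
    """Uniformly quantize a real-valued signal into `levels` discrete bins.

    Illustrative sketch of signal-level quantization; bin edges are
    derived from the signal's own min/max range.
    """
    lo, hi = min(signal), max(signal)
    if hi == lo:  # constant signal maps to a single level
        return [0 for _ in signal]
    step = (hi - lo) / levels
    # Clamp the top value into the last bin.
    return [min(int((v - lo) / step), levels - 1) for v in signal]

def lstm_param_count(input_size, hidden_size):
    """Parameter count of a standard LSTM layer.

    Four gates, each with an input weight matrix (hidden x input),
    a recurrent weight matrix (hidden x hidden), and a bias vector.
    """
    return 4 * (hidden_size * (input_size + hidden_size) + hidden_size)

# A one-hot encoding of 16 quantization levels fed into 8 LSTM cells
# yields 800 parameters -- the same order of magnitude as the
# 800-1200 range quoted in the abstract (sizes here are assumed).
print(quantize([0.0, 0.5, 1.0], 4))   # -> [0, 2, 3]
print(lstm_param_count(16, 8))        # -> 800
```

Quantizing the input to a handful of levels is what keeps `input_size` small, which in turn keeps the weight matrices, and hence the hardware footprint, small.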

Citation (APA)
Wielgosz, M., & Skoczeń, A. (2019). Using neural networks with data quantization for time series analysis in LHC superconducting magnets. International Journal of Applied Mathematics and Computer Science, 29(3), 503–515. https://doi.org/10.2478/amcs-2019-0037
