Reinforcement learning for structural health monitoring based on inspection data

Abstract

Due to the uncertainties associated with fatigue, mechanical structures, especially in aerospace, have to be inspected frequently. To reduce the inspection effort, fatigue behavior can be predicted from measurement data using supervised learning methods such as neural networks or particle filters. Good predictions, however, require large amounts of data, and often only a small number of sensors is available to collect it, e.g., on airplanes due to weight limitations. This paper presents a method in which data collected during an inspection is used to compute an update of the optimal inspection interval. For this purpose, we formulate structural health monitoring (SHM) as a Markov decision process and use reinforcement learning to decide when to inspect next and when to decommission the structure before failure. To handle the infinite state space of the SHM decision process, we use two regression models, namely neural networks (NN) and k-nearest neighbors (KNN), and compare them to the state-of-the-art deep Q-learning approach. The models are applied to a set of crack growth data that is considered representative of the general damage evolution of a structure. The results show that reinforcement learning can be used for such a decision task, with the KNN model achieving the best performance.
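
To illustrate the idea described in the abstract, the sketch below frames inspection scheduling as a Markov decision process and approximates the Q-function with a k-nearest-neighbors regressor. This is not the authors' implementation: the training scheme (fitted Q-iteration), the crack-growth dynamics, the action set, and all cost and threshold values are illustrative assumptions, and scikit-learn's KNeighborsRegressor stands in for the paper's KNN model.

```python
# Minimal sketch: SHM inspection scheduling as an MDP solved with fitted Q-iteration
# and a KNN regressor as Q-function approximator. All numbers are assumptions.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

ACTIONS = [0, 1, 2]            # 0: continue operation, 1: inspect, 2: decommission
CRITICAL_CRACK = 10.0          # assumed failure threshold (mm)
GAMMA = 0.95                   # discount factor

def step(crack, action, rng):
    """Assumed stochastic crack-growth transition; returns (next_crack, reward, done)."""
    if action == 2:                                  # decommission before failure
        return crack, -5.0, True
    growth = rng.lognormal(mean=-1.0, sigma=0.5)     # illustrative crack growth per period
    nxt = crack + growth
    reward = -1.0 if action == 1 else 0.0            # inspection cost
    if nxt >= CRITICAL_CRACK:                        # unexpected failure is penalized heavily
        return nxt, reward - 100.0, True
    return nxt, reward + 1.0, False                  # reward for one more period of operation

def collect(n_episodes, rng):
    """Random-policy rollouts yielding (state, action, reward, next_state, done) tuples."""
    data = []
    for _ in range(n_episodes):
        crack, done = rng.uniform(0.1, 2.0), False
        while not done:
            a = int(rng.integers(len(ACTIONS)))
            nxt, r, done = step(crack, a, rng)
            data.append((crack, a, r, nxt, done))
            crack = nxt
    return data

rng = np.random.default_rng(0)
data = collect(200, rng)
S, A, R, S2, DONE = (np.array(col) for col in zip(*data))
X = np.column_stack([S, A])

# Fitted Q-iteration: repeatedly regress Q(state, action) on Bellman targets.
q = KNeighborsRegressor(n_neighbors=5).fit(X, np.zeros(len(X)))
for _ in range(10):
    q_next = np.column_stack(
        [q.predict(np.column_stack([S2, np.full(len(S2), b)])) for b in ACTIONS]
    )
    targets = R + GAMMA * (~DONE.astype(bool)) * q_next.max(axis=1)
    q = KNeighborsRegressor(n_neighbors=5).fit(X, targets)

# Greedy policy: at a given crack length, choose the action with the highest Q-value.
for crack in (1.0, 5.0, 9.0):
    best = max(ACTIONS, key=lambda b: q.predict([[crack, b]])[0])
    print(f"crack = {crack:4.1f} mm -> action {best}")
```

In the paper's setting, the state would also reflect the measured damage obtained at each inspection, and the learned policy would trade off inspection cost against the risk of failure, decommissioning the structure once the expected cost of continued operation outweighs its value.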

Cite

APA

Pfingstl, S., Schoebel, Y. N., & Zimmermann, M. (2021). Reinforcement learning for structural health monitoring based on inspection data. In Materials Research Proceedings (Vol. 18, pp. 203–210). Materials Research Forum LLC. https://doi.org/10.21741/9781644901311-24
