Complementary meta-reinforcement learning for fault-adaptive control


Abstract

Faults are endemic to all systems. Adaptive fault-tolerant control maintains degraded but acceptable performance when faults occur, rather than allowing unsafe conditions or catastrophic events. In systems with abrupt faults and strict time constraints, control must adapt quickly to system changes to maintain operations. We present a meta-reinforcement learning approach that quickly adapts its control policy to changing conditions. The approach builds upon model-agnostic meta-learning (MAML). The controller maintains a complement of prior policies learned under system faults. When a new fault occurs, this "library" is evaluated on the faulted system to initialize the new policy. This contrasts with MAML, where the controller derives intermediate policies anew, sampled from a distribution of similar systems, to initialize a new policy. Our approach improves the sample efficiency of the reinforcement learning process. We evaluate it on an aircraft fuel transfer system under abrupt faults.
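
To make the library-based initialization concrete, the following is a minimal Python sketch of the idea as the abstract describes it. It is an illustration under assumptions, not the authors' implementation: the function names (evaluate_policy, select_initial_policy, adapt_to_new_fault, rl_update) and the policy/environment interfaces (policy.act, env.reset, env.step) are hypothetical stand-ins, and the paper's actual method builds on MAML-style meta-training and is evaluated on an aircraft fuel transfer benchmark.

    # Sketch: initialize a new fault policy from a library of prior fault
    # policies, then fine-tune it. All interfaces here are assumptions.

    import copy


    def evaluate_policy(policy, env, episodes=3, horizon=200):
        """Estimate a policy's average return on the (faulted) environment."""
        total = 0.0
        for _ in range(episodes):
            state = env.reset()
            for _ in range(horizon):
                action = policy.act(state)
                state, reward, done = env.step(action)  # assumed interface
                total += reward
                if done:
                    break
        return total / episodes


    def select_initial_policy(library, env):
        """Score every prior fault policy on the new fault; keep the best.

        This replaces MAML's meta-learned initialization with the best
        member of a complement ("library") of policies learned under
        earlier faults.
        """
        scores = [(evaluate_policy(p, env), p) for p in library]
        best_score, best_policy = max(scores, key=lambda sp: sp[0])
        # Copy so fine-tuning does not overwrite the stored library entry.
        return copy.deepcopy(best_policy)


    def adapt_to_new_fault(library, env, rl_update, steps=1000):
        """Initialize from the library, then fine-tune with any RL update."""
        policy = select_initial_policy(library, env)
        for _ in range(steps):
            rl_update(policy, env)  # e.g., one policy-gradient step
        library.append(policy)      # grow the complement with the new fault
        return policy

The key design choice, per the abstract, is that the warm start comes from policies already trained under real faults rather than from MAML's sampled distribution of similar systems, which is what reduces the number of samples needed to recover performance after an abrupt fault.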

Citation (APA)
Ahmed, I., Quinones-Grueiro, M., & Biswas, G. (2020). Complementary meta-reinforcement learning for fault-adaptive control. In Proceedings of the Annual Conference of the Prognostics and Health Management Society, PHM (Vol. 12). Prognostics and Health Management Society. https://doi.org/10.36001/phmconf.2020.v12i1.1289
