Active flow control of a turbulent separation bubble through deep reinforcement learning

Abstract

The control efficacy of classical periodic forcing and deep reinforcement learning (DRL) is assessed for a turbulent separation bubble (TSB) at Reτ = 180, with actuation applied in the upstream region before separation occurs. The TSB resembles the separation phenomena that naturally arise on wings, so a successful reduction of the TSB can have practical implications for reducing the aviation carbon footprint. We find that classical zero-net-mass-flux (ZNMF) periodic control reduces the TSB by 15.7%, whereas the DRL-based control achieves a 25.3% reduction and provides a smoother control strategy while also remaining ZNMF. To the best of our knowledge, the current test case is the highest-Reynolds-number flow successfully controlled using DRL to date. In future work, these results will be scaled to well-resolved large-eddy-simulation grids. Furthermore, we provide details of our open-source CFD-DRL framework, which is suited for the next generation of exascale computing machines.
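To illustrate the kind of coupling the abstract describes, the sketch below shows a generic DRL control loop in which an agent reads sensor data from a CFD environment and applies zero-net-mass-flux jet actuation. This is not the authors' framework: the environment class, sensor/jet counts, reward definition, and the random placeholder policy are all assumptions introduced purely for illustration.

```python
# Minimal sketch (assumptions, not the paper's code) of a DRL control loop for a
# turbulent separation bubble: observe sensors, act with ZNMF jets, collect a reward.
import numpy as np

class TSBEnv:
    """Stand-in for a CFD environment exposing a turbulent separation bubble."""
    def __init__(self, n_sensors=8, n_jets=4):
        self.n_sensors, self.n_jets = n_sensors, n_jets

    def reset(self):
        # Sensor readings (e.g. wall-pressure probes); zeros as a dummy initial state.
        return np.zeros(self.n_sensors)

    def step(self, action):
        # A real implementation would advance the flow solver with the jet
        # velocities in `action`; here we return dummy data.
        obs = np.random.randn(self.n_sensors)
        reward = -abs(np.random.randn())   # e.g. negative separation-bubble size
        return obs, reward, False

def project_znmf(action):
    """Enforce the zero-net-mass-flux constraint by subtracting the mean jet velocity."""
    return action - action.mean()

def policy(obs, n_jets, scale=0.1):
    """Placeholder for a trained DRL policy network (hypothetical)."""
    return scale * np.random.randn(n_jets)

env = TSBEnv()
obs = env.reset()
for _ in range(100):                       # one control episode
    action = project_znmf(policy(obs, env.n_jets))
    obs, reward, done = env.step(action)
```

In an actual CFD-DRL setup, `TSBEnv.step` would wrap calls to the flow solver and the placeholder policy would be replaced by the trained actor network; the ZNMF projection shown here is one simple way to guarantee that the net injected mass flux is zero at every step.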

Citation (APA)

Font, B., Alcántara-Ávila, F., Rabault, J., Vinuesa, R., & Lehmkuhl, O. (2024). Active flow control of a turbulent separation bubble through deep reinforcement learning. In Journal of Physics: Conference Series (Vol. 2753). Institute of Physics. https://doi.org/10.1088/1742-6596/2753/1/012022
