Interpreted machine learning in fluid dynamics: explaining relaminarisation events in wall-bounded shear flows

Abstract

Machine Learning (ML) is becoming increasingly popular in fluid dynamics. Powerful ML algorithms such as neural networks or ensemble methods are notoriously difficult to interpret. Here, we introduce the novel Shapley additive explanations (SHAP) algorithm (Lundberg & Lee, Advances in Neural Information Processing Systems, 2017, pp. 4765-4774), a game-theoretic approach that explains the output of a given ML model in the fluid dynamics context. We give a proof of concept concerning SHAP as an explainable artificial intelligence method providing useful and human-interpretable insight for fluid dynamics. To show that the feature importance ranking provided by SHAP can be interpreted physically, we first consider data from an established low-dimensional model based on the self-sustaining process (SSP) in wall-bounded shear flows, where each data feature has a clear physical and dynamical interpretation in terms of known representative features of the near-wall dynamics, i.e. streamwise vortices, streaks and linear streak instabilities. SHAP determines consistently that only the laminar profile, the streamwise vortex and a specific streak instability play a major role in the prediction. We demonstrate that the method can be applied to larger fluid dynamics datasets by a SHAP evaluation on plane Couette flow in a minimal flow unit focussing on the relevance of streaks and their instabilities for the prediction of relaminarisation events. Here, we find that the prediction is based on proxies for streak modulations corresponding to linear streak instabilities within the SSP. That is, the SHAP analysis suggests that the break-up of the self-sustaining cycle is connected with a suppression of streak instabilities.
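The abstract refers to SHAP feature attributions, which are grounded in the Shapley value from cooperative game theory: each feature's importance is its average marginal contribution to the model output over all feature coalitions. As an illustration only (not the authors' code, and a brute-force enumeration rather than the efficient approximations used by the SHAP library), a self-contained sketch of exact Shapley values for a toy model might look like this; the model, inputs and baseline below are all hypothetical:

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values for model f at point x, relative to a baseline.

    Absent features are replaced by their baseline value. Enumerates all
    coalitions, so this is tractable only for a handful of features.
    """
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(len(others) + 1):
            for S in combinations(others, k):
                # Shapley weight |S|! (n - |S| - 1)! / n!
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in range(n)]
                without_i = [x[j] if j in S else baseline[j] for j in range(n)]
                phi[i] += weight * (f(with_i) - f(without_i))
    return phi

# Hypothetical linear "model" standing in for a trained classifier:
w = [2.0, -1.0, 0.5]
model = lambda z: sum(wi * zi for wi, zi in zip(w, z))
x = [1.0, 2.0, 3.0]
base = [0.0, 0.0, 0.0]
phi = shapley_values(model, x, base)
# For a linear model, phi_i = w_i * (x_i - base_i), here [2.0, -2.0, 1.5],
# and the attributions sum to f(x) - f(base) (the "additive" property).
```

The additivity property illustrated here is what lets the paper rank physical features (laminar profile, streamwise vortex, streak instabilities) by their contribution to each relaminarisation prediction.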

Citation (APA)
Lellep, M., Prexl, J., Eckhardt, B., & Linkmann, M. (2022). Interpreted machine learning in fluid dynamics: explaining relaminarisation events in wall-bounded shear flows. Journal of Fluid Mechanics, 942. https://doi.org/10.1017/jfm.2022.307
