In statistical modelling with Gaussian process regression, it has been shown that combining (few) high-fidelity data with (many) low-fidelity data can enhance prediction accuracy, compared to prediction based on the few high-fidelity data only. Such information fusion techniques for multi-fidelity data commonly approach the high-fidelity model f_h(t) as a function of two variables (t, s), and then use f_l(t) as the s data. More generally, the high-fidelity model can be written as a function of several variables (t, s_1, s_2, ...); the low-fidelity model f_l and, say, some of its derivatives can then be substituted for these variables. In this paper, we will explore mathematical algorithms for multi-fidelity information fusion that use such an approach towards improving the representation of the high-fidelity function with only a few training data points. Given that f_h may not be a simple function of f_l (and sometimes not even a function of it), we demonstrate that using additional functions of t, such as derivatives or shifts of f_l, can drastically improve the approximation of f_h through Gaussian processes. We also point out a connection with ‘embedology’ techniques from topology and dynamical systems. Our illustrative examples range from instructive caricatures to computational biology models, such as Hodgkin–Huxley neural oscillations.
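For concreteness, the following is a minimal sketch of the input-augmentation idea described above, not the authors' implementation: a Gaussian process is trained on the augmented input (t, f_l(t)) from only a few high-fidelity samples and compared against a Gaussian process trained on t alone. The sketch assumes scikit-learn and NumPy are available, and the toy pair f_l(t) = sin(8*pi*t), f_h(t) = (t - sqrt(2)) * f_l(t)^2 is an illustrative benchmark chosen here, not an example taken from the paper.

# Minimal sketch (not the paper's code): multi-fidelity GP regression where the
# high-fidelity function is learned as a function of (t, f_l(t)), using a few
# high-fidelity samples and a cheap, densely available low-fidelity model.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def f_l(t):            # cheap low-fidelity model, available everywhere
    return np.sin(8.0 * np.pi * t)

def f_h(t):            # expensive high-fidelity model, observed at few points
    return (t - np.sqrt(2.0)) * f_l(t) ** 2

t_train = np.linspace(0.0, 1.0, 8)           # only a few high-fidelity samples
y_train = f_h(t_train)
t_test = np.linspace(0.0, 1.0, 200)

# Baseline: GP on t alone, trained on the few high-fidelity points.
gp_plain = GaussianProcessRegressor(kernel=RBF(), normalize_y=True)
gp_plain.fit(t_train[:, None], y_train)

# Fused: GP on the augmented input (t, f_l(t)); the low-fidelity value acts as
# an extra coordinate, so f_h only has to be learned as a function of both.
X_train = np.column_stack([t_train, f_l(t_train)])
X_test = np.column_stack([t_test, f_l(t_test)])
gp_fused = GaussianProcessRegressor(kernel=RBF(), normalize_y=True)
gp_fused.fit(X_train, y_train)

err_plain = np.max(np.abs(gp_plain.predict(t_test[:, None]) - f_h(t_test)))
err_fused = np.max(np.abs(gp_fused.predict(X_test) - f_h(t_test)))
print(f"max error, GP on t only:      {err_plain:.3e}")
print(f"max error, GP on (t, f_l(t)): {err_fused:.3e}")

Extending the augmented input with further coordinates, for example a derivative or a time-shifted copy of f_l, follows the same pattern of stacking additional columns, which is the route the paper takes when f_h is not a simple (or even single-valued) function of f_l.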
Lee, S., Dietrich, F., Karniadakis, G. E., & Kevrekidis, I. G. (2019). Linking Gaussian process regression with data-driven manifold embeddings for nonlinear data fusion. Interface Focus, 9(3). https://doi.org/10.1098/rsfs.2018.0083