We consider reduced basis generation in the offline stage. As an alternative to standard Greedy training based on a posteriori error estimates over a training subset of the parameter set, we combine nonlinear optimization with a Greedy method: an optimization problem selects a new parameter value on the current reduced space, and this parameter is then used, in a Greedy fashion, to compute the corresponding snapshot and to enrich the reduced basis. We show the well-posedness of this nonlinear optimization problem and derive first- and second-order optimality conditions. Numerical comparisons with the standard Greedy-training method are presented.
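The strategy sketched in the abstract can be illustrated on a toy problem. The following sketch is illustrative only and not the authors' implementation: the parametric system, the residual-norm error indicator, and the use of `scipy.optimize.minimize_scalar` as the nonlinear optimizer are all assumptions made for the example. The greedy loop replaces the usual argmax over a discrete training set with a continuous optimization over the parameter interval.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy affine parametric problem (assumption): (A0 + mu*A1) u = b, mu in [0, 1].
rng = np.random.default_rng(0)
n = 50
A0 = np.diag(2.0 + np.arange(n, dtype=float))
A1 = np.diag(np.linspace(0.1, 1.0, n))
b = rng.standard_normal(n)

def solve_full(mu):
    """Full-order snapshot for parameter mu."""
    return np.linalg.solve(A0 + mu * A1, b)

def residual_norm(mu, V):
    """Error indicator (assumption): full-order residual of the
    Galerkin-reduced solution on the basis V."""
    A = A0 + mu * A1
    u_r = V @ np.linalg.solve(V.T @ A @ V, V.T @ b)
    return np.linalg.norm(A @ u_r - b)

# Initialize the reduced basis with one snapshot, then greedily enrich it.
V = np.linalg.qr(solve_full(0.0).reshape(-1, 1))[0]
for _ in range(4):
    # Select the next parameter by *maximizing* the residual indicator
    # via nonlinear optimization (minimize its negative).
    res = minimize_scalar(lambda mu: -residual_norm(mu, V),
                          bounds=(0.0, 1.0), method="bounded")
    snapshot = solve_full(res.x)
    # Update the reduced basis and re-orthonormalize.
    V, _ = np.linalg.qr(np.column_stack([V, snapshot]))

print(V.shape)
```

In the paper's setting the error indicator would be an a posteriori error estimate and the optimization problem is analyzed for well-posedness and optimality conditions; here a plain residual norm and a bounded scalar minimizer stand in for both.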
Urban, K., Volkwein, S., & Zeeb, O. (2014). Greedy Sampling Using Nonlinear Optimization. In Reduced Order Methods for Modeling and Computational Reduction (pp. 137–157). Springer International Publishing. https://doi.org/10.1007/978-3-319-02090-7_5