Reduced Basis Greedy Selection Using Random Training Sets

2020 ◽  
Vol 54 (5) ◽  
pp. 1509-1524 ◽  
Author(s):  
Albert Cohen ◽  
Wolfgang Dahmen ◽  
Ronald DeVore ◽  
James Nichols

Reduced bases have been introduced for the approximation of parametrized PDEs in applications where many online queries are required. Their numerical efficiency for such problems has been theoretically confirmed in Binev et al. (SIAM J. Math. Anal. 43 (2011) 1457–1472) and DeVore et al. (Constructive Approximation 37 (2013) 455–466), where it is shown that the reduced basis space Vn of dimension n, constructed by a certain greedy strategy, has an approximation error similar to that of the optimal space associated with the Kolmogorov n-width of the solution manifold M. The greedy construction of the reduced basis space is performed in an offline stage which requires, at each step, a maximization of the current error over the parameter space. For the purpose of numerical computation, this maximization is performed over a finite training set obtained through a discretization of the parameter domain. To guarantee a final approximation error ε for the space generated by the greedy algorithm requires, in principle, that the snapshots associated with this training set constitute an approximation net for the solution manifold with accuracy of order ε. Hence, the size of the training set is the ε-covering number of M, and this covering number typically behaves like exp(Cε^(-1/s)) for some C > 0 when the solution manifold has n-width decay O(n^(-s)). Thus, the sheer size of the training set prohibits implementation of the algorithm when ε is small. The main result of this paper shows that, if one is willing to accept results which hold with high probability rather than with certainty, then for a large class of relevant problems one may replace the fine discretization by a random training set of size polynomial in ε^(-1). Our proof of this fact is established by using inverse inequalities for polynomials in high dimensions.
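For illustration, the following is a minimal sketch (not the authors' implementation) of weak greedy selection over a random training set. The solver interface `solve`, the sampler `sample_parameter`, and the use of exact projection errors in place of the residual-based surrogate normally used in the offline stage are assumptions made for compactness.

```python
import numpy as np

def greedy_reduced_basis(solve, sample_parameter, n_train, n_max, tol):
    """Weak greedy selection of a reduced basis over a random training set.

    solve(mu)          -- hypothetical high-fidelity solver returning a snapshot vector
    sample_parameter() -- draws one parameter at random from the parameter domain
    n_train            -- size of the random training set (polynomial in 1/eps,
                          instead of an eps-covering of the parameter domain)
    """
    mus = [sample_parameter() for _ in range(n_train)]
    snapshots = np.stack([solve(mu) for mu in mus])      # rows = training snapshots
    basis = np.empty((0, snapshots.shape[1]))            # rows = orthonormal RB vectors
    selected = []

    for _ in range(n_max):
        # Error of each training snapshot w.r.t. the current space V_n
        # (here the exact projection error; in practice a cheap surrogate).
        proj = snapshots @ basis.T @ basis
        errs = np.linalg.norm(snapshots - proj, axis=1)
        k = int(np.argmax(errs))                          # maximize the error over the training set
        if errs[k] <= tol:
            break
        # Gram-Schmidt step: enrich with the worst-approximated snapshot.
        r = snapshots[k] - proj[k]
        basis = np.vstack([basis, r / np.linalg.norm(r)])
        selected.append(mus[k])
    return basis, selected
```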

Author(s):  
Gitta Kutyniok ◽  
Philipp Petersen ◽  
Mones Raslan ◽  
Reinhold Schneider

Abstract: We derive upper bounds on the complexity of ReLU neural networks approximating the solution maps of parametric partial differential equations. In particular, without any knowledge of its concrete shape, we use the inherent low dimensionality of the solution manifold to obtain approximation rates which are significantly superior to those provided by classical neural network approximation results. Concretely, we use the existence of a small reduced basis to construct, for a large variety of parametric partial differential equations, neural networks that yield approximations of the parametric solution maps in such a way that the sizes of these networks essentially only depend on the size of the reduced basis.
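The result above is an approximation-complexity statement, not an algorithm. As a purely illustrative sketch, a parameter-to-reduced-coefficient map could be realized by a small ReLU network as below; the dimensions are hypothetical and the training data are placeholders (in practice the reduced basis V_n and the target coefficients would come from high-fidelity solves).

```python
import torch
from torch import nn

# Hypothetical data: parameters mu (n_samples x p) and reduced-basis coefficients
# c(mu) (n_samples x n). Random placeholders are used here for illustration only.
p, n = 10, 20
mu_train = torch.rand(1000, p)
c_train = torch.rand(1000, n)

# A small ReLU network whose size scales with the reduced dimension n rather than
# with the (much larger) high-fidelity dimension.
model = nn.Sequential(
    nn.Linear(p, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, n),
)

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for epoch in range(200):
    opt.zero_grad()
    loss = loss_fn(model(mu_train), c_train)
    loss.backward()
    opt.step()

# The approximate solution map is then mu -> V_n @ model(mu).
```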


2021 ◽  
Vol 89 (3) ◽  
Author(s):  
Sridhar Chellappa ◽  
Lihong Feng ◽  
Peter Benner

Abstract: We present a subsampling strategy for the offline stage of the Reduced Basis Method. The approach is aimed at bringing down the considerable offline costs associated with using a finely-sampled training set. The proposed algorithm exploits the potential of the pivoted QR decomposition and the discrete empirical interpolation method to identify important parameter samples. It consists of two stages. In the first stage, we construct a low-fidelity approximation to the solution manifold over a fine training set. Then, for the available low-fidelity snapshots of the output variable, we apply the pivoted QR decomposition or the discrete empirical interpolation method to identify a set of sparse sampling locations in the parameter domain. These points reveal the structure of the parametric dependence of the output variable. The second stage proceeds with a subsampled training set containing a far smaller number of parameters than the initial training set. Different subsampling strategies inspired by recent variants of the empirical interpolation method are also considered. Tests on benchmark examples justify the new approach and show its potential to substantially speed up the offline stage of the Reduced Basis Method, while generating reliable reduced-order models.
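A minimal sketch of the first-stage selection idea via column-pivoted QR, assuming the low-fidelity output snapshots are stored column-wise; the function name and data layout are illustrative, not the authors' code.

```python
import numpy as np
from scipy.linalg import qr

def subsample_training_set(low_fidelity_outputs, mus, n_keep):
    """Pick important parameter samples via a column-pivoted QR decomposition.

    low_fidelity_outputs -- array of shape (n_outputs, n_train); each column is the
                            low-fidelity output snapshot for one training parameter
    mus                  -- list of the n_train training parameters
    n_keep               -- size of the subsampled training set
    """
    # Column pivoting ranks the columns (i.e. the parameters) by how much new
    # information each adds; the first n_keep pivots reveal the dominant
    # parametric structure of the output.
    _, _, pivots = qr(low_fidelity_outputs, mode='economic', pivoting=True)
    keep = pivots[:n_keep]
    return [mus[i] for i in keep], keep
```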


2021 ◽  
Vol 47 (3) ◽  
Author(s):  
Michael Hinze ◽  
Denis Korolev

Abstract: In this paper, we propose a certified reduced basis (RB) method for quasilinear parabolic problems with strongly monotone spatial differential operator. We provide a residual-based a posteriori error estimate for a space-time formulation and the corresponding efficiently computable bound for the certification of the method. We introduce a Petrov-Galerkin finite element discretization of the continuous space-time problem and use it as our reference in a posteriori error control. The Petrov-Galerkin discretization is further approximated by the Crank-Nicolson time-marching problem. This allows us to use a POD-Greedy approach to construct reduced-basis spaces of small dimension and to apply the Empirical Interpolation Method (EIM) to guarantee an efficient offline-online computational procedure. In our approach, we compute the reduced basis solution in a time-marching framework while the RB approximation error in a space-time norm is controlled by our computable bound. Therefore, we combine a POD-Greedy approximation with a space-time Galerkin method.
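A rough, uncertified sketch of a single POD-Greedy enrichment step under assumed interfaces; the certified space-time error bound and the EIM treatment of the nonlinearity described above are omitted here for brevity.

```python
import numpy as np

def pod_greedy_step(basis, solve_trajectory, estimate_error, train_mus):
    """One enrichment step of a POD-Greedy loop (hypothetical interfaces).

    basis            -- (n, d) array of orthonormal reduced-basis vectors (rows)
    solve_trajectory -- returns the (n_t, d) time trajectory for a parameter
    estimate_error   -- cheap a posteriori error indicator for a parameter
    train_mus        -- training set of parameters
    """
    # Greedy selection: parameter with the largest estimated error.
    mu_star = max(train_mus, key=estimate_error)
    traj = solve_trajectory(mu_star)                 # high-fidelity time-marching solve

    # Remove the part of the trajectory already captured by the current basis.
    residual = traj - traj @ basis.T @ basis

    # POD step: enrich with the dominant left-over mode of the residual trajectory.
    _, _, vt = np.linalg.svd(residual, full_matrices=False)
    new_mode = vt[0]
    new_mode -= basis.T @ (basis @ new_mode)         # re-orthogonalize against the basis
    return np.vstack([basis, new_mode / np.linalg.norm(new_mode)])
```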


2012 ◽  
Vol 46 (3) ◽  
pp. 595-603 ◽  
Author(s):  
Annalisa Buffa ◽  
Yvon Maday ◽  
Anthony T. Patera ◽  
Christophe Prud’homme ◽  
Gabriel Turinici

Author(s):  
Martin E Hess ◽  
Jan Hesthaven ◽  
Peter Benner

Simulation of electromagnetic and optical wave propagation in, e.g., water, fog, or dielectric waveguides requires modeling of linear, temporally dispersive media. Using POD-greedy sampling driven by an error indicator, we seek to generate a reduced model which accurately captures the dynamics. Typically, the reduced basis approach lowers the model order by a factor of more than 100, while maintaining an approximation error of less than 1%.


Author(s):  
Shadi Alameddin ◽  
Amélie Fau ◽  
David Néron ◽  
Pierre Ladevèze ◽  
Udo Nackenhorst

The solution of structural problems with nonlinear material behaviour in a model order reduction framework is investigated in this paper. In such a framework, greedy algorithms or adaptive strategies are interesting as they adjust the reduced order basis (ROB) to the problem of interest. However, these greedy strategies may lead to an excessive increase in the size of the reduced basis, i.e., the solution is no longer represented in its optimal low-dimensional expansion. Here, an optimised strategy is proposed to maintain, at each step of the greedy algorithm, the lowest dimension of the PGD basis using a randomised SVD algorithm. Compared to conventional approaches such as Gram-Schmidt orthonormalisation or deterministic SVD, it is shown to be very efficient both in terms of numerical cost and optimality of the reduced basis. Examples with different mesh densities are investigated to demonstrate the numerical efficiency of the presented method.
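A minimal sketch of a randomized SVD (range finder followed by a small exact SVD, in the spirit of Halko, Martinsson, and Tropp) that could be used to re-truncate a growing reduced-order basis; the matrix layout and truncation rank are assumptions, not the authors' implementation.

```python
import numpy as np

def randomized_svd(A, rank, oversample=10, rng=None):
    """Randomized SVD used to truncate a (possibly redundant) basis to its essential rank.

    A    -- (d, n) matrix whose columns are the current ROB vectors
    rank -- target number of modes to keep
    """
    rng = np.random.default_rng() if rng is None else rng
    k = min(rank + oversample, min(A.shape))

    # Range finding: sample the column space of A with a Gaussian test matrix.
    omega = rng.standard_normal((A.shape[1], k))
    Q, _ = np.linalg.qr(A @ omega)

    # Project A onto the sampled range and take an exact SVD of the small matrix.
    B = Q.T @ A
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Ub
    return U[:, :rank], s[:rank], Vt[:rank]

# Usage: keep only the leading modes of an over-grown basis matrix `rob`.
# rob_truncated, _, _ = randomized_svd(rob, rank=20)
```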


2011 ◽  
Author(s):  
Jeffrey S. Katz ◽  
John F. Magnotti ◽  
Anthony A. Wright
