approximation rates
Recently Published Documents


TOTAL DOCUMENTS: 43 (FIVE YEARS: 15)

H-INDEX: 8 (FIVE YEARS: 1)

Author(s): Philip Miller, Thorsten Hohage

Abstract: We study Tikhonov regularization for possibly nonlinear inverse problems with weighted $$\ell^1$$-penalization. The forward operator, mapping from a sequence space to an arbitrary Banach space, typically an $$L^2$$-space, is assumed to satisfy a two-sided Lipschitz condition with respect to a weighted $$\ell^2$$-norm and the norm of the image space. We show that in this setting approximation rates of arbitrarily high Hölder-type order in the regularization parameter can be achieved, and we characterize maximal subspaces of sequences on which these rates are attained. On these subspaces the method also converges with optimal rates in terms of the noise level with the discrepancy principle as parameter choice rule. Our analysis includes the case that the penalty term is not finite at the exact solution ('oversmoothing'). As a standard example we discuss wavelet regularization in Besov spaces $$B^r_{1,1}$$. In this setting we demonstrate in numerical simulations for a parameter identification problem in a differential equation that our theoretical results correctly predict improved rates of convergence for piecewise smooth unknown coefficients.
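To make the weighted $$\ell^1$$ setting concrete, here is a minimal sketch for the special case of a *linear* forward operator given as a matrix, minimized by iterative soft-thresholding (ISTA). This is only a stand-in for the possibly nonlinear operator analyzed in the paper; the function names, the step-size rule, and the fixed iteration count are illustrative choices, not the authors' method.

```python
import numpy as np

def soft_threshold(z, tau):
    # Componentwise soft-thresholding: the proximal map of the weighted l1 penalty.
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def ista_weighted_l1(F, y, weights, alpha, n_iter=500):
    # Minimize 0.5 * ||F x - y||^2 + alpha * sum_j weights_j * |x_j|
    # for a linear forward operator F given as a matrix (illustrative only).
    step = 1.0 / np.linalg.norm(F, 2) ** 2   # 1 / Lipschitz constant of the data-fit gradient
    x = np.zeros(F.shape[1])
    for _ in range(n_iter):
        grad = F.T @ (F @ x - y)             # gradient of the data-fit term
        x = soft_threshold(x - step * grad, step * alpha * weights)
    return x
```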


Author(s): Guido Montúfar, Yu Guang Wang

Abstract: Learning mappings of data on manifolds is an important topic in contemporary machine learning, with applications in astrophysics, geophysics, statistical physics, medical diagnosis, biochemistry, and 3D object analysis. This paper studies the problem of learning real-valued functions on manifolds through filtered hyperinterpolation of input–output data pairs, where the inputs may be sampled deterministically or at random and the outputs may be clean or noisy. Motivated by the problem of handling large data sets, it presents a parallel data processing approach which distributes the data-fitting task among multiple servers and synthesizes the fitted sub-models into a global estimator. We prove quantitative relations between the approximation quality of the learned function over the entire manifold, the type of target function, the number of servers, and the number and type of available samples. We obtain approximation rates of convergence for both the distributed and the non-distributed approach. In the non-distributed case, the approximation order is optimal.
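The divide-and-conquer strategy can be sketched generically: split the samples across servers, fit a sub-model on each, and synthesize the results by averaging the coefficients. The sketch below uses ordinary regularized least squares in an unspecified feature basis (the `features` callable is a placeholder) rather than the paper's filtered hyperinterpolation; it only illustrates the data flow.

```python
import numpy as np

def fit_local(features, X, y, lam=1e-3):
    # Regularized least squares in a fixed feature basis on one server
    # (a generic stand-in for the local estimator).
    Phi = features(X)                                   # (n_samples, n_basis)
    A = Phi.T @ Phi + lam * np.eye(Phi.shape[1])
    return np.linalg.solve(A, Phi.T @ y)

def distributed_fit(features, X, y, n_servers):
    # Divide the samples among servers, fit each sub-model locally,
    # then synthesize a global estimator by averaging the coefficients.
    chunks = np.array_split(np.arange(len(y)), n_servers)
    coefs = [fit_local(features, X[idx], y[idx]) for idx in chunks]
    return np.mean(coefs, axis=0)

def predict(features, coef, X_new):
    # Evaluate the global estimator on new inputs.
    return features(X_new) @ coef
```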


Author(s): Gitta Kutyniok, Philipp Petersen, Mones Raslan, Reinhold Schneider

Abstract: We derive upper bounds on the complexity of ReLU neural networks approximating the solution maps of parametric partial differential equations. In particular, without any knowledge of the concrete shape of the solution manifold, we use its inherent low dimensionality to obtain approximation rates which are significantly superior to those provided by classical neural network approximation results. Concretely, we use the existence of a small reduced basis to construct, for a large variety of parametric partial differential equations, neural networks that yield approximations of the parametric solution maps in such a way that the sizes of these networks essentially depend only on the size of the reduced basis.
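A structural sketch of this ansatz (not the authors' construction): a ReLU network predicts reduced-basis coefficients from the PDE parameter, and the approximate solution is their lift by the reduced-basis matrix, so the network's output dimension is the reduced-basis dimension rather than the full number of degrees of freedom. The weights, biases, and the basis matrix `V` are assumed given, e.g. from training and a separate reduced-basis computation.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def relu_net(mu, weights, biases):
    # Feed-forward ReLU network mapping a parameter vector mu to
    # coefficients in the reduced basis.
    a = mu
    for W, b in zip(weights[:-1], biases[:-1]):
        a = relu(W @ a + b)
    return weights[-1] @ a + biases[-1]

def parametric_solution(mu, weights, biases, V):
    # V: (n_dofs, n_rb) matrix whose columns are the reduced basis vectors.
    # The approximate solution is the lift of the predicted coefficients.
    return V @ relu_net(mu, weights, biases)
```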


Author(s): Martin Ehler, Manuel Gräf, Sebastian Neumayer, Gabriele Steidl

Abstract: The approximation of probability measures on compact metric spaces and in particular on Riemannian manifolds by atomic or empirical ones is a classical task in approximation and complexity theory with a wide range of applications. Instead of point measures we are concerned with the approximation by measures supported on Lipschitz curves. Special attention is paid to push-forward measures of Lebesgue measures on the unit interval by such curves. Using the discrepancy as distance between measures, we prove optimal approximation rates in terms of the curve's length and Lipschitz constant. Having established the theoretical convergence rates, we are interested in the numerical minimization of the discrepancy between a given probability measure and the set of push-forward measures of Lebesgue measures on the unit interval by Lipschitz curves. We present numerical examples for measures on the 2- and 3-dimensional torus, the 2-sphere, the rotation group on $$\mathbb{R}^3$$ and the Grassmannian of all 2-dimensional linear subspaces of $$\mathbb{R}^4$$. Our algorithm of choice is a conjugate gradient method on these manifolds, which incorporates second-order information. For efficient gradient and Hessian evaluations within the algorithm, we approximate the given measures by truncated Fourier series and use fast Fourier transform techniques on these manifolds.
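A minimal sketch of the discrepancy evaluation on the d-torus, assuming the target measure is given by truncated Fourier coefficients and the curve is discretized at equispaced parameter values. The paper's actual algorithm minimizes this quantity over Lipschitz curves with a conjugate gradient method on the manifold, which is not reproduced here; the frequency set `freqs` and the `weights` (e.g. decaying like $$(1+|k|^2)^{-s}$$) are illustrative choices.

```python
import numpy as np

def curve_fourier_coeffs(gamma_samples, freqs):
    # Fourier coefficients of the push-forward of the uniform measure on
    # [0,1] by a curve gamma on the d-torus, discretized at equispaced
    # parameters: hat(nu)(k) ~= mean_t exp(-2*pi*i <k, gamma(t)>).
    phases = gamma_samples @ freqs.T                    # (n_samples, n_freqs)
    return np.exp(-2j * np.pi * phases).mean(axis=0)

def discrepancy_sq(gamma_samples, target_coeffs, freqs, weights):
    # Squared (kernel-type) discrepancy between a target measure, given by
    # its truncated Fourier coefficients, and the curve's push-forward measure.
    diff = curve_fourier_coeffs(gamma_samples, freqs) - target_coeffs
    return float(np.sum(weights * np.abs(diff) ** 2))
```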


2021, Vol. 47 (1)
Author(s): Fabian Laakmann, Philipp Petersen

Abstract: We demonstrate that deep neural networks with the ReLU activation function can efficiently approximate the solutions of various types of parametric linear transport equations. For non-smooth initial conditions, the solutions of these PDEs are high-dimensional and non-smooth, so their approximation suffers from a curse of dimensionality. We demonstrate that, through their inherent compositionality, deep neural networks can resolve the characteristic flow underlying the transport equations and thereby allow approximation rates independent of the parameter dimension.
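A structural sketch (not the authors' construction) of how this compositionality can look for a constant-coefficient model problem $$u_t + a(\eta)\cdot\nabla_x u = 0$$: one sub-network approximates the backward characteristic flow, a second approximates the possibly non-smooth initial condition, and their composition approximates the solution $$u(x,t;\eta) = u_0(x - a(\eta)t)$$. The one-hidden-layer blocks and all parameter names are illustrative.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def shallow_relu(x, W1, b1, W2, b2):
    # One-hidden-layer ReLU block reused for both sub-networks.
    return W2 @ relu(W1 @ x + b1) + b2

def transport_net(x, t, eta, flow_params, ic_params):
    # Compositional ansatz mirroring the method of characteristics:
    #   1. one sub-network maps (x, t, eta) to the foot of the backward
    #      characteristic, approximately x - a(eta) * t;
    #   2. a second sub-network evaluates the initial condition u0 there.
    # The composition approximates u(x, t; eta) = u0(x - a(eta) * t).
    z = np.concatenate([x, np.atleast_1d(t), eta])
    foot = shallow_relu(z, *flow_params)
    return shallow_relu(foot, *ic_params)
```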

