optimal rate of convergence
Recently Published Documents


TOTAL DOCUMENTS: 48 (FIVE YEARS: 16)
H-INDEX: 10 (FIVE YEARS: 2)

2021 ◽  
pp. 1-36
Author(s):  
Joris Pinkse ◽  
Karl Schurter

We estimate the density and its derivatives using a local polynomial approximation to the logarithm of an unknown density function f. The estimator is guaranteed to be non-negative and achieves the same optimal rate of convergence in the interior as on the boundary of the support of f. The estimator is therefore well-suited to applications in which non-negative density estimates are required, such as in semiparametric maximum likelihood estimation. In addition, we show that our estimator compares favorably with other kernel-based methods, in terms of both asymptotic performance and computational ease. Simulation results confirm that our method performs similarly to, or better than, these alternative methods in finite samples when they are used with optimal inputs, that is, an Epanechnikov kernel and an optimally chosen bandwidth sequence. We provide code in several languages.
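
As a concrete point of reference, the sketch below implements a generic local-likelihood log-density estimator in the spirit described above (Hjort-Jones/Loader type): log f is approximated locally by a polynomial, and the returned estimate exp(a_0) is non-negative by construction. The Epanechnikov kernel, bandwidth, and polynomial degree are illustrative choices; this is not the authors' published code.

```python
import numpy as np
from scipy.optimize import minimize

def local_log_density(x0, data, h, degree=2, n_grid=401):
    """Local-likelihood density estimate at x0: fit a polynomial to log f locally.

    Maximizes  sum_i K_h(X_i - x0) * P_a(X_i - x0)
               - n * int K_h(u - x0) * exp(P_a(u - x0)) du
    over the polynomial coefficients a, and returns exp(a_0) >= 0.
    """
    data = np.asarray(data, dtype=float)
    n = data.size

    def epanechnikov(u):
        return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u**2), 0.0)

    # integration grid covering the kernel's support around x0
    grid = np.linspace(x0 - h, x0 + h, n_grid)
    du = grid[1] - grid[0]
    w_grid = epanechnikov((grid - x0) / h) / h
    w_data = epanechnikov((data - x0) / h) / h

    def neg_local_loglik(a):
        p_data = np.polyval(a[::-1], data - x0)   # P_a(X_i - x0)
        p_grid = np.polyval(a[::-1], grid - x0)
        fit_term = np.sum(w_data * p_data)
        mass_term = n * np.sum(w_grid * np.exp(p_grid)) * du
        return -(fit_term - mass_term)

    result = minimize(neg_local_loglik, np.zeros(degree + 1), method="Nelder-Mead")
    return np.exp(result.x[0])        # non-negative by construction

rng = np.random.default_rng(0)
sample = rng.normal(size=500)
# density of N(0,1) at 0 is 1/sqrt(2*pi), roughly 0.399
print(local_log_density(0.0, sample, h=0.6))
```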


Author(s):  
Vesa Kaarnioja ◽  
Yoshihito Kazashi ◽  
Frances Y. Kuo ◽  
Fabio Nobile ◽  
Ian H. Sloan

Abstract This paper deals with the kernel-based approximation of a multivariate periodic function by interpolation at the points of an integration lattice—a setting that, as pointed out by Zeng et al. (Monte Carlo and Quasi-Monte Carlo Methods 2004, Springer, New York, 2006) and Zeng et al. (Constr. Approx. 30: 529–555, 2009), allows fast evaluation by fast Fourier transform, so avoiding the need for a linear solver. The main contribution of the paper is the application to the approximation problem for uncertainty quantification of elliptic partial differential equations, with the diffusion coefficient given by a random field that is periodic in the stochastic variables, in the model proposed recently by Kaarnioja et al. (SIAM J Numer Anal 58(2): 1068–1091, 2020). The paper gives a full error analysis, and full details of the construction of lattices needed to ensure a good (but inevitably not optimal) rate of convergence and an error bound independent of dimension. Numerical experiments support the theory.
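
To illustrate why no linear solver is needed, here is a minimal numpy sketch: for a shift-invariant periodic kernel evaluated at the points of a rank-1 lattice, the interpolation (Gram) matrix is circulant, so the coefficients follow from a single FFT. The generating vector, product weights, and Korobov-type kernel with smoothness alpha = 1 are assumptions for illustration, not the CBC-constructed lattices or kernels analysed in the paper.

```python
import numpy as np

n, d = 257, 3                      # number of lattice points, dimension
z = np.array([1, 76, 104])         # toy generating vector (assumption)
gamma = np.array([1.0, 0.5, 0.25]) # product weights (assumption)

def korobov_kernel(x):
    """Korobov-type kernel with alpha = 1: prod_j (1 + gamma_j * 2*pi^2 * B_2({x_j}))."""
    frac = x - np.floor(x)
    b2 = frac**2 - frac + 1.0 / 6.0
    return np.prod(1.0 + gamma * 2.0 * np.pi**2 * b2, axis=-1)

def f(x):  # periodic test function to interpolate
    return np.prod(1.0 + 0.3 * np.sin(2 * np.pi * x), axis=-1)

i = np.arange(n)
lattice = np.mod(np.outer(i, z) / n, 1.0)     # rank-1 lattice points, shape (n, d)

# first column of the circulant Gram matrix: K_{i0} = k(t_i - t_0) = k(t_i)
col = korobov_kernel(lattice)
fvals = f(lattice)

# solve K a = f with one FFT (circular deconvolution) instead of a linear solver
coeffs = np.fft.ifft(np.fft.fft(fvals) / np.fft.fft(col)).real

def interpolant(x):
    return np.sum(coeffs * korobov_kernel(x[None, :] - lattice))

x_test = np.array([0.17, 0.52, 0.83])
print(f(x_test), interpolant(x_test))
```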


2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Subhankar Mondal ◽  
M. Thamban Nair

Abstract An inverse problem of identifying the diffusion coefficient in matrix form in a parabolic PDE is considered. Following the idea of natural linearization, considered by Cao and Pereverzev (2006), the nonlinear inverse problem is transformed into a problem of solving an operator equation where the operator involved is linear. Solving the linear operator equation turns out to be an ill-posed problem. The method of Tikhonov regularization is employed for obtaining stable approximations, and its finite-dimensional analysis is carried out using the Galerkin method, for which an orthogonal projection on the space of matrices with entries from $L^{2}(\Omega)$ is defined. Since the error estimates in the Tikhonov regularization method rely heavily on the adjoint operator, an explicit representation of the adjoint of the linear operator involved is obtained. For choosing the regularization parameter, the adaptive technique is employed in order to obtain an order-optimal rate of convergence. For the relaxed noisy data, we describe a procedure for obtaining a smoothed version so as to obtain the error estimates. Numerical experiments are carried out for a few illustrative examples.
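
The following sketch shows the generic Tikhonov step the abstract refers to, on a discretized linear ill-posed problem with a simple discrepancy-style parameter choice. The smoothing operator, noise level, and stopping rule are illustrative stand-ins, not the matrix-valued diffusion-coefficient setting or the adaptive technique analysed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
m = 200
s = np.linspace(0, 1, m)
A = np.minimum.outer(s, s) * (1.0 / m)        # discretized smoothing integral operator (ill-posed)
x_true = np.sin(np.pi * s)
y_exact = A @ x_true
delta = 1e-3
y_noisy = y_exact + delta * rng.standard_normal(m)

def tikhonov(A, y, alpha):
    # x_alpha = (A^T A + alpha I)^{-1} A^T y; note the adjoint A^T enters explicitly
    return np.linalg.solve(A.T @ A + alpha * np.eye(A.shape[1]), A.T @ y)

# crude parameter choice: decrease alpha until the residual matches the noise level
alpha = 1.0
while alpha > 1e-12:
    x_alpha = tikhonov(A, y_noisy, alpha)
    if np.linalg.norm(A @ x_alpha - y_noisy) <= 2.0 * delta * np.sqrt(m):
        break
    alpha /= 2.0

print("alpha =", alpha, " relative error =",
      np.linalg.norm(x_alpha - x_true) / np.linalg.norm(x_true))
```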


Author(s):  
Oleg Butkovsky ◽  
Konstantinos Dareiotis ◽  
Máté Gerencsér

Abstract We give a new take on the error analysis of approximations of stochastic differential equations (SDEs), utilizing and developing the stochastic sewing lemma of Lê (Electron J Probab 25:55, 2020. 10.1214/20-EJP442). This approach allows one to exploit regularization-by-noise effects in obtaining convergence rates. In our first application we show convergence (to our knowledge for the first time) of the Euler–Maruyama scheme for SDEs driven by fractional Brownian motions with non-regular drift. When the Hurst parameter is $H\in (0,1)$ and the drift is $\mathcal{C}^\alpha$, $\alpha \in [0,1]$ and $\alpha > 1-1/(2H)$, we show the strong $L_p$ and almost sure rates of convergence to be $((1/2+\alpha H)\wedge 1) - \varepsilon$, for any $\varepsilon > 0$. Our conditions on the regularity of the drift are optimal in the sense that they coincide with the conditions needed for the strong uniqueness of solutions from Catellier and Gubinelli (Stoch Process Appl 126(8):2323–2366, 2016. 10.1016/j.spa.2016.02.002). In a second application we consider the approximation of SDEs driven by multiplicative standard Brownian noise, where we derive the almost optimal rate of convergence $1/2-\varepsilon$ of the Euler–Maruyama scheme for $\mathcal{C}^\alpha$ drift, for any $\varepsilon, \alpha > 0$.
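
For orientation, a minimal sketch of the scheme in the additive-noise case: exact fractional Gaussian increments are generated by a Cholesky factorization and fed into the Euler–Maruyama recursion with a $\mathcal{C}^\alpha$ drift. The Hurst parameter, drift, and step count are illustrative and chosen so that $\alpha > 1-1/(2H)$ holds; the sketch does not reproduce the stochastic sewing analysis.

```python
import numpy as np

rng = np.random.default_rng(2)
H, alpha = 0.7, 0.5          # Hurst parameter and drift regularity (illustrative)
T, N = 1.0, 512
dt = T / N

def fgn_increments(H, N, dt, rng):
    """Exact increments of fractional Brownian motion on a uniform grid via Cholesky."""
    k = np.arange(N)
    gamma = 0.5 * (np.abs(k + 1)**(2 * H) + np.abs(k - 1)**(2 * H) - 2 * np.abs(k)**(2 * H))
    cov = gamma[np.abs(np.subtract.outer(k, k))] * dt**(2 * H)
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(N))
    return L @ rng.standard_normal(N)

def b(x):
    return np.sign(x) * np.abs(x)**alpha    # an alpha-Hoelder drift (example)

dB = fgn_increments(H, N, dt, rng)
x = 0.1
for k in range(N):
    x = x + b(x) * dt + dB[k]               # Euler-Maruyama step, additive fBm noise
print("X_T approx", x)
```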


2021 ◽  
Vol 121 (2) ◽  
pp. 171-194
Author(s):  
Son N.T. Tu

Let $u^\varepsilon$ and $u$ be viscosity solutions of the oscillatory Hamilton–Jacobi equation and its corresponding effective equation. Given bounded, Lipschitz initial data, we present a simple proof to obtain the optimal rate of convergence $O(\varepsilon)$ of $u^\varepsilon \to u$ as $\varepsilon \to 0^+$ for a large class of convex Hamiltonians $H(x,y,p)$ in one dimension. This class includes the Hamiltonians from classical mechanics with separable potential. The proof makes use of optimal control theory and a quantitative version of the ergodic theorem for periodic functions in dimension $n=1$.


Author(s):  
Friedrich Götze ◽  
Jonas Jalowy

The aim of this paper is to investigate the Kolmogorov distance between the Circular Law and the empirical spectral distribution of non-Hermitian random matrices with independent entries. The optimal rate of convergence is determined by the Ginibre ensemble and is given by [Formula: see text]. We establish a smoothing inequality for complex measures that quantitatively relates the uniform Kolmogorov-like distance to the concentration of logarithmic potentials. Combining it with results from Local Circular Laws, we apply it to prove a nearly optimal rate of convergence to the Circular Law in Kolmogorov distance. Furthermore, we show that the same rate of convergence holds for the empirical measure of the roots of Weyl random polynomials.
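
As a quick numerical illustration of the setting (not of the smoothing-inequality argument), one can sample a Ginibre matrix and compare its empirical spectral distribution with the circular law over centered discs; the matrix size and the disc-based distance below are simplifications chosen for illustration, not the distance analysed in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1000
# complex Ginibre matrix, normalized so the spectrum fills the unit disc
G = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2 * n)
eigs = np.linalg.eigvals(G)

# circular law puts mass min(r^2, 1) on the centered disc of radius r
radii = np.linspace(0.0, 1.2, 200)
empirical = np.array([np.mean(np.abs(eigs) <= r) for r in radii])
circular_law = np.minimum(radii**2, 1.0)

print("disc-based Kolmogorov-like distance:", np.max(np.abs(empirical - circular_law)))
```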


Author(s):  
Jialin Hong ◽  
Chuying Huang ◽  
Xu Wang

Abstract This paper investigates numerical schemes for stochastic differential equations driven by multi-dimensional fractional Brownian motions (fBms) with Hurst parameter $H\in (\frac 12,1)$. Based on the continuous dependence of numerical solutions on the driving noises, we propose order conditions of Runge–Kutta methods for the strong convergence rate $2H-\frac 12$, which is the optimal strong convergence rate for approximating the Lévy area of fBms. We provide an alternative way to analyse the convergence rate of explicit schemes by adding ‘stage values’ such that the schemes are interpreted as Runge–Kutta methods. Taking advantage of this technique, the strong convergence rate of simplified step-$N$ Euler schemes is obtained, which gives an answer to a conjecture in Deya et al. (2012) when $H\in (\frac 12,1)$. Numerical experiments verify the theoretical convergence rate.


2020 ◽  
Vol 28 (2) ◽  
pp. 75-98 ◽  
Author(s):  
Boniface Nkemzi ◽  
Michael Jung

Abstract In [Nkemzi and Jung, 2013] explicit extraction formulas for the computation of the edge flux intensity functions for the Laplacian at axisymmetric edges are presented. The present paper proposes a new adaptation of the Fourier-finite-element method for the efficient numerical treatment of boundary value problems for the Poisson equation in axisymmetric domains Ω̂ ⊂ ℝ³ with edges. The novelty of the method is the use of the explicit extraction formulas for the edge flux intensity functions to define a postprocessing procedure for the finite element solutions of the reduced boundary value problems on the two-dimensional meridian of Ω̂. A priori error estimates show that the postprocessing finite element strategy exhibits the optimal rate of convergence on regular meshes. Numerical experiments that validate the theoretical results are presented.


2020 ◽  
Vol 24 ◽  
pp. 408-434
Author(s):  
Benoît R. Kloeckner

We propose a “decomposition method” to prove non-asymptotic bounds for the convergence of empirical measures in various dual norms. The main point is to show that if one measures convergence in duality with sufficiently regular observables, the convergence is much faster than for, say, merely Lipschitz observables. Actually, assuming s derivatives with s > d/2 (d the dimension) ensures an optimal rate of convergence of 1/√n (n the number of samples). The method is flexible enough to apply to Markov chains satisfying a geometric contraction hypothesis, assuming neither stationarity nor reversibility, with the same convergence speed up to a power of a logarithm. Our results are stated as controls of the expected distance between the empirical measure and its limit, but we explain briefly how the classical method of bounded differences can be used to deduce concentration estimates.
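
The rate in question is easy to observe empirically. The sketch below tests the empirical measure of uniform samples on [0,1]^d against a single smooth observable, which is of course much weaker than controlling a whole dual norm but already exhibits the 1/√n decay; the observable and dimension are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
d = 3

def observable(x):                       # a smooth (C^infinity) test observable
    return np.prod(np.sin(np.pi * x), axis=-1)

exact = (2.0 / np.pi)**d                 # integral of prod_j sin(pi x_j) over [0,1]^d

for n in [10**2, 10**3, 10**4, 10**5]:
    errs = []
    for _ in range(50):                  # average over independent repetitions
        x = rng.random((n, d))
        errs.append(abs(np.mean(observable(x)) - exact))
    print(f"n = {n:6d}   mean error = {np.mean(errs):.2e}   n^(-1/2) = {n**-0.5:.2e}")
```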


2019 ◽  
Vol 17 (05) ◽  
pp. 837-851
Author(s):  
Huihui Qin ◽  
Xin Guo

Nowadays, the extensive collection and analysis of data are stimulating widespread privacy concerns and are therefore increasing tensions between potential sources of data and researchers. A privacy-friendly learning framework can help ease these tensions and free up more data for research. We propose a new algorithm, LESS (Learning with Empirical feature-based Summary statistics from Semi-supervised data), which uses only summary statistics instead of raw data for regression learning. The selection of empirical features serves as a trade-off between prediction precision and the protection of privacy. We show that LESS achieves the minimax optimal rate of convergence in terms of the size of the labeled sample. LESS extends naturally to applications where data are held separately by different sources. Compared with the existing literature on distributed learning, LESS removes the restriction of a minimum sample size on single data sources.
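
A minimal sketch of the summary-statistics idea: each source maps its raw data through a fixed feature map and shares only aggregated second moments, from which the analyst solves a ridge-regularized least-squares problem. The random Fourier features, the two simulated sources, and the ridge parameter are stand-ins for illustration; they are not the empirical features or the tuning used by LESS.

```python
import numpy as np

rng = np.random.default_rng(5)
D, lam = 100, 1e-2                                   # number of features, ridge parameter
W = rng.standard_normal((D, 1)) * 2.0                # random frequencies (fixed, shared)
b = rng.uniform(0, 2 * np.pi, D)

def features(x):                                     # x: (n, 1) -> (n, D) random Fourier features
    return np.sqrt(2.0 / D) * np.cos(x @ W.T + b)

def summaries(x, y):
    """What a source shares: sums of phi phi^T and phi * y, plus the sample count."""
    phi = features(x)
    return phi.T @ phi, phi.T @ y, x.shape[0]

def target(x):
    return np.sin(3 * x) + 0.5 * x

# two separate data sources; raw data never leaves them
S1 = S2 = 0.0
n_total = 0
for n_src in [300, 500]:
    x = rng.uniform(-2, 2, (n_src, 1))
    y = target(x) + 0.1 * rng.standard_normal((n_src, 1))
    A, c, n = summaries(x, y)
    S1, S2, n_total = S1 + A, S2 + c, n_total + n

# regression coefficients from summary statistics only
w = np.linalg.solve(S1 / n_total + lam * np.eye(D), S2 / n_total)

x_test = np.linspace(-2, 2, 5).reshape(-1, 1)
print(np.hstack([target(x_test), features(x_test) @ w]))  # truth vs. prediction
```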

