Thresholded spectral algorithms for sparse approximations

2017 ◽  
Vol 15 (03) ◽  
pp. 433-455 ◽  
Author(s):  
Zheng-Chu Guo ◽  
Dao-Hong Xiang ◽  
Xin Guo ◽  
Ding-Xuan Zhou

Spectral algorithms form a general framework that unifies many regularization schemes in learning theory. In this paper, we propose and analyze a class of thresholded spectral algorithms built on empirical features. Soft thresholding is adopted to achieve sparse approximations. Our analysis shows that, without any sparsity assumption on the regression function, the output functions of thresholded spectral algorithms are represented by empirical features with satisfactory sparsity, while the convergence rates remain comparable to those of the classical spectral algorithms in the literature.
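For concreteness, the following numpy sketch applies soft thresholding to spectrally filtered coefficients in the empirical eigenbasis of a kernel matrix. The Tikhonov (ridge-type) filter and the parameters lam and tau are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def soft_threshold(v, tau):
    """Soft-thresholding operator: sign(v) * max(|v| - tau, 0)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def thresholded_spectral_regression(K, y, lam=1e-2, tau=1e-3):
    """Tikhonov-filtered regression in the empirical eigenbasis of the
    kernel matrix K, followed by soft thresholding of the coefficients."""
    n = len(y)
    evals, U = np.linalg.eigh(K)              # empirical features (eigenpairs)
    beta = (U.T @ y) / (evals + n * lam)      # spectrally filtered coefficients
    beta = soft_threshold(beta, tau)          # small entries become exactly zero
    alpha = U @ beta                          # back to the kernel expansion
    return alpha                              # f(x) = sum_i alpha_i k(x_i, x)
```

Entries of beta with magnitude below tau are set exactly to zero, which is what yields a sparse representation in the empirical features.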

Author(s):  
Huijun Guo ◽  
Junke Kou

This paper considers wavelet estimation of a regression function based on a negatively associated sample. We provide upper bounds on the [Formula: see text] risk of linear and nonlinear wavelet estimators in Besov spaces. When the random sample reduces to the independent case, our convergence rates coincide with the optimal convergence rates of classical nonparametric regression estimation.
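As a point of reference, the linear wavelet estimator is a projection onto scaling functions at a fixed resolution level, with empirical coefficients. A minimal sketch with the Haar basis on [0, 1] follows; the Haar choice and the level j0 are simplifying assumptions, and the paper also treats nonlinear (thresholded) estimators over general Besov balls.

```python
import numpy as np

def haar_phi(x, j, k):
    """Haar scaling function phi_{j,k}(x) = 2^{j/2} * 1{k <= 2^j x < k + 1}."""
    return 2.0 ** (j / 2) * ((2.0 ** j * x >= k) & (2.0 ** j * x < k + 1))

def linear_wavelet_estimator(X, Y, j0=3):
    """Linear (projection) wavelet estimator on [0, 1]:
    f_hat = sum_k alpha_{j0,k} phi_{j0,k},
    with alpha_{j0,k} = (1/n) sum_i Y_i phi_{j0,k}(X_i)."""
    alpha = {k: np.mean(Y * haar_phi(X, j0, k)) for k in range(2 ** j0)}
    return lambda x: sum(a * haar_phi(x, j0, k) for k, a in alpha.items())
```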


Author(s):  
Andreas Neuenkirch ◽  
Michaela Szölgyenyi

We study the strong convergence order of the Euler–Maruyama (EM) scheme for scalar stochastic differential equations with additive noise and irregular drift. We provide a general framework for the error analysis by reducing it to a weighted quadrature problem for irregular functions of Brownian motion. Assuming Sobolev–Slobodeckij-type regularity of order $\kappa \in (0,1)$ for the nonsmooth part of the drift, our analysis of the quadrature problem yields the convergence order $\min\{3/4,(1+\kappa)/2\}-\epsilon$ for the equidistant EM scheme (for arbitrarily small $\epsilon>0$). The cut-off of the convergence order at $3/4$ can be overcome by using a suitable nonequidistant discretization, which yields the strong convergence order $(1+\kappa)/2-\epsilon$ for the corresponding EM scheme.
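The equidistant EM scheme is the recursion $X_{k+1} = X_k + \mu(X_k)\,\Delta t + \sigma\,\Delta W_k$ with i.i.d. Gaussian increments $\Delta W_k \sim N(0, \Delta t)$. A minimal sketch, using a discontinuous drift as an illustrative example of the irregular drifts covered:

```python
import numpy as np

def euler_maruyama(mu, sigma, x0, T=1.0, n=1000, seed=None):
    """Equidistant Euler-Maruyama scheme for dX_t = mu(X_t) dt + sigma dW_t
    with additive noise (constant sigma)."""
    rng = np.random.default_rng(seed)
    dt = T / n
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        dW = rng.normal(0.0, np.sqrt(dt))   # Brownian increment over [t_k, t_{k+1}]
        x[k + 1] = x[k] + mu(x[k]) * dt + sigma * dW
    return x

# e.g. a discontinuous drift, one of the irregular drifts the analysis allows:
path = euler_maruyama(mu=lambda x: -np.sign(x), sigma=1.0, x0=0.5, seed=0)
```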


2012 ◽  
Vol 28 (5) ◽  
pp. 935-958 ◽  
Author(s):  
Degui Li ◽  
Zudi Lu ◽  
Oliver Linton

Local linear fitting is a popular nonparametric method in statistical and econometric modeling. Lu and Linton (2007, Econometric Theory, 23, 37–70) established the pointwise asymptotic distribution of the local linear estimator of a nonparametric regression function under the condition of near epoch dependence. In this paper, we further investigate the uniform consistency of this estimator. Uniform strong and weak consistency, with convergence rates, is established for the local linear fitting under mild conditions. Furthermore, general results regarding uniform convergence rates for nonparametric kernel-based estimators are provided. The results of this paper will be of wide potential interest in time series semiparametric modeling.
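For reference, the local linear estimator at a point x0 solves a kernel-weighted least squares problem in an intercept and a slope; the fitted intercept is the regression estimate. A minimal sketch with a Gaussian kernel (the kernel and bandwidth are illustrative choices):

```python
import numpy as np

def local_linear(X, Y, x0, h):
    """Local linear estimate m_hat(x0): weighted least squares fit of
    Y_i ~ a + b * (X_i - x0) with Gaussian kernel weights K((X_i - x0)/h)."""
    w = np.exp(-0.5 * ((X - x0) / h) ** 2)       # kernel weights
    Z = np.column_stack([np.ones_like(X), X - x0])
    WZ = Z * w[:, None]
    a, b = np.linalg.solve(Z.T @ WZ, WZ.T @ Y)   # weighted normal equations
    return a                                     # the intercept estimates m(x0)
```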


1993 ◽  
Vol 9 (3) ◽  
pp. 451-477 ◽  
Author(s):  
Pedro L. Gozalo

This paper proposes a general framework for specification testing of the regression function in a nonparametric smoothing estimation context. The same analysis can be applied to cases as varied as testing for omission of variables, testing certain nonlinear restrictions on the regressors, and testing the correct specification of some parametric or semiparametric model of interest, for example, testing for a certain type of nonlinearity of the regression function. Furthermore, the test can be applied to i.i.d. and time-series data, and some or all of the regressors are allowed to be discrete. A Monte Carlo simulation is used to assess the performance of the test in small and medium samples.
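Schematically, such tests compare a fitted parametric model with a nonparametric smoother. The sketch below computes only the raw squared discrepancies at a finite set of evaluation points; the paper's actual statistic standardizes these differences to obtain an asymptotic null distribution, so this illustrates the comparison being tested, not the test itself.

```python
import numpy as np

def nw(X, Y, x0, h):
    """Nadaraya-Watson kernel smoother at x0 (Gaussian kernel)."""
    w = np.exp(-0.5 * ((X - x0) / h) ** 2)
    return np.sum(w * Y) / np.sum(w)

def discrepancy(X, Y, parametric_fit, points, h):
    """Squared discrepancies between a fitted parametric model and the
    kernel smoother at a finite set of evaluation points (unstandardized)."""
    return sum((nw(X, Y, p, h) - parametric_fit(p)) ** 2 for p in points)
```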


2021 ◽  
Vol 7 (3) ◽  
pp. 3509-3523
Author(s):  
Yanping Liu ◽  
Juliang Yin

The varying coefficient model assumes that the regression function depends linearly on some regressors and that the regression coefficients are smooth functions of other predictor variables. It provides appreciable flexibility in capturing the underlying dynamics in data and avoids the so-called "curse of dimensionality" in analyzing complex and multivariate nonlinear structures. Existing estimation methods usually assume that the model errors are independent; however, this assumption may not be satisfied in practice. In this study, we investigate estimation for the varying coefficient model with correlated errors via B-splines. The B-spline approach, as a global smoothing method, is computationally efficient. Under suitable conditions, the convergence rates of the proposed estimators are obtained. Furthermore, two simulation examples are employed to demonstrate the performance of the proposed approach and the necessity of accounting for correlated errors.
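The B-spline approach expands each coefficient function in a spline basis and reduces estimation to least squares on a tensor design. A minimal sketch follows; it uses plain least squares (ignoring the error correlation the paper accounts for), and the knots and degree are illustrative.

```python
import numpy as np
from scipy.interpolate import BSpline

def bspline_basis(u, knots, degree=3):
    """Evaluate all B-spline basis functions at u; returns an (n, nbasis) matrix."""
    nbasis = len(knots) - degree - 1
    return np.column_stack(
        [BSpline(knots, np.eye(nbasis)[j], degree)(u) for j in range(nbasis)]
    )

def fit_varying_coefficients(Y, X, U, knots, degree=3):
    """Least-squares fit of Y = sum_j a_j(U) * X_j + eps, with each coefficient
    function a_j expanded in the B-spline basis (errors treated as independent
    here; the paper additionally adjusts for correlated errors)."""
    B = bspline_basis(U, knots, degree)                        # (n, nbasis)
    D = np.hstack([X[:, [j]] * B for j in range(X.shape[1])])  # tensor design
    coef, *_ = np.linalg.lstsq(D, Y, rcond=None)
    return coef.reshape(X.shape[1], -1)                        # row j: c_{j, .}
```

A clamped knot vector such as np.r_[np.zeros(3), np.linspace(0, 1, 8), np.ones(3)] gives a cubic basis on [0, 1].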


2016 ◽  
Vol 28 (3) ◽  
pp. 525-562 ◽  
Author(s):  
Yunlong Feng ◽  
Shao-Gao Lv ◽  
Hanyuan Hang ◽  
Johan A. K. Suykens

Kernelized elastic net regularization (KENReg) is a kernelization of the well-known elastic net regularization (Zou & Hastie, 2005). The kernel in KENReg is not required to be a Mercer kernel since it learns from a kernelized dictionary in the coefficient space. Feng, Yang, Zhao, Lv, and Suykens (2014) showed that KENReg has some nice properties, including stability, sparseness, and generalization. In this letter, we continue our study of KENReg by conducting a refined learning theory analysis. This letter makes three main contributions. First, we present a refined error analysis of the generalization performance of KENReg. The main difficulty in analyzing the generalization error of KENReg lies in characterizing the population version of its empirical target function. We overcome this by introducing a weighted Banach space associated with the elastic net regularization. We are then able to conduct an elaborated learning theory analysis and obtain fast convergence rates under proper complexity and regularity assumptions. Second, we study the sparse recovery problem in KENReg with fixed design and show that the kernelization may improve the sparse recovery ability compared to the classical elastic net regularization. Finally, we discuss the interplay among different properties of KENReg, including sparseness, stability, and generalization. We show that the stability of KENReg leads to generalization and that guarantees on its sparseness can be derived from generalization. Moreover, KENReg can be simultaneously stable and sparse, which makes it attractive both theoretically and practically.
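Since KENReg learns coefficients over a kernelized dictionary, its objective can be minimized by proximal gradient descent (ISTA), with the l1 term handled by soft thresholding. A minimal sketch under that formulation; the objective scaling and the parameters are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def kernel_elastic_net(K, y, lam1=1e-2, lam2=1e-2, iters=500):
    """ISTA for min_c (1/n)||y - K c||^2 + lam1 ||c||_1 + lam2 ||c||^2 over
    the kernelized dictionary {k(x_i, .)}; the matrix K need not be positive
    semidefinite, i.e. no Mercer condition is required."""
    n = len(y)
    step = 1.0 / (2.0 * np.linalg.norm(K, 2) ** 2 / n + 2.0 * lam2)  # 1/Lipschitz
    c = np.zeros(n)
    for _ in range(iters):
        grad = (2.0 / n) * (K.T @ (K @ c - y)) + 2.0 * lam2 * c  # smooth part
        c = soft_threshold(c - step * grad, step * lam1)         # prox of l1 part
    return c    # predictor: f(x) = sum_i c_i k(x_i, x)
```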


2016 ◽  
Vol 14 (06) ◽  
pp. 763-794 ◽  
Author(s):  
Gilles Blanchard ◽  
Nicole Krämer

We prove statistical rates of convergence for kernel-based least squares regression from i.i.d. data using a conjugate gradient (CG) algorithm, where regularization against overfitting is obtained by early stopping. This method is related to Kernel Partial Least Squares, a regression method that combines supervised dimensionality reduction with least squares projection. Following the setting introduced in earlier related literature, we study so-called “fast convergence rates” depending on the regularity of the target regression function (measured by a source condition in terms of the kernel integral operator) and on the effective dimensionality of the data mapped into the kernel space. We obtain upper bounds, essentially matching known minimax lower bounds, for the ℒ2 (prediction) norm as well as for the stronger Hilbert norm, if the true regression function belongs to the reproducing kernel Hilbert space. If the latter assumption is not fulfilled, we obtain similar convergence rates for appropriate norms, provided additional unlabeled data are available.
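As a sketch of the underlying iteration, plain conjugate gradient applied to the kernel system K alpha = y, started at zero and stopped after m steps, uses the iteration count as the regularization parameter. The exact CG variant and the data-driven stopping rule analyzed in the paper differ in details.

```python
import numpy as np

def cg_kernel_regression(K, y, m):
    """Run m conjugate gradient iterations on K alpha = y, starting from zero;
    early stopping at iteration m plays the role of regularization."""
    alpha = np.zeros_like(y, dtype=float)
    r = y.astype(float).copy()            # residual y - K alpha
    p = r.copy()                          # search direction
    rs = r @ r
    for _ in range(m):
        Kp = K @ p
        step = rs / (p @ Kp)
        alpha = alpha + step * p
        r = r - step * Kp
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return alpha                          # f(x) = sum_i alpha_i k(x_i, x)
```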


2010 ◽  
Vol 27 (3) ◽  
pp. 522-545 ◽  
Author(s):  
Jan Johannes ◽  
Sébastien Van Bellegem ◽  
Anne Vanhems

This paper studies the estimation of a nonparametric function ϕ from the inverse problem r = Tϕ, given estimates of the function r and of the linear transform T. We show that rates of convergence of the estimator are driven by two types of assumptions expressed in a single Hilbert scale. The two assumptions quantify the prior regularity of ϕ and the prior link existing between T and the Hilbert scale. The approach provides a unified framework that allows us to compare various sets of structural assumptions found in the econometric literature. Moreover, general upper bounds are also derived for the risk of the estimator of the structural function ϕ as well as that of its derivatives. It is shown that the bounds cover and extend known results given in the literature. Two important applications are also studied. The first is blind nonparametric deconvolution on the real line, and the second is the estimation of the derivatives of the nonparametric instrumental regression function via an iterative Tikhonov regularization scheme.
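For the second application, the iterative Tikhonov scheme has a simple recursion once T and r are discretized to a matrix and a vector. A minimal sketch (the regularization parameter alpha and the number of iterations m are illustrative):

```python
import numpy as np

def iterative_tikhonov(T, r, alpha=1e-2, m=3):
    """Iterative Tikhonov regularization for r = T phi, with T and r replaced
    by (discretized) estimates:
    phi_{k+1} = (T'T + alpha I)^{-1} (T' r + alpha phi_k),  phi_0 = 0.
    Taking m = 1 recovers ordinary Tikhonov regularization."""
    A = T.T @ T + alpha * np.eye(T.shape[1])
    phi = np.zeros(T.shape[1])
    for _ in range(m):
        phi = np.linalg.solve(A, T.T @ r + alpha * phi)
    return phi
```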


2011 ◽  
Vol 09 (04) ◽  
pp. 369-382
Author(s):  
Ming Li ◽ 
Andrea Caponnetto

We consider a wide class of error bounds developed in the context of statistical learning theory which are expressed in terms of functionals of the regression function, for instance, its norm in a reproducing kernel Hilbert space or another functional space. These bounds are unstable in the sense that a small perturbation of the regression function can induce an arbitrarily large increase of the relevant functional and make the error bound useless. Using a known result involving Fano's inequality, we show how stability can be recovered.

