Worst-case Recovery Guarantees for Least Squares Approximation Using Random Samples

Author(s): Lutz Kämmerer, Tino Ullrich, Toni Volkmer

We construct a least squares approximation method for the recovery of complex-valued functions from a reproducing kernel Hilbert space on $$D \subset \mathbb{R}^d$$. The nodes are drawn at random for the whole class of functions, and the error is measured in $$L_2(D, \varrho_D)$$. We prove worst-case recovery guarantees by explicitly controlling all the involved constants. This leads to new preasymptotic recovery bounds with high probability for the error of hyperbolic Fourier regression on multivariate data. In addition, we further investigate its counterpart, hyperbolic wavelet regression, also based on least squares, to recover non-periodic functions from random samples. Finally, we reconsider the analysis of a cubature method based on plain random points with optimal weights and reveal near-optimal worst-case error bounds with high probability. It turns out that this simple method can compete with the quasi-Monte Carlo methods in the literature which are based on lattices and digital nets.
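For intuition, here is a minimal NumPy sketch of unweighted least squares Fourier regression on a hyperbolic-cross frequency set from i.i.d. random nodes; the dimension d = 2, the index-set radius, and the test function are illustrative assumptions, and the sketch omits the weighting and node-number conditions used in the actual analysis.

```python
import itertools
import numpy as np

def hyperbolic_cross(d, radius):
    """Frequency indices k in Z^d with prod_j max(1, |k_j|) <= radius (illustrative index set)."""
    one_d = range(-radius, radius + 1)
    return [k for k in itertools.product(one_d, repeat=d)
            if np.prod([max(1, abs(kj)) for kj in k]) <= radius]

def fourier_least_squares(x, y, freqs):
    """Least squares fit of y ~ sum_k c_k exp(2*pi*i <k, x>) at the random nodes x (n x d)."""
    A = np.exp(2j * np.pi * (x @ np.array(freqs).T))   # n x |freqs| design matrix
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

# Toy example: recover a trigonometric polynomial in d = 2 from uniform random samples.
rng = np.random.default_rng(0)
d, n = 2, 400
freqs = hyperbolic_cross(d, radius=4)
x = rng.random((n, d))                                  # nodes drawn i.i.d. at random
f = lambda x: np.cos(2 * np.pi * x[:, 0]) * np.sin(4 * np.pi * x[:, 1])
coef = fourier_least_squares(x, f(x), freqs)
```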

2013 · Vol 2013 · pp. 1-8
Author(s): Jia Cai

We investigate a coefficient-based least squares regression problem with indefinite kernels from non-identical unbounded sampling processes. Here, non-identical unbounded sampling means that the samples are drawn independently, but not identically, from unbounded sampling processes. The kernel is not necessarily symmetric or positive semi-definite, which creates additional difficulty in the error analysis. By introducing a suitable reproducing kernel Hilbert space (RKHS) and a suitable intermediate integral operator, we carry out a detailed error analysis by means of a novel technique for estimating the sample error and obtain satisfactory error bounds.
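A minimal sketch of the coefficient-based scheme may help: because the kernel need not be symmetric or positive semi-definite, the penalty is placed directly on the coefficient vector rather than on an RKHS norm. The particular indefinite kernel and parameter values below are hypothetical choices for illustration only.

```python
import numpy as np

def indefinite_kernel(X, Z):
    """A hypothetical non-symmetric, possibly indefinite kernel, used only for illustration."""
    return np.sin(X @ Z.T) + 0.5 * (X.sum(axis=1)[:, None] - Z.sum(axis=1)[None, :])

def coefficient_regularized_ls(X, y, lam, kernel=indefinite_kernel):
    """Minimize (1/n) * ||K c - y||^2 + lam * ||c||^2, where K_ij = K(x_i, x_j) need not be PSD."""
    n = len(y)
    K = kernel(X, X)
    c = np.linalg.solve(K.T @ K + n * lam * np.eye(n), K.T @ y)   # normal equations
    return lambda Xnew: kernel(Xnew, X) @ c                        # f(x) = sum_j c_j K(x, x_j)

# Toy usage on synthetic data.
rng = np.random.default_rng(1)
X = rng.normal(size=(80, 3))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=80)
predict = coefficient_regularized_ls(X, y, lam=1e-2)
```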


2019 · Vol 18 (01) · pp. 49-78
Author(s): Cheng Wang, Ting Hu

In this paper, we study an online algorithm for pairwise problems generated from the Tikhonov regularization scheme associated with the least squares loss function and a reproducing kernel Hilbert space (RKHS). This work establishes convergence for the last iterate of the online pairwise algorithm with polynomially decaying step sizes and varying regularization parameters. We show that the obtained error rate in the [Formula: see text]-norm can be nearly optimal in the minimax sense under some mild conditions. Our analysis relies on a sharp estimate of the norms of the learning sequence, the characterization of the RKHS through its associated integral operators, and probability inequalities for random variables with values in a Hilbert space.
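A hedged sketch of such an online pairwise update is given below: each incoming sample is paired with the previous one, and a gradient step on the pairwise least squares loss is combined with shrinkage from the time-varying regularization parameter. The Gaussian kernel, the step-size and regularization exponents, and the restriction to pairing with only the most recent example are simplifying assumptions, not the exact scheme analyzed in the paper.

```python
import numpy as np

def gaussian_kernel(x, z, sigma=1.0):
    return np.exp(-np.sum((x - z) ** 2) / (2 * sigma ** 2))

def online_pairwise_ls(stream, eta1=0.5, theta=0.6, lam1=0.1, beta=0.3, kernel=gaussian_kernel):
    """Online pairwise least squares in an RKHS (constant factors absorbed into the step size):
    f_{t+1} = (1 - eta_t * lam_t) f_t - eta_t * resid_t * (K_{x_t} - K_{x_{t-1}}),
    with resid_t = f_t(x_t) - f_t(x_{t-1}) - (y_t - y_{t-1}),
    eta_t = eta1 * t^(-theta) and lam_t = lam1 * t^(-beta)."""
    centers, alphas = [], []              # f_t(x) = sum_i alphas[i] * kernel(centers[i], x)
    prev = None
    for t, (x, y) in enumerate(stream, start=1):
        if prev is not None:
            eta_t, lam_t = eta1 * t ** (-theta), lam1 * t ** (-beta)
            xp, yp = prev
            f_x = sum(a * kernel(c, x) for a, c in zip(alphas, centers))
            f_xp = sum(a * kernel(c, xp) for a, c in zip(alphas, centers))
            resid = f_x - f_xp - (y - yp)
            alphas = [(1 - eta_t * lam_t) * a for a in alphas]    # shrinkage from Tikhonov term
            centers += [x, xp]
            alphas += [-eta_t * resid, eta_t * resid]             # gradient step on pairwise loss
        prev = (x, y)
    return centers, alphas

# Toy usage on a synthetic stream.
rng = np.random.default_rng(2)
stream = [(rng.normal(size=2), float(rng.normal())) for _ in range(50)]
centers, alphas = online_pairwise_ls(stream)
```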


2017 · Vol 15 (06) · pp. 815-836
Author(s): Yulong Zhao, Jun Fan, Lei Shi

The ranking problem aims at learning real-valued functions to order instances and has attracted great interest in statistical learning theory. In this paper, we consider the regularized least squares ranking algorithm within the framework of a reproducing kernel Hilbert space. In particular, we focus on the analysis of the generalization error for this ranking algorithm and improve the existing learning rates by virtue of an error decomposition technique from regression and Hoeffding's decomposition for U-statistics.
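As an illustration, the sketch below implements a RankRLS-style regularized least squares ranker in closed form; the particular normalization of the pairwise objective and the RBF kernel are assumptions made for the example, not necessarily the formulation analyzed in the paper.

```python
import numpy as np

def rbf_kernel(X, Z, sigma=1.0):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def rls_rank(X, y, lam, kernel=rbf_kernel):
    """Regularized least squares ranking: minimize
    (1/n^2) * sum_{i,j} ((y_i - y_j) - (f(x_i) - f(x_j)))^2 + lam * ||f||_K^2
    over f = sum_k c_k K(., x_k).  Using sum_{i,j} (a_i - a_j)^2 = 2 a^T L a with
    L = n*I - 1 1^T, the minimizer solves (L K + (lam * n^2 / 2) I) c = L y."""
    n = len(y)
    K = kernel(X, X)
    L = n * np.eye(n) - np.ones((n, n))
    c = np.linalg.solve(L @ K + (lam * n ** 2 / 2) * np.eye(n), L @ y)
    return lambda Xnew: kernel(Xnew, X) @ c        # real-valued scoring function

# Toy usage: learn to order points by an underlying score.
rng = np.random.default_rng(3)
X = rng.normal(size=(60, 2))
y = X[:, 0] - 0.5 * X[:, 1] + 0.1 * rng.normal(size=60)
score = rls_rank(X, y, lam=1e-2)
```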


2016 · Vol 14 (06) · pp. 763-794
Author(s): Gilles Blanchard, Nicole Krämer

We prove statistical rates of convergence for kernel-based least squares regression from i.i.d. data using a conjugate gradient (CG) algorithm, where regularization against overfitting is obtained by early stopping. This method is related to Kernel Partial Least Squares, a regression method that combines supervised dimensionality reduction with least squares projection. Following the setting introduced in earlier related literature, we study so-called “fast convergence rates” depending on the regularity of the target regression function (measured by a source condition in terms of the kernel integral operator) and on the effective dimensionality of the data mapped into the kernel space. We obtain upper bounds, essentially matching known minimax lower bounds, for the $L_2$ (prediction) norm as well as for the stronger Hilbert norm, if the true regression function belongs to the reproducing kernel Hilbert space. If the latter assumption is not fulfilled, we obtain similar convergence rates for appropriate norms, provided additional unlabeled data are available.
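A minimal sketch of the idea: run plain conjugate gradient on the kernel system K alpha = y and stop after a fixed number of iterations, so that the stopping index plays the role of the regularization parameter. The RBF kernel, the unnormalized system, and the fixed stopping rule are illustrative simplifications of the CG / Kernel Partial Least Squares setting studied in the paper.

```python
import numpy as np

def rbf_kernel(X, Z, sigma=1.0):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def kernel_cg_regression(X, y, n_iter, kernel=rbf_kernel):
    """Conjugate gradient on K alpha = y, with early stopping after n_iter steps as regularization."""
    K = kernel(X, X)
    alpha = np.zeros_like(y, dtype=float)
    r = y - K @ alpha                   # residual
    p = r.copy()                        # search direction
    for _ in range(n_iter):
        Kp = K @ p
        step = (r @ r) / (p @ Kp)
        alpha = alpha + step * p
        r_new = r - step * Kp
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return lambda Xnew: kernel(Xnew, X) @ alpha

# Toy usage: the number of CG iterations acts as the regularization parameter.
rng = np.random.default_rng(4)
X = rng.uniform(-1, 1, size=(100, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=100)
f_hat = kernel_cg_regression(X, y, n_iter=8)
```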


Acta Numerica · 2013 · Vol 22 · pp. 133-288
Author(s): Josef Dick, Frances Y. Kuo, Ian H. Sloan

This paper is a contemporary review of QMC (‘quasi-Monte Carlo’) methods, that is, equal-weight rules for the approximate evaluation of high-dimensional integrals over the unit cube $[0,1]^s$, where s may be large, or even infinite. After a general introduction, the paper surveys recent developments in lattice methods, digital nets, and related themes. Among those recent developments are methods of construction of both lattices and digital nets, to yield QMC rules that have a prescribed rate of convergence for sufficiently smooth functions, and ideally also guaranteed slow growth (or no growth) of the worst-case error as s increases. A crucial role is played by parameters called ‘weights’, since a careful use of the weight parameters is needed to ensure that the worst-case errors in an appropriately weighted function space are bounded, or grow only slowly, as the dimension s increases. Important tools for the analysis are weighted function spaces, reproducing kernel Hilbert spaces, and discrepancy, all of which are discussed with an appropriate level of detail.
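To make the equal-weight rule concrete, here is a sketch of a rank-1 lattice rule in s = 3 dimensions; the generating vector below is a hypothetical choice for illustration, not one produced by a component-by-component construction with carefully chosen weights.

```python
import numpy as np

def rank1_lattice_points(n, z):
    """Rank-1 lattice nodes x_i = frac(i * z / n), i = 0, ..., n-1, in [0, 1)^s."""
    i = np.arange(n)[:, None]
    return (i * np.asarray(z)[None, :] / n) % 1.0

def qmc_integrate(f, points):
    """Equal-weight QMC rule: (1/n) * sum_i f(x_i)."""
    return np.mean(f(points))

# Toy usage: s = 3, n = 1021 (prime), hypothetical generating vector z.
n, z = 1021, [1, 306, 388]
pts = rank1_lattice_points(n, z)
f = lambda x: np.prod(1.0 + (x - 0.5), axis=1)    # exact integral over [0,1]^3 equals 1
approx = qmc_integrate(f, pts)
```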


2021
Author(s): Hongzhi Tong

To cope with the challenges of memory bottlenecks and algorithmic scalability when massive data sets are involved, we propose a distributed least squares procedure in the framework of the functional linear model and reproducing kernel Hilbert space. This approach divides the big data set into multiple subsets, applies regularized least squares regression to each of them, and then averages the individual outputs as a final prediction. We establish non-asymptotic prediction error bounds for the proposed learning strategy under some regularity conditions. When the target function has only weak regularity, we also introduce unlabelled data to construct a semi-supervised approach that enlarges the number of partitioned subsets. The results in the present paper provide a theoretical guarantee that the distributed algorithm can achieve the optimal rate of convergence while allowing the whole data set to be partitioned into a large number of subsets for parallel processing.
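A sketch of the divide-and-average strategy, using ordinary kernel ridge regression on vector covariates in place of the functional linear model: split the data into subsets, solve a regularized least squares problem on each subset, and average the local predictions. The kernel, the regularization parameter, and the number of subsets are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, Z, sigma=1.0):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def krr_fit(X, y, lam, kernel=rbf_kernel):
    """Regularized (kernel ridge) least squares on one subset: solve (K + n*lam*I) alpha = y."""
    n = len(y)
    alpha = np.linalg.solve(kernel(X, X) + n * lam * np.eye(n), y)
    return lambda Xnew: kernel(Xnew, X) @ alpha

def distributed_krr(X, y, n_subsets, lam):
    """Divide-and-conquer: fit regularized least squares on each subset, average the predictions."""
    idx = np.array_split(np.arange(len(y)), n_subsets)
    local = [krr_fit(X[i], y[i], lam) for i in idx]
    return lambda Xnew: np.mean([f(Xnew) for f in local], axis=0)

# Toy usage on synthetic data.
rng = np.random.default_rng(5)
X = rng.uniform(-1, 1, size=(600, 2))
y = np.sin(np.pi * X[:, 0]) * X[:, 1] + 0.1 * rng.normal(size=600)
f_bar = distributed_krr(X, y, n_subsets=6, lam=1e-3)
```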

