Coefficient-Based Regression with Non-Identical Unbounded Sampling

2013 ◽  
Vol 2013 ◽  
pp. 1-8 ◽  
Author(s):  
Jia Cai

We investigate a coefficient-based least squares regression problem with indefinite kernels from non-identical unbounded sampling processes. Here, non-identical unbounded sampling means that the samples are drawn independently, but not identically, from unbounded sampling processes. The kernel is not necessarily symmetric or positive semi-definite, which introduces additional difficulty into the error analysis. By introducing a suitable reproducing kernel Hilbert space (RKHS) and a suitable intermediate integral operator, an elaborate analysis is carried out by means of a novel technique for bounding the sample error, leading to satisfactory learning rates.
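The estimator described here can be sketched numerically. In a coefficient-based scheme the hypothesis is a kernel expansion f(x) = Σᵢ αᵢ K(x, xᵢ) with the penalty placed on the coefficient vector rather than on an RKHS norm, so the kernel need not be symmetric or positive semi-definite. The ℓ²-coefficient regularizer and the example kernel below are illustrative assumptions, not the paper's exact setup:

```python
import numpy as np

def coefficient_ls(X, y, kernel, lam):
    """l2-coefficient-regularized least squares.  The kernel need not be
    symmetric or positive semi-definite (assumed formulation:
    min_a ||y - K a||^2 / m + lam * ||a||^2)."""
    m = len(y)
    K = np.array([[kernel(xi, xj) for xj in X] for xi in X])
    # Normal equations: K^T K + lam*m*I is positive definite even when K is indefinite.
    a = np.linalg.solve(K.T @ K + lam * m * np.eye(m), K.T @ y)
    return lambda x: sum(ai * kernel(x, xi) for ai, xi in zip(a, X))

# An indefinite, non-symmetric example kernel (illustrative only).
k = lambda s, t: np.tanh(2.0 * s * t + 0.5) + 0.3 * (s - t)
```

Because the normal-equation matrix KᵀK + λmI is always positive definite, the linear solve succeeds even though K itself has no definiteness or symmetry.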

2019 ◽  
Vol 18 (01) ◽  
pp. 49-78 ◽  
Author(s):  
Cheng Wang ◽  
Ting Hu

In this paper, we study an online algorithm for pairwise problems generated from the Tikhonov regularization scheme associated with the least squares loss and a reproducing kernel Hilbert space (RKHS). We establish convergence for the last iterate of the online pairwise algorithm with polynomially decaying step sizes and varying regularization parameters, and show that the resulting error rate in the [Formula: see text]-norm can be nearly optimal in the minimax sense under mild conditions. Our analysis rests on a sharp estimate for the norms of the learning sequence, a characterization of the RKHS via its associated integral operator, and probability inequalities for random variables with values in a Hilbert space.
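A minimal one-pass sketch of such an online pairwise update may help fix ideas. Here each sample is paired with its immediate predecessor, the step size decays polynomially, and the regularization parameter is held fixed; the pairing scheme, the kernel, and the schedule constants are illustrative assumptions, not the paper's exact choices:

```python
import numpy as np

def gauss(s, t, sigma=0.25):
    # Gaussian kernel (illustrative choice of kernel and bandwidth)
    return float(np.exp(-(s - t) ** 2 / (2 * sigma ** 2)))

def online_pairwise(xs, ys, kernel=gauss, eta0=0.5, theta=0.6, lam=0.01):
    """One-pass online pairwise least squares in an RKHS (hedged sketch:
    predecessor pairing, step-size exponent theta, and fixed lam are
    illustrative, not the paper's schedule)."""
    pts, coef = [], []

    def f(x):  # current iterate, stored as a kernel expansion
        return sum(c * kernel(x, p) for c, p in zip(coef, pts))

    for t in range(1, len(xs)):
        x, xp = xs[t], xs[t - 1]
        g = f(x) - f(xp) - (ys[t] - ys[t - 1])        # pairwise residual
        eta = eta0 * t ** (-theta)                    # polynomially decaying step
        coef = [(1.0 - eta * lam) * c for c in coef]  # Tikhonov shrinkage
        pts += [x, xp]
        coef += [-eta * g, eta * g]                   # functional gradient step
    return f
```

Since the pairwise least squares loss only constrains differences f(x) − f(x′), the learned function is determined up to an additive constant; the last iterate f is what the convergence analysis tracks.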


2014 ◽  
Vol 2014 ◽  
pp. 1-7 ◽  
Author(s):  
Cheng Wang ◽  
Weilin Nie

We introduce a constructive approach for least squares algorithms with generalized K-norm regularization. Unlike previous studies, a stepping-stone function with adjustable parameters is constructed in the error decomposition, which makes the analysis flexible and may extend to other algorithms. Based on a projection technique for the sample error and the spectral theorem for the integral operator in the regularization error, we derive a learning rate.
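The error decomposition referred to here can be written in its standard template form, with an intermediate (stepping-stone) function $f_\lambda$ inserted between the empirical estimator $f_{\mathbf z}$ and the regression function $f_\rho$; this is the generic version, while the paper's stepping-stone function carries extra adjustable parameters. Writing $\mathcal{E}$ and $\mathcal{E}_{\mathbf z}$ for the expected and empirical risks and $\Omega$ for the regularizer, and using that $f_{\mathbf z}$ minimizes the regularized empirical risk (so $\mathcal{E}_{\mathbf z}(f_{\mathbf z}) \le \mathcal{E}_{\mathbf z}(f_\lambda) + \lambda\,\Omega(f_\lambda)$):

```latex
\mathcal{E}(f_{\mathbf z}) - \mathcal{E}(f_\rho)
\;\le\;
\underbrace{\big[\mathcal{E}(f_{\mathbf z}) - \mathcal{E}_{\mathbf z}(f_{\mathbf z})\big]
 + \big[\mathcal{E}_{\mathbf z}(f_\lambda) - \mathcal{E}(f_\lambda)\big]}_{\text{sample error}}
\;+\;
\underbrace{\mathcal{E}(f_\lambda) - \mathcal{E}(f_\rho) + \lambda\,\Omega(f_\lambda)}_{\text{regularization error}}
```

The projection technique bounds the first bracket, while the spectral theorem for the integral operator controls the regularization term.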


2016 ◽  
Vol 14 (06) ◽  
pp. 763-794 ◽  
Author(s):  
Gilles Blanchard ◽  
Nicole Krämer

We prove statistical rates of convergence for kernel-based least squares regression from i.i.d. data using a conjugate gradient (CG) algorithm, where regularization against overfitting is obtained by early stopping. This method is related to Kernel Partial Least Squares, a regression method that combines supervised dimensionality reduction with least squares projection. Following the setting introduced in earlier related literature, we study so-called “fast convergence rates” depending on the regularity of the target regression function (measured by a source condition in terms of the kernel integral operator) and on the effective dimensionality of the data mapped into the kernel space. We obtain upper bounds, essentially matching known minimax lower bounds, for the ℒ2 (prediction) norm as well as for the stronger Hilbert norm, if the true regression function belongs to the reproducing kernel Hilbert space. If the latter assumption is not fulfilled, we obtain similar convergence rates for appropriate norms, provided additional unlabeled data are available.
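The CG procedure with early stopping can be sketched directly on the kernel matrix: run plain conjugate gradient on Kα = y and stop after a small number of steps instead of adding a penalty term. This is a simplified sketch of the idea; the data-dependent stopping rule, which the analysis makes precise, is left as a free parameter here:

```python
import numpy as np

def kernel_cg(K, y, n_iter):
    """Conjugate gradient on K a = y; stopping after n_iter steps acts as
    the regularization (simplified sketch of CG-regularized kernel least
    squares; Kernel Partial Least Squares builds on related Krylov spaces)."""
    a = np.zeros_like(y)
    r = y - K @ a          # residual
    p = r.copy()           # search direction
    for _ in range(n_iter):
        Kp = K @ p
        step = (r @ r) / (p @ Kp)
        a = a + step * p
        r_new = r - step * Kp
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
    return a
```

Each CG step enlarges the Krylov subspace span{y, Ky, K²y, …} over which the fit is optimized, so the iteration count plays the role of the regularization parameter.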


2021 ◽  
Author(s):  
Hongzhi Tong

Abstract To cope with the challenges of memory bottlenecks and algorithmic scalability when massive data sets are involved, we propose a distributed least squares procedure in the framework of the functional linear model and reproducing kernel Hilbert spaces. This approach divides the big data set into multiple subsets, applies regularized least squares regression to each of them, and then averages the individual outputs into a final prediction. We establish non-asymptotic prediction error bounds for the proposed learning strategy under some regularity conditions. When the target function has only weak regularity, we also introduce unlabelled data to construct a semi-supervised approach that enlarges the number of partitioned subsets. The results in the present paper provide a theoretical guarantee that the distributed algorithm achieves the optimal rate of convergence while allowing the whole data set to be partitioned into a large number of subsets for parallel processing.
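The divide-and-conquer procedure described above can be sketched in a few lines. The version below uses plain kernel ridge regression on scalar inputs purely for illustration; the paper's setting is a functional linear model, and its semi-supervised extension with unlabelled data is not shown:

```python
import numpy as np

def gauss(s, t, sigma=0.3):
    # Gaussian kernel (illustrative choice)
    return np.exp(-(s - t) ** 2 / (2 * sigma ** 2))

def krr_fit(X, y, lam):
    """Regularized least squares on one subset, in closed form."""
    K = gauss(X[:, None], X[None, :])
    alpha = np.linalg.solve(K + lam * len(y) * np.eye(len(y)), y)
    return lambda x: gauss(np.asarray(x)[:, None], X[None, :]) @ alpha

def distributed_krr(X, y, lam, n_parts):
    """Divide-and-conquer least squares: partition the data, fit a local
    regularized estimator on each subset, average the local predictions."""
    parts = np.array_split(np.random.default_rng(0).permutation(len(y)), n_parts)
    models = [krr_fit(X[idx], y[idx], lam) for idx in parts]
    return lambda x: np.mean([m(x) for m in models], axis=0)
```

Averaging keeps each local solve at the cost of a small linear system while, per the bounds above, the averaged predictor can still attain the optimal rate provided the number of subsets is not too large (or is enlarged via unlabelled data).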


2018 ◽  
Vol 311 ◽  
pp. 235-244 ◽  
Author(s):  
Xiang-Jun Shen ◽  
Yong Dong ◽  
Jian-Ping Gou ◽  
Yong-Zhao Zhan ◽  
Jianping Fan
