Distributed least squares prediction for functional linear regression

2021 ◽  
Author(s):  
Hongzhi Tong

Abstract To cope with the challenges of memory bottlenecks and algorithmic scalability when massive data sets are involved, we propose a distributed least squares procedure in the framework of the functional linear model and reproducing kernel Hilbert space. This approach divides the big data set into multiple subsets, applies regularized least squares regression to each of them, and then averages the individual outputs as a final prediction. We establish non-asymptotic prediction error bounds for the proposed learning strategy under some regularity conditions. When the target function has only weak regularity, we also introduce unlabelled data to construct a semi-supervised approach that enlarges the number of partitioned subsets. The results in the present paper provide a theoretical guarantee that the distributed algorithm can achieve the optimal rate of convergence while allowing the whole data set to be partitioned into a large number of subsets for parallel processing.
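As an illustration of the divide-and-average strategy (a minimal sketch, not the paper's exact estimator): functional covariates are assumed here to be discretized on a common grid, a Gaussian kernel stands in for the paper's reproducing kernel, and each subset estimator is a kernel ridge regressor whose predictions are averaged.

```python
import numpy as np

def gaussian_kernel(A, B, s=1.0):
    """Gram matrix between rows of A and B (curves sampled on a common grid)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * s ** 2))

def krr_fit(X, y, lam):
    """Regularized least squares on one subset: alpha = (K + n*lam*I)^{-1} y."""
    n = len(y)
    return np.linalg.solve(gaussian_kernel(X, X) + n * lam * np.eye(n), y)

def distributed_predict(subsets, X_new, lam=1e-2):
    """Fit each subset independently, then average the individual predictions."""
    preds = [gaussian_kernel(X_new, X) @ krr_fit(X, y, lam)
             for X, y in subsets]
    return np.mean(preds, axis=0)
```

Because each solve touches only its own subset's Gram matrix, the per-machine cost falls from O(N³) to O((N/m)³) for m subsets, which is the memory and scalability gain the abstract refers to.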

2013 ◽  
Vol 2013 ◽  
pp. 1-8 ◽  
Author(s):  
Jia Cai

We investigate a coefficient-based least squares regression problem with indefinite kernels from non-identical unbounded sampling processes. Here, non-identical unbounded sampling means the samples are drawn independently, but not identically, from unbounded sampling processes. The kernel is not necessarily symmetric or positive semi-definite, which creates additional difficulty in the error analysis. By introducing a suitable reproducing kernel Hilbert space (RKHS) and a suitable intermediate integral operator, a detailed analysis is carried out by means of a novel technique for bounding the sample error, leading to satisfactory error bounds.
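A hedged sketch of a coefficient-based regularized least squares estimator of this kind (the exact penalty and normalization in the paper may differ): because the kernel matrix need not be symmetric or positive semi-definite, the coefficient vector is regularized directly instead of an RKHS norm.

```python
import numpy as np

def coefficient_rls(K, y, lam):
    """Solve min_c (1/m) * ||K c - y||^2 + lam * ||c||^2 for a general,
    possibly indefinite and non-symmetric, m-by-m kernel matrix K.
    Normal equations: (K^T K + m*lam*I) c = K^T y."""
    m = len(y)
    return np.linalg.solve(K.T @ K + m * lam * np.eye(m), K.T @ y)
```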


2016 ◽  
Vol 14 (06) ◽  
pp. 763-794 ◽  
Author(s):  
Gilles Blanchard ◽  
Nicole Krämer

We prove statistical rates of convergence for kernel-based least squares regression from i.i.d. data using a conjugate gradient (CG) algorithm, where regularization against overfitting is obtained by early stopping. This method is related to Kernel Partial Least Squares, a regression method that combines supervised dimensionality reduction with least squares projection. Following the setting introduced in earlier related literature, we study so-called “fast convergence rates” depending on the regularity of the target regression function (measured by a source condition in terms of the kernel integral operator) and on the effective dimensionality of the data mapped into the kernel space. We obtain upper bounds, essentially matching known minimax lower bounds, for the ℒ2 (prediction) norm as well as for the stronger Hilbert norm, if the true regression function belongs to the reproducing kernel Hilbert space. If the latter assumption is not fulfilled, we obtain similar convergence rates for appropriate norms, provided additional unlabeled data are available.
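For intuition, here is plain conjugate gradient run on the kernel system K·alpha = y, with the iteration count playing the role of the regularization parameter; this is a simplification, since Blanchard and Krämer's CG variant works in a data-dependent inner product.

```python
import numpy as np

def kernel_cg(K, y, max_iters):
    """CG on K alpha = y; stopping at iteration t regularizes the fit."""
    alpha = np.zeros_like(y, dtype=float)
    r = y - K @ alpha
    p = r.copy()
    rs = r @ r
    iterates = []
    for _ in range(max_iters):
        Kp = K @ p
        step = rs / (p @ Kp)
        alpha = alpha + step * p
        r = r - step * Kp
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
        iterates.append(alpha.copy())
    return iterates  # early stopping: pick the iterate with the best hold-out error
```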


2009 ◽  
Vol 2009 ◽  
pp. 1-8 ◽  
Author(s):  
Janet Myhre ◽  
Daniel R. Jeske ◽  
Michael Rennie ◽  
Yingtao Bi

A heteroscedastic linear regression model is developed from plausible assumptions that describe the time evolution of performance metrics for equipment. The inherent motivation for the associated weighted least squares analysis of the model is an essential and attractive selling point to engineers interested in equipment surveillance methodologies. A simple test for the significance of the heteroscedasticity suggested by a data set is derived, and a simulation study is used to evaluate the power of the test and compare it with several other applicable tests that were designed in different contexts. Tolerance intervals within the context of the model are derived, thus generalizing the well-known tolerance intervals for ordinary least squares regression. Use of the model and its associated analyses is illustrated with an aerospace application in which hundreds of electronic components are continuously monitored by an automated system that flags components suspected of unusual degradation patterns.
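A small sketch of a weighted least squares fit under one plausible variance model of this type; the assumption that Var(y_i) grows linearly with operating time t_i is ours, for illustration only.

```python
import numpy as np

def wls_fit(t, y):
    """Weighted LS for y = b0 + b1*t + e, assuming Var(e_i) proportional
    to t_i, so each point gets weight w_i = 1/t_i."""
    X = np.column_stack([np.ones_like(t), t])
    w = 1.0 / t
    XtW = X.T * w                       # row-scale X^T by the weights
    beta = np.linalg.solve(XtW @ X, XtW @ y)
    return beta                         # [intercept, slope]
```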


1979 ◽  
Vol 25 (3) ◽  
pp. 432-438 ◽  
Author(s):  
P J Cornbleet ◽  
N Gochman

Abstract The least-squares method is frequently used to calculate the slope and intercept of the best line through a set of data points. However, least-squares regression slopes and intercepts may be incorrect if the underlying assumptions of the least-squares model are not met. Two factors in particular that may result in incorrect least-squares regression coefficients are: (a) imprecision in the measurement of the independent (x-axis) variable and (b) inclusion of outliers in the data analysis. We compared the methods of Deming, Mandel, and Bartlett in estimating the known slope of a regression line when the independent variable is measured with imprecision, and found the method of Deming to be the most useful. Significant error in the least-squares slope estimate occurs when the ratio of the standard deviation of measurement of a single x value to the standard deviation of the x-data set exceeds 0.2. Errors in the least-squares coefficients attributable to outliers can be avoided by eliminating data points whose vertical distance from the regression line exceeds four times the standard error of the estimate.
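For reference, a standard implementation of the Deming slope and intercept, together with the outlier rule quoted above (drop points whose vertical residual exceeds four times the standard error of the estimate):

```python
import numpy as np

def deming_fit(x, y, delta=1.0):
    """Deming regression; delta is the ratio of the y- to x-measurement
    error variances (delta = 1 gives orthogonal regression)."""
    xb, yb = x.mean(), y.mean()
    sxx = ((x - xb) ** 2).sum()
    syy = ((y - yb) ** 2).sum()
    sxy = ((x - xb) * (y - yb)).sum()
    b = (syy - delta * sxx + np.sqrt((syy - delta * sxx) ** 2
                                     + 4 * delta * sxy ** 2)) / (2 * sxy)
    return b, yb - b * xb               # slope, intercept

def drop_outliers(x, y, k=4.0):
    """Remove points whose vertical distance from the least-squares line
    exceeds k times the standard error of the estimate."""
    b1, b0 = np.polyfit(x, y, 1)        # slope, intercept
    resid = y - (b0 + b1 * x)
    see = np.sqrt(resid @ resid / (len(x) - 2))
    keep = np.abs(resid) <= k * see
    return x[keep], y[keep]
```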


2011 ◽  
Vol 130-134 ◽  
pp. 730-733 ◽
Author(s):  
Narong Phothi ◽  
Somchai Prakancharoen

This research compares the data-imputation accuracy of unconstrained structural equation modeling (Uncon-SEM) and weighted least squares (WLS) regression, with accuracy measured by the mean magnitude of relative error (MMRE). The experimental data set is created using the waveform generator from the University of California, Irvine (UCI) repository; it contains 21 indicators (1,200 samples) and is divided into a training group (1,000 samples) and a testing group (200 samples). The training group is analyzed in terms of three main factors (F1, F2, and F3) to build the models. On the testing group, the Uncon-SEM method attains an MMRE of 34.29% (65.71% accuracy), whereas the WLS method attains an MMRE of 55.54% (44.46% accuracy). Uncon-SEM is thus more accurate than WLS by 21.25 percentage points.
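The accuracy measure itself is simple; a minimal sketch of MMRE, with accuracy reported as 100% minus MMRE to match the figures above:

```python
import numpy as np

def mmre(actual, predicted):
    """Mean magnitude of relative error, as a percentage:
    the average of |y - yhat| / |y| over the test set."""
    return 100.0 * np.mean(np.abs(actual - predicted) / np.abs(actual))

# accuracy in the sense used above: 100.0 - mmre(actual, predicted)
```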


2019 ◽  
Vol 18 (01) ◽  
pp. 49-78 ◽  
Author(s):  
Cheng Wang ◽  
Ting Hu

In this paper, we study an online algorithm for pairwise problems generated from the Tikhonov regularization scheme associated with the least squares loss function and a reproducing kernel Hilbert space (RKHS). This work establishes convergence for the last iterate of the online pairwise algorithm with polynomially decaying step sizes and varying regularization parameters. We show that the obtained error rate in the ℒ2-norm can be nearly optimal in the minimax sense under some mild conditions. Our analysis rests on a sharp estimate for the norms of the learning sequence, the characterization of the RKHS through its associated integral operators, and probability inequalities for random variables with values in a Hilbert space.
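A hedged one-pass sketch of such an online pairwise update in an RKHS; the pairing scheme, the Gaussian kernel, and the decay exponents are illustrative assumptions, not the paper's exact protocol.

```python
import numpy as np

def k(u, v, s=1.0):
    """Gaussian kernel between two sample points."""
    return np.exp(-np.sum((u - v) ** 2) / (2 * s ** 2))

def online_pairwise_ls(X, y, eta0=0.5, theta=0.6, lam0=0.1, mu=0.3, seed=0):
    """One pass over (X, y); step sizes eta_t = eta0 * t^-theta and
    regularizers lam_t = lam0 * t^-mu both decay polynomially."""
    rng = np.random.default_rng(seed)
    n = len(y)
    coef = np.zeros(n)                    # f_t = sum_j coef[j] * k(., X[j])
    for t in range(1, n):
        eta, lam = eta0 * t ** -theta, lam0 * t ** -mu
        i = rng.integers(t)               # pair the new point with an earlier one
        f_t = sum(coef[j] * k(X[t], X[j]) for j in range(t))
        f_i = sum(coef[j] * k(X[i], X[j]) for j in range(t))
        resid = (f_t - f_i) - (y[t] - y[i])
        coef[:t] *= 1.0 - eta * lam       # Tikhonov shrinkage
        coef[t] -= eta * resid            # gradient step on the pair loss
        coef[i] += eta * resid
    return coef                           # last iterate, as studied in the abstract
```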


2017 ◽  
Vol 15 (06) ◽  
pp. 815-836 ◽  
Author(s):  
Yulong Zhao ◽  
Jun Fan ◽  
Lei Shi

The ranking problem aims at learning real-valued functions that order instances, and it has attracted great interest in statistical learning theory. In this paper, we consider the regularized least squares ranking algorithm within the framework of reproducing kernel Hilbert spaces. In particular, we focus on the analysis of the generalization error of this ranking algorithm, and we improve the existing learning rates by virtue of an error decomposition technique from regression and Hoeffding's decomposition for U-statistics.
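A compact sketch of a regularized least squares ranking estimator of this form (the paper's exact normalization may differ, and the derivation assumes an invertible Gram matrix): minimizing (1/m²) Σ_{i,j} (f(x_i) − f(x_j) − (y_i − y_j))² + λ‖f‖²_K over f = Σ_j α_j K(·, x_j) reduces, via the identity Σ_{i,j}(a_i − a_j)² = 2 aᵀ(m·I − 11ᵀ)a, to a single linear solve.

```python
import numpy as np

def rls_rank(K, y, lam):
    """Solve the first-order condition of the pairwise objective:
    ((2/m^2) * L @ K + lam * I) alpha = (2/m^2) * L @ y,
    where L = m*I - 1 1^T encodes all pairwise differences."""
    m = len(y)
    L = m * np.eye(m) - np.ones((m, m))
    A = (2.0 / m ** 2) * L @ K + lam * np.eye(m)
    return np.linalg.solve(A, (2.0 / m ** 2) * (L @ y))
```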

