On a non-parametric estimator for a regression function from a Hilbert space

1995 ◽  
Vol 3 (2) ◽  
Author(s):  
P. S. KNOPOV ◽  
A. P. KNOPOV


Author(s):  
Fabio Sigrist

We introduce a novel boosting algorithm called ‘KTBoost’ which combines kernel boosting and tree boosting. In each boosting iteration, the algorithm adds either a regression tree or a reproducing kernel Hilbert space (RKHS) regression function to the ensemble of base learners. Intuitively, the idea is that discontinuous trees and continuous RKHS regression functions complement each other, and that this combination allows for better learning of functions that have parts with varying degrees of regularity, such as discontinuities and smooth parts. We empirically show that KTBoost significantly outperforms both tree and kernel boosting in terms of predictive accuracy in a comparison on a wide array of data sets.
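The kernel-plus-tree idea described above can be sketched in a few lines of numpy. This is a minimal illustrative toy, not the authors' KTBoost implementation: the depth-1 tree, the RBF kernel ridge base learner, and all hyperparameters (`gamma`, `lam`, `lr`, `n_iter`) are assumptions made here for the sketch. Each boosting iteration fits both candidate base learners to the current residuals and keeps whichever one reduces the squared training loss more.

```python
import numpy as np

def fit_stump(x, r):
    # Depth-1 regression tree (stump) fitted to residuals r on 1-D input x.
    best_sse, split, c_left, c_right = np.inf, None, r.mean(), r.mean()
    for s in np.unique(x)[:-1]:
        left, right = r[x <= s], r[x > s]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best_sse:
            best_sse, split, c_left, c_right = sse, s, left.mean(), right.mean()
    if split is None:  # degenerate design: fall back to a constant fit
        return lambda z: np.full(len(z), r.mean())
    return lambda z: np.where(z <= split, c_left, c_right)

def fit_kernel(x, r, gamma=10.0, lam=1e-2):
    # RBF kernel ridge regression: a smooth RKHS regression function.
    K = np.exp(-gamma * (x[:, None] - x[None, :]) ** 2)
    alpha = np.linalg.solve(K + lam * np.eye(len(x)), r)
    return lambda z: np.exp(-gamma * (z[:, None] - x[None, :]) ** 2) @ alpha

def ktboost_fit(x, y, n_iter=30, lr=0.5):
    # Greedy functional gradient boosting for squared loss: in each
    # iteration, add the base learner (stump or kernel) that better
    # fits the current residuals.
    base = float(y.mean())
    learners = []
    pred = np.full(len(y), base)
    for _ in range(n_iter):
        r = y - pred
        candidates = [fit_stump(x, r), fit_kernel(x, r)]
        f = min(candidates, key=lambda g: float(((r - g(x)) ** 2).sum()))
        learners.append(f)
        pred = pred + lr * f(x)
    def predict(z):
        out = np.full(len(z), base)
        for f in learners:
            out = out + lr * f(z)
        return out
    return predict
```

On data with both a jump and a smooth trend, the stump tends to win iterations near the discontinuity and the kernel learner elsewhere, which is exactly the complementarity the abstract describes.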


1985 ◽  
Vol 1 (1) ◽  
pp. 7-26 ◽  
Author(s):  
A. R. Bergstrom

This paper is concerned with the estimation of a nonlinear regression function which is not assumed to belong to a prespecified parametric family of functions. An orthogonal series estimator is proposed, and Hilbert space methods are used in the derivation of its properties and the proof of several convergence theorems. One of the main objectives of the paper is to provide the theoretical basis for a practical stopping rule which can be used for determining the number of Fourier coefficients to be estimated from a given sample.
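The orthogonal-series idea can be illustrated with a short sketch. This is an assumption-laden simplification, not Bergstrom's estimator: it assumes a uniform design on [0, 1] and a cosine basis, fixes the truncation point `n_terms` by hand, and omits the paper's data-driven stopping rule. Under a uniform design, E[y · φ_k(x)] equals the k-th Fourier coefficient of the regression function, so sample means estimate the coefficients.

```python
import numpy as np

def cosine_basis(k, t):
    # Orthonormal cosine basis on [0, 1].
    return np.ones_like(t) if k == 0 else np.sqrt(2.0) * np.cos(np.pi * k * t)

def series_estimator(x, y, n_terms):
    # For x uniform on [0, 1], E[y * phi_k(x)] is the k-th Fourier
    # coefficient of f, so each coefficient is estimated by a sample mean.
    coefs = [float(np.mean(y * cosine_basis(k, x))) for k in range(n_terms + 1)]
    return lambda t: sum(c * cosine_basis(k, t) for k, c in enumerate(coefs))
```

The truncation point plays the role of a smoothing parameter: too few terms underfit, too many let coefficient noise dominate, which is why a stopping rule of the kind the paper develops is needed in practice.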


Author(s):  
Dafydd Evans ◽  
Antonia J Jones

The aim of non-parametric regression is to model the behaviour of a response vector Y in terms of an explanatory vector X, based only on a finite set of empirical observations. This is usually performed under the additive hypothesis Y = f(X) + R, where f(X) = E(Y | X) is the true regression function and R is the true residual variable. Subject to a Lipschitz condition on f, we propose new estimators for the moments (scalar response) and covariance (vector response) of the residual distribution, derive their asymptotic properties and discuss their application in practical data analysis.
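To see why a Lipschitz condition on f makes residual moments estimable without first estimating f, consider the classical first-difference (Rice) estimator of the residual variance. This is an illustration of the underlying idea, not one of the estimators proposed in the paper: after sorting by x, adjacent responses differ mainly through their residuals when the design is dense.

```python
import numpy as np

def rice_variance(x, y):
    # Sort by x; if f is Lipschitz and the design points are dense,
    # y[i+1] - y[i] is approximately R[i+1] - R[i], whose mean square
    # is 2 * Var(R). Halving the mean squared difference recovers Var(R).
    order = np.argsort(x)
    d = np.diff(y[order])
    return float((d ** 2).mean() / 2.0)
```

The Lipschitz bound on f controls the bias term f(x[i+1]) - f(x[i]), which shrinks with the spacing of the design points, so the estimator is consistent as the sample grows.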


2021 ◽  
Author(s):  
Unni Krishnan R Nair ◽  
Anish Gupta ◽  
D. A. Sasi Kiran ◽  
Ajay Shrihari ◽  
Vanshil Shah ◽  
...  

Author(s):  
Dadang Priyanto ◽  
Muhammad Zarlis ◽  
Herman Mawengkang ◽  
Syahril Efendi

Data mining is the process of finding patterns and knowledge in large data sets. In general, data mining tasks fall into two categories, namely descriptive data mining and predictive data mining. Several mathematical functions can be used in the data mining process, among them classification and regression. Regression analysis, also called prediction analysis, is a statistical method widely used to investigate and model relationships between variables. The regression curve can also be estimated nonparametrically; one well-known method in non-parametric regression is MARS (Multivariate Adaptive Regression Splines), which was developed to overcome weaknesses of linear regression. Combining the stepwise backward algorithm of MARS with a conic quadratic programming (CQP) framework yields a newer method called CMARS (Conic Multivariate Adaptive Regression Splines), which is able to model high-dimensional data with nonlinear structure. The flexibility of the CMARS model makes it suitable for earthquake prediction analysis, in particular for Lombok, West Nusa Tenggara. Test results show that a mathematical model of four independent variables gives significant results for the dependent variable, peak ground acceleration (PGA). The contributions of the independent variables are epicentral distance 100%, magnitude 31.1%, temperature at the event location 5.5%, and depth 3.5%.
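The building block of MARS (and hence CMARS) is the pair of hinge functions max(0, x − t) and max(0, t − x) placed at knot locations t. The sketch below is a deliberately simplified illustration with knots fixed in advance and coefficients found by ordinary least squares; real MARS selects knots adaptively with forward/backward passes, and CMARS replaces the backward pass with a conic quadratic program, neither of which is implemented here.

```python
import numpy as np

def hinge_basis(x, knots):
    # MARS builds its model from pairs of hinge functions
    # max(0, x - t) and max(0, t - x) at each knot t, plus an intercept.
    cols = [np.ones_like(x)]
    for t in knots:
        cols.append(np.maximum(0.0, x - t))
        cols.append(np.maximum(0.0, t - x))
    return np.column_stack(cols)

def fit_hinge_model(x, y, knots):
    # Least-squares fit in the hinge basis. Knots are fixed here for
    # illustration; MARS chooses them adaptively from the data.
    B = hinge_basis(x, knots)
    beta, *_ = np.linalg.lstsq(B, y, rcond=None)
    return lambda z: hinge_basis(z, knots) @ beta
```

Because the basis is piecewise linear, a function with a kink at a knot location is represented exactly, which is what makes hinge expansions flexible for data with locally varying behaviour.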

