Optimum kernels

Author(s):  
Jitka Poměnková

Kernel smoothers are among the most popular nonparametric functional estimates. They provide a simple way of finding structure in data. Kernel smoothing can be applied very well to the regression model. In the context of kernel estimates of a regression function, the choice of a kernel can be investigated from different points of view. The main idea of this paper is to present the construction of the optimal kernel and the edge optimal kernel by means of the Gegenbauer and Legendre polynomials.
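As a hedged illustration of what "optimal kernel" means here, the sketch below numerically checks the moment conditions of the Epanechnikov kernel K(x) = 3/4 (1 - x^2) on [-1, 1], the classical optimal kernel of order (0, 2); the kernel choice and the numerical check are mine, not taken from the paper.

```python
# Hedged numerical check (not from the paper): the Epanechnikov kernel
# K(x) = 3/4 * (1 - x^2) on [-1, 1] is the classical optimal kernel of
# order (0, 2); its moment conditions are verified by simple Riemann sums.
import numpy as np

def epanechnikov(x):
    """Optimal kernel of order (0, 2), supported on [-1, 1]."""
    return np.where(np.abs(x) <= 1.0, 0.75 * (1.0 - x**2), 0.0)

x = np.linspace(-1.0, 1.0, 200_001)
dx = x[1] - x[0]
K = epanechnikov(x)

moment0 = (K * dx).sum()          # should be ~1   (integrates to one)
moment1 = (x * K * dx).sum()      # should be ~0   (symmetry)
moment2 = (x**2 * K * dx).sum()   # should be ~0.2 (positive second moment)
print(moment0, moment1, moment2)
```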

Author(s):  
Jitka Poměnková

Kernel smoothers are among the most popular nonparametric functional estimates. They provide a simple way of finding structure in data. The idea of kernel smoothing can be applied to a simple fixed design regression model. This article focuses on kernel smoothing for the fixed design regression model with three types of estimators: the Gasser-Müller estimator, the Nadaraya-Watson estimator, and the local linear estimator. At the end of the article, figures illustrating the described estimators on simulated and real data sets are shown.
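For concreteness, here is a minimal sketch of one of the three estimators mentioned above, the Nadaraya-Watson estimator, on a simulated fixed design; the Gaussian kernel and the bandwidth h = 0.05 are my own illustrative choices, not the paper's.

```python
# Minimal sketch of the Nadaraya-Watson estimator on a simulated fixed design;
# the Gaussian kernel and bandwidth h are illustrative choices, not the paper's.
import numpy as np

def nadaraya_watson(x_eval, x, y, h):
    """m_hat(x) = sum_i K((x - x_i)/h) y_i / sum_i K((x - x_i)/h)."""
    u = (x_eval[:, None] - x[None, :]) / h
    K = np.exp(-0.5 * u**2)                 # Gaussian kernel weights
    return (K * y).sum(axis=1) / K.sum(axis=1)

rng = np.random.default_rng(0)
n = 200
x = np.arange(1, n + 1) / n                 # fixed design x_i = i/n
y = np.sin(2 * np.pi * x) + 0.2 * rng.standard_normal(n)
m_hat = nadaraya_watson(np.linspace(0.0, 1.0, 101), x, y, h=0.05)
```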


2018 ◽  
Vol 15 (2) ◽  
pp. 20 ◽  
Author(s):  
Budi Lestari

Abstract A bi-response nonparametric regression model is a regression model that describes the relationship pattern between two response variables and one or more predictor variables, where the first and second responses are correlated with each other. In this paper, we discuss the estimation of the regression functions in the bi-response nonparametric regression model using two different estimation techniques, namely smoothing spline and kernel. This study shows that, for both smoothing spline and kernel, the obtained estimator of the regression function is linear in the observations. In addition, the estimators obtained from the two techniques differ mathematically only in their smoothing matrices. Keywords: kernel estimator, smoothing spline estimator, regression function, bi-response nonparametric regression model.
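A minimal single-response sketch (not the paper's bi-response estimator) of the point made above: a kernel fit is linear in the observations, y_hat = S y, and a smoothing-spline fit has the same form with a different smoother matrix S.

```python
# Single-response sketch (not the bi-response estimator of the paper):
# a kernel smoother is linear in the observations, y_hat = S @ y; a smoothing
# spline has the same form with a different smoother matrix S.
import numpy as np

def kernel_smoother_matrix(x, h):
    """Row-normalized Nadaraya-Watson weights: S[i, j] = w_j(x_i)."""
    u = (x[:, None] - x[None, :]) / h
    K = np.exp(-0.5 * u**2)
    return K / K.sum(axis=1, keepdims=True)

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(size=50))
y = np.cos(3.0 * x) + 0.1 * rng.standard_normal(50)

S = kernel_smoother_matrix(x, h=0.1)
y_hat = S @ y                               # linear in the observations
```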


Stats ◽  
2020 ◽  
Vol 3 (2) ◽  
pp. 120-136
Author(s):  
Ersin Yılmaz ◽  
Syed Ejaz Ahmed ◽  
Dursun Aydın

This paper aims to solve the problem of fitting a nonparametric regression function with right-censored data. In the literature, censoring in the response variable is generally handled by a synthetic data transformation based on the Kaplan–Meier estimator. In the context of synthetic data, there have been different studies on the estimation of right-censored nonparametric regression models based on smoothing splines, regression splines, kernel smoothing, local polynomials, and so on. It should be emphasized that the synthetic data transformation manipulates the observations because it assigns zero values to censored data points and inflates the uncensored ones. Thus, an irregularly distributed dataset is obtained. We claim that adaptive spline (A-spline) regression has the potential to deal with this irregular dataset more easily than the smoothing techniques mentioned here, due to the freedom to determine the degree of the spline, as well as the number and location of the knots. The theoretical properties of A-splines with synthetic data are detailed in this paper. Additionally, we support our claim with numerical studies, including a simulation study and a real-world data example.
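A hedged sketch of the synthetic data transformation referred to above, in one common form (Koul-Susarla-Van Ryzin type): censored responses are set to zero and uncensored ones are inflated by the Kaplan-Meier estimate of the censoring survival function; the exact weighting used in the paper may differ.

```python
# Hedged sketch of a Kaplan-Meier based synthetic-data transform
# (Koul-Susarla-Van Ryzin type): y*_i = delta_i * z_i / G_hat(z_i), where
# G_hat estimates the censoring survival P(C > t). Censored points become 0,
# which yields the irregular data set described above. Ties are ignored.
import numpy as np

def km_censoring_survival(z, delta):
    """Kaplan-Meier estimate of the censoring survival, evaluated at each z_i.
    Censoring 'events' are the observations with delta_i = 0."""
    order = np.argsort(z)
    d_sorted = delta[order]
    n = len(z)
    at_risk = n - np.arange(n)                       # subjects still at risk
    factors = np.where(d_sorted == 0, 1.0 - 1.0 / at_risk, 1.0)
    surv = np.empty(n)
    surv[order] = np.cumprod(factors)
    return surv

def synthetic_response(z, delta):
    """z: observed responses, delta: 1 = uncensored, 0 = right-censored."""
    G = np.clip(km_censoring_survival(z, delta), 1e-6, None)
    return delta * z / G
```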


2014 ◽  
Vol 2014 ◽  
pp. 1-7
Author(s):  
Christophe Chesneau

We investigate the estimation of a multiplicative separable regression function from a bidimensional nonparametric regression model with random design. We present a general estimator for this problem and study its mean integrated squared error (MISE) properties. A wavelet version of this estimator is developed. In some situations, we prove that it attains the standard unidimensional rate of convergence under the MISE over Besov balls.
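Assumed notation for the setting described above (not quoted from the paper): a bidimensional random-design sample with a multiplicative separable regression function and the MISE risk.

```latex
% Assumed notation for the setting above (not quoted from the paper):
% a bidimensional random-design sample (X_{1,i}, X_{2,i}, Y_i), i = 1, ..., n.
\[
  Y_i = f(X_{1,i}, X_{2,i}) + \varepsilon_i ,
  \qquad
  f(x, y) = u(x)\, v(y) ,
\]
\[
  \operatorname{MISE}(\hat f)
  = \mathbb{E} \iint \bigl( \hat f(x, y) - f(x, y) \bigr)^{2} \, dx \, dy .
\]
```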


Author(s):  
Loc Nguyen

The expectation maximization (EM) algorithm is a powerful mathematical tool for estimating statistical parameters when the data sample contains a hidden part and an observed part. EM is applied to learn a finite mixture model, in which the overall distribution of the observed variable is a weighted sum of partial distributions; the weight of each partial distribution is specified by the probability of the hidden variable. An application of the mixture model is soft clustering, in which a cluster is modeled by the hidden variable: each data point can be assigned to more than one cluster, and the degree of such assignment is represented by the probability of the hidden variable. However, this probability in the traditional mixture model is simplified to a parameter, which can cause a loss of valuable information. Therefore, in this research I propose a so-called conditional mixture model (CMM) in which the probability of the hidden variable is modeled as a full probability density function (PDF) with its own parameters. CMM aims to extend the mixture model. I also propose an application of CMM called the adaptive regressive model (ARM). A traditional regression model is effective when the data sample is scattered evenly. If the data points are grouped into clusters, the regression model tries to learn a single unified regression function that passes through all data points. Obviously, such a unified function is not effective for evaluating the response variable based on grouped data points. The term "adaptive" in ARM means that ARM solves this ineffectiveness by first selecting the best cluster of data points and then evaluating the response variable within that best cluster. In other words, ARM reduces the estimation space of the regression model so as to gain high accuracy in calculation.
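The following sketch illustrates only the "select the best cluster, then regress within it" idea in a simplified form, using an EM-fitted Gaussian mixture and per-cluster linear regressions; it is not the author's CMM/ARM construction, and all model choices below are my own.

```python
# Simplified illustration of "pick the best cluster, then regress within it";
# this is NOT the author's CMM/ARM construction. Clusters come from an
# EM-fitted Gaussian mixture; one linear regression is fitted per cluster.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X1 = rng.normal(0.0, 1.0, size=(100, 1))
y1 = 2.0 * X1[:, 0] + rng.normal(0.0, 0.1, 100)
X2 = rng.normal(6.0, 1.0, size=(100, 1))
y2 = -1.0 * X2[:, 0] + 5.0 + rng.normal(0.0, 0.1, 100)
X, y = np.vstack([X1, X2]), np.concatenate([y1, y2])

gmm = GaussianMixture(n_components=2, random_state=0).fit(X)   # EM under the hood
labels = gmm.predict(X)
models = {k: LinearRegression().fit(X[labels == k], y[labels == k]) for k in range(2)}

def arm_like_predict(x_new):
    """Evaluate the response inside the most probable cluster of each point."""
    best = gmm.predict(x_new)
    return np.array([models[k].predict(x_new[i:i + 1])[0] for i, k in enumerate(best)])

print(arm_like_predict(np.array([[0.5], [6.5]])))   # roughly [1.0, -1.5]
```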


2021 ◽  
Vol 26 (3) ◽  
Author(s):  
Muntadher Almusaedi ◽  
Ahmad Naeem Flaih

Bayesian regression analysis has gained great importance in recent years, especially for regularization methods such as ridge, lasso, adaptive lasso, and elastic net, where the choice of the prior distribution of the parameter of interest is the main idea of the Bayesian regression analysis. By penalizing the Bayesian regression model, the variance of the estimators is notably reduced and the bias becomes smaller. The tradeoff between the bias and the variance of the penalized Bayesian regression estimator consequently produces a more interpretable model with higher prediction accuracy. In this paper, we propose a new hierarchical model for Bayesian quantile regression by employing the scale mixture of normals mixed with a truncated gamma distribution, as stated by Li and Lin (2010), as the Laplace prior distribution. New Gibbs sampling algorithms are therefore introduced. A comparison is made with the classical quantile regression model and with the lasso quantile regression model by conducting simulation studies. Our model is comparable and gives better results.
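For context, one classical way to write the Laplace prior as a scale mixture of normals is the exponential-mixing identity below (Andrews and Mallows); the paper instead uses the truncated-gamma mixing form of Li and Lin (2010), so this is background rather than the proposed hierarchy.

```latex
% Classical exponential-mixing representation of the Laplace prior as a scale
% mixture of normals (Andrews & Mallows); the paper uses the truncated-gamma
% mixing variant of Li and Lin (2010), so this identity is background only.
\[
  \frac{\lambda}{2}\, e^{-\lambda |\beta|}
  = \int_{0}^{\infty}
      \frac{1}{\sqrt{2\pi s}}\; e^{-\beta^{2}/(2s)}
      \cdot \frac{\lambda^{2}}{2}\; e^{-\lambda^{2} s / 2}\, \mathrm{d}s ,
\]
% which is what allows conditionally normal draws of the coefficients
% inside a Gibbs sampler.
```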


2021 ◽  
Vol 19 (1) ◽  
pp. 1056-1068
Author(s):  
Yingxia Chen

Abstract In this paper, we consider the regression model with fixed design: $Y_i = g(x_i) + \varepsilon_i$, $1 \le i \le n$, where $\{x_i\}$ are the nonrandom design points, $\{\varepsilon_i\}$ is a sequence of martingale differences, and $g$ is an unknown function. A nonparametric estimator $g_n(x)$ of $g(x)$ is introduced and its strong convergence properties are established.
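A hedged sketch of one common weighted estimator of this form, the Priestley-Chao estimator $g_n(x) = h^{-1} \sum_i (x_i - x_{i-1}) K((x - x_i)/h) Y_i$; the paper's exact weights $w_{ni}(x)$ may differ, and the kernel and bandwidth below are illustrative.

```python
# Hedged sketch of one common estimator of this form, the Priestley-Chao
# estimator g_n(x) = (1/h) * sum_i (x_i - x_{i-1}) K((x - x_i)/h) Y_i;
# the paper's exact weights w_{ni}(x) may differ. Gaussian kernel, toy g.
import numpy as np

def priestley_chao(x_eval, x, y, h):
    dx = np.diff(x, prepend=0.0)                    # spacings x_i - x_{i-1}
    u = (x_eval[:, None] - x[None, :]) / h
    K = np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)  # Gaussian kernel
    return (K * (dx * y)[None, :]).sum(axis=1) / h

rng = np.random.default_rng(2)
n = 300
x = np.arange(1, n + 1) / n                         # nonrandom design points
y = np.exp(-x) + 0.05 * rng.standard_normal(n)      # g(x) = exp(-x) plus noise
g_hat = priestley_chao(np.linspace(0.05, 0.95, 19), x, y, h=0.08)
```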


2019 ◽  
Vol 09 (04) ◽  
pp. 2150001
Author(s):  
Yong He ◽  
Hao Sun ◽  
Jiadong Ji ◽  
Xinsheng Zhang

In this paper, we propose an extremely flexible semi-parametric regression model called the Multi-response Trans-Elliptical Regression (MTER) model, which can capture the heavy-tail characteristic and tail dependence of both responses and covariates. We investigate the feature screening procedure for the MTER model, in which Kendall's tau-based canonical correlation estimators are proposed to characterize the correlation between each transformed predictor and the multivariate transformed responses. The main idea is to substitute the classical canonical correlation ranking index in [X. B. Kong, Z. Liu, Y. Yao and W. Zhou, Sure screening by ranking the canonical correlations, TEST 26 (2017) 1–25] with a carefully constructed non-parametric version. The sure screening property and ranking consistency property are established for the proposed procedure. Simulation results show that the proposed method is much more powerful in distinguishing the informative features from the unimportant ones than some state-of-the-art competitors, especially for heavy-tailed distributions and high-dimensional responses. Finally, a real data example is given to illustrate the effectiveness of the proposed procedure.
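A simplified screening sketch in the spirit of the procedure above, ranking predictors by their largest absolute marginal Kendall's tau against the responses; the paper's index is a Kendall-tau-based canonical correlation, so this is an assumption-laden stand-in, not the proposed statistic.

```python
# Simplified marginal-screening sketch (NOT the paper's Kendall-tau-based
# canonical correlation index): rank each predictor by its largest absolute
# Kendall's tau against any response and keep the top-d predictors.
import numpy as np
from scipy.stats import kendalltau

def kendall_screen(X, Y, d):
    """X: (n, p) predictors, Y: (n, q) responses; return indices of d kept predictors."""
    p, q = X.shape[1], Y.shape[1]
    score = np.zeros(p)
    for j in range(p):
        taus = []
        for k in range(q):
            tau, _ = kendalltau(X[:, j], Y[:, k])
            taus.append(abs(tau))
        score[j] = max(taus)
    return np.argsort(score)[::-1][:d]

rng = np.random.default_rng(3)
X = rng.standard_normal((100, 50))
Y = np.column_stack([X[:, 0] + 0.1 * rng.standard_normal(100),
                     X[:, 3] ** 3 + 0.1 * rng.standard_normal(100)])
print(kendall_screen(X, Y, d=5))        # should tend to contain columns 0 and 3
```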

