THE UNIQUENESS OF CROSS-VALIDATION SELECTED SMOOTHING PARAMETERS IN KERNEL ESTIMATION OF NONPARAMETRIC MODELS

2005 ◽  
Vol 21 (05) ◽  
Author(s):  
Qi Li ◽  
Jianxin Zhou


Author(s):  
Xiao Chen ◽  
Ning Wang

A computer prediction model is needed for process characterization and optimization. This paper describes an approach for modeling a delayed coking process using a generalized regression neural network (GRNN) and a double-chain DNA genetic algorithm (dc-DNAGA). In a GRNN, the smoothing parameters have a significant effect on the performance of the network. This paper presents an improved GA, the dc-DNAGA, to optimize the smoothing parameters of the GRNN. The dc-DNAGA is inspired by biological DNA: the smoothing parameters are coded in double-chain chromosomes, and modified genetic operators are employed to improve the global search ability of the GA. To test the performance of the constructed model, it is used to predict the output for test data not included in the training data. Eight cross-validation results show that, compared with other reported methods, the proposed technique predicts new data more accurately.
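The GRNN described above is essentially a Nadaraya-Watson average with a Gaussian kernel whose width is the smoothing parameter. Below is a minimal sketch, not the authors' dc-DNAGA implementation: it shows the GRNN prediction step for a single smoothing parameter sigma and a leave-one-out cross-validation error of the kind a genetic algorithm could use as its fitness when searching over candidate sigmas. All function names are illustrative.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma):
    """GRNN prediction: Gaussian-kernel weighted average of training targets."""
    # Squared Euclidean distances between every query point and every training point
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2.0 * sigma ** 2))               # kernel weights
    return (w @ y_train) / (w.sum(axis=1) + 1e-12)     # weighted average per query point

def loo_cv_error(X, y, sigma):
    """Leave-one-out CV error; a GA would minimise this over candidate sigmas."""
    errs = []
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        pred = grnn_predict(X[mask], y[mask], X[i:i + 1], sigma)
        errs.append((pred[0] - y[i]) ** 2)
    return float(np.mean(errs))
```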


1985 ◽  
Vol 24 (04) ◽  
pp. 218-224
Author(s):  
H.-G. Müller ◽  
P. Ihm

Summary
Kernel estimation procedures for the comparison of samples of longitudinal clinical curves are discussed. We show how samples of curves can be compared by means of "empirical parameters" and how a typical longitudinal curve can be defined by "longitudinal averaging". This method makes it possible to summarize the information in a sample of monotone curves and to derive local confidence bands for the longitudinal mean curve. Under specific assumptions, this procedure is strongly consistent. As an example, we analyze and compare the long-term behavior of two types of heart pacemakers. Special consideration is given to the practically relevant choice of kernels and bandwidths (smoothing parameters).
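As an illustration of the two steps named in the summary, here is a rough sketch (assumed data layout, not the authors' code): each subject's curve is kernel-smoothed onto a common time grid, the smoothed curves are averaged across subjects ("longitudinal averaging"), and a simple pointwise normal band is attached; the authors' local confidence bands are constructed differently.

```python
import numpy as np

def kernel_smooth(t, y, grid, bandwidth):
    """Nadaraya-Watson estimate of one subject's curve on a common time grid."""
    u = (grid[:, None] - t[None, :]) / bandwidth
    w = np.maximum(0.75 * (1.0 - u ** 2), 0.0)         # Epanechnikov kernel
    return (w @ y) / (w.sum(axis=1) + 1e-12)

def longitudinal_mean(curves, grid, bandwidth, z=1.96):
    """Average the smoothed curves and attach a pointwise 95% band."""
    smoothed = np.array([kernel_smooth(t, y, grid, bandwidth) for t, y in curves])
    mean = smoothed.mean(axis=0)
    se = smoothed.std(axis=0, ddof=1) / np.sqrt(len(curves))
    return mean, mean - z * se, mean + z * se
```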


2012 ◽  
Vol 11 (1) ◽  
pp. 64
Author(s):  
Sri Rezeki ◽  
Subanar Subanar ◽  
Suryo Guritno

Model selection in neural networks can be guided by statistical procedures, such as hypothesis tests, information criteria, and cross-validation. Taking a statistical perspective is especially important for nonparametric models like neural networks, because the reason for applying them is the lack of knowledge about an adequate functional form. Many researchers have developed model selection strategies for neural networks that are based on statistical concepts. In this paper, we focus on model evaluation by implementing a statistical significance test. We use the Wald test to evaluate the relevance of the parameters in the network for a classification problem. Parameters with no significant influence on any of the network outputs have to be removed. In general, the results show that the Wald test works properly to determine the significance of each weight of the selected model. An empirical study using the Iris data shows that all parameters in the network are significant, except the bias at the first output neuron.
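A hedged sketch of the pruning rule described above: the Wald statistic for a single network weight, computed from an estimated covariance matrix of the fitted weights (how that covariance is obtained, e.g. from the Hessian of the loss, is assumed here and not shown).

```python
import numpy as np
from scipy.stats import chi2

def wald_test(theta_hat, cov_hat, index, alpha=0.05):
    """Test H0: theta[index] = 0; weights that fail to reject are pruning candidates."""
    w_stat = theta_hat[index] ** 2 / cov_hat[index, index]
    p_value = 1.0 - chi2.cdf(w_stat, df=1)
    return w_stat, p_value, p_value < alpha            # True => the weight is significant
```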


1994 ◽  
Vol 10 (2) ◽  
pp. 1-21 ◽  
Author(s):  
Whitney K. Newey

Econometric applications of kernel estimators are proliferating, suggesting the need for convenient variance estimates and conditions for asymptotic normality. This paper develops a general “delta-method” variance estimator for functionals of kernel estimators. Also, regularity conditions for asymptotic normality are given, along with a guide to verify them for particular estimators. The general results are applied to partial means, which are averages of kernel estimators over some of their arguments with other arguments held fixed. Partial means have econometric applications, such as consumer surplus estimation, and are useful for estimation of additive nonparametric models.
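To make the partial-mean idea concrete, here is a rough sketch (not Newey's published estimator): a bivariate Nadaraya-Watson regression is averaged over the sample values of its second argument while the first argument is held fixed.

```python
import numpy as np

def nw_regression(X, y, x0, h):
    """Product Gaussian kernel regression estimate at a single point x0."""
    u = (X - x0) / h
    w = np.exp(-0.5 * (u ** 2).sum(axis=1))
    return (w * y).sum() / (w.sum() + 1e-12)

def partial_mean(X, y, x1_fixed, h):
    """Average m_hat(x1_fixed, x2_i) over the observed values of the second regressor."""
    return np.mean([nw_regression(X, y, np.array([x1_fixed, x2]), h) for x2 in X[:, 1]])
```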


2011 ◽  
Vol 3 (1) ◽  
pp. 9
Author(s):  
Agustini Tripena Br. Sb.

This paper discusses the selection of smoothing parameters for linear spline regression estimation on data of electrical voltage differences in wastewater. The selection methods are based on the mean square error (MSE) and generalized cross-validation (GCV). The results show that, in selecting the smoothing parameter, the mean square error (MSE) method gives a smaller value than the generalized cross-validation (GCV) method. This means that, for our data, MSE is the better selection method for the smoothing parameter in linear spline regression estimation.
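A minimal sketch of the two criteria being compared, assuming a truncated-power linear spline basis with a candidate knot set playing the role of the smoothing parameter; both scores are computed from the hat matrix of the least-squares fit so that candidate knot sets can be ranked.

```python
import numpy as np

def linear_spline_design(x, knots):
    """Design matrix [1, x, (x - k1)_+, ..., (x - kK)_+] for a linear spline."""
    cols = [np.ones_like(x), x] + [np.maximum(x - k, 0.0) for k in knots]
    return np.column_stack(cols)

def mse_and_gcv(x, y, knots):
    """MSE and GCV of the least-squares linear spline fit with the given knots."""
    B = linear_spline_design(x, knots)
    H = B @ np.linalg.pinv(B)                          # hat matrix of the fit
    resid = y - H @ y
    n = len(y)
    mse = np.mean(resid ** 2)
    gcv = n * np.sum(resid ** 2) / (n - np.trace(H)) ** 2
    return mse, gcv
```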


2009 ◽  
Vol 25 (1) ◽  
pp. 1-42 ◽  
Author(s):  
Desheng Ouyang ◽  
Qi Li ◽  
Jeffrey S. Racine

We consider the problem of estimating a nonparametric regression model containing categorical regressors only. We investigate the theoretical properties of least squares cross-validated smoothing parameter selection, establish the rate of convergence (to zero) of the smoothing parameters for relevant regressors, and show that there is a high probability that the smoothing parameters for irrelevant regressors converge to their upper bound values, thereby automatically smoothing out the irrelevant regressors. A small-scale simulation study shows that the proposed cross-validation-based estimator performs well in finite-sample settings.
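A sketch of the least-squares cross-validation idea for purely categorical regressors, using a simple discrete kernel (weight 1 when categories match, lambda_d otherwise, with lambda_d in [0, 1]); a lambda_d driven to its upper bound 1 gives every category equal weight and so smooths the d-th regressor out, as described above. The brute-force grid search is for illustration only.

```python
import numpy as np
from itertools import product

def lscv_objective(X, y, lams):
    """Leave-one-out least-squares CV criterion for given smoothing parameters."""
    n = len(y)
    W = np.ones((n, n))
    for d, lam in enumerate(lams):                     # product discrete kernel
        W *= np.where(X[:, None, d] == X[None, :, d], 1.0, lam)
    np.fill_diagonal(W, 0.0)                           # leave-one-out
    y_hat = (W @ y) / (W.sum(axis=1) + 1e-12)
    return np.mean((y - y_hat) ** 2)

def select_lambdas(X, y, grid=np.linspace(0.0, 1.0, 11)):
    """Brute-force search over a grid of candidate smoothing parameters."""
    return min(product(grid, repeat=X.shape[1]), key=lambda l: lscv_objective(X, y, l))
```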


2021 ◽  
Vol 2123 (1) ◽  
pp. 012035
Author(s):  
Andi Tenri Ampa ◽  
I Nyoman Budiantara ◽  
Ismaini Zain

Abstract
In this article, we propose a new method for selecting smoothing parameters in semiparametric regression. The method is used in semiparametric regression estimation where the nonparametric component is approximated partly by a multivariable Fourier series and partly by a multivariable kernel. The smoothing parameters are selected with Generalized Cross-Validation (GCV). To assess the performance of this method, it is applied to drinking water quality data from the Regional Drinking Water Company (PDAM) Surabaya, using a Fourier series with trend and a Gaussian kernel. The results show that the method performs well in selecting the optimal smoothing parameters.
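For reference, this is the GCV criterion in its generic form for any linear smoother y_hat = H(theta) y; in the setting above, theta would collect the Fourier-series parameters and the Gaussian-kernel bandwidth, and the construction of H(theta) itself is omitted. This is an illustrative sketch, not the authors' implementation.

```python
import numpy as np

def gcv(y, H):
    """Generalized cross-validation score for a linear smoother with hat matrix H."""
    n = len(y)
    resid = y - H @ y
    return (np.sum(resid ** 2) / n) / (np.trace(np.eye(n) - H) / n) ** 2

# The optimal smoothing parameters minimise gcv(y, H(theta)) over a candidate grid.
```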

