Integrated Squared Error
Recently Published Documents


TOTAL DOCUMENTS: 63 (five years: 9)

H-INDEX: 11 (five years: 0)

2021 ◽ Vol 7 (1) ◽ pp. 28
Author(s): Rebeca Peláez Suárez ◽ Ricardo Cao Abad ◽ Juan M. Vilar Fernández

This work proposes a resampling technique to approximate the smoothing parameter of Beran’s estimator. It is based on resampling by the smoothed bootstrap and minimising the bootstrap approximation of the mean integrated squared error to find the bootstrap bandwidth. The behaviour of this method has been tested by simulation on several models. Bootstrap confidence intervals are also addressed in this research and their performance is analysed in the simulation study.
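To make the recipe concrete, here is a minimal sketch of smoothed-bootstrap bandwidth selection by minimising a Monte Carlo approximation of the bootstrap MISE. It is not the authors' implementation: for readability an ordinary (uncensored) Gaussian kernel density estimator stands in for Beran's estimator for censored data, and all function and parameter names are illustrative.

```python
# Minimal sketch, not the authors' implementation: smoothed-bootstrap bandwidth
# selection by minimising a Monte Carlo approximation of the bootstrap MISE.
# An ordinary Gaussian KDE stands in for Beran's estimator for censored data.
import numpy as np

def kde(x_grid, data, h):
    """Gaussian kernel density estimate evaluated on x_grid."""
    u = (x_grid[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

def bootstrap_bandwidth(data, candidate_h, pilot_h, n_boot=200, seed=None):
    """Return the candidate bandwidth minimising the bootstrap MISE approximation."""
    rng = np.random.default_rng(seed)
    n = len(data)
    x_grid = np.linspace(data.min(), data.max(), 200)
    dx = x_grid[1] - x_grid[0]
    f_pilot = kde(x_grid, data, pilot_h)  # pilot estimate playing the role of the "truth"
    best_h, best_mise = None, np.inf
    for h in candidate_h:
        ise_sum = 0.0
        for _ in range(n_boot):
            # smoothed bootstrap: resample the data and add kernel noise at the pilot scale
            boot = rng.choice(data, size=n, replace=True) + pilot_h * rng.standard_normal(n)
            diff = kde(x_grid, boot, h) - f_pilot
            ise_sum += np.sum(diff**2) * dx   # Riemann-sum approximation of the ISE
        mise = ise_sum / n_boot               # bootstrap MISE approximation
        if mise < best_mise:
            best_h, best_mise = h, mise
    return best_h
```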


Mathematics ◽ 2021 ◽ Vol 9 (16) ◽ pp. 2004
Author(s): Yi Jin ◽ Yulin He ◽ Defa Huang

The purpose of a kernel density estimator (KDE) is to recover the underlying probability density function (p.d.f.) of a given dataset. The key to training a KDE is choosing the optimal bandwidth, or Parzen window. In the fixed KDE (FKDE), all data points share one bandwidth (a scalar for univariate KDE, a vector for multivariate KDE). In this paper, we propose an improved variable KDE (IVKDE) that determines an optimal bandwidth for each data point in the given dataset based on the integrated squared error (ISE) criterion with an L2 regularization term. An effective optimization algorithm is developed to solve the resulting objective function. We compare the estimation performance of IVKDE with that of FKDE and of VKDE based on the ISE criterion without L2 regularization on four univariate and four multivariate probability distributions. The experimental results show that IVKDE obtains lower estimation errors, demonstrating its effectiveness.
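As a hedged sketch of the kind of objective involved (the paper's exact formulation may differ): with a per-point bandwidth h_i attached to each observation X_i, a data-based ISE criterion of least-squares cross-validation type with an L2 penalty reads

\[
\widehat{\mathrm{ISE}}(h_1,\dots,h_n)
= \int \hat f_{h}(x)^2\,dx \;-\; \frac{2}{n}\sum_{i=1}^{n} \hat f_{h,-i}(X_i)
\;+\; \lambda \sum_{i=1}^{n} h_i^{2},
\qquad
\hat f_{h}(x) = \frac{1}{n}\sum_{i=1}^{n} \frac{1}{h_i}\,K\!\Big(\frac{x - X_i}{h_i}\Big),
\]

where \(\hat f_{h,-i}\) is the leave-one-out estimate and \(\lambda\) controls the L2 regularization of the bandwidth vector.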


2020 ◽ Vol 8 (1) ◽ pp. 221-238
Author(s): Yousri Slaoui ◽ Salah Khardani

In this paper, we address the problem of estimating a regression function recursively, based on minimization of the Mean Squared Relative Error (MSRE), in a setting where outlier data are present and the response variable of the model is positive. We construct an alternative estimator of the regression function using a stochastic approximation method. The bias, variance, and Mean Integrated Squared Error (MISE) are computed explicitly, and the asymptotic normality of the proposed estimator is proved. Moreover, we compare the performance of our proposed estimator with that of two classical kernel regression estimators in a simulation study and on a real malaria dataset.
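For context, a standard fact (not specific to this paper) that motivates MSRE-based regression with a positive response: the conditional MSRE and its minimiser are

\[
\mathrm{MSRE}(r; x) = \mathbb{E}\!\left[\left(\frac{Y - r(X)}{Y}\right)^{2} \,\middle|\, X = x\right],
\qquad
r^{*}(x) = \frac{\mathbb{E}\,[\,Y^{-1} \mid X = x\,]}{\mathbb{E}\,[\,Y^{-2} \mid X = x\,]},
\]

so a recursive (stochastic-approximation) estimator of this type targets a ratio of conditional inverse moments; dividing the error by Y down-weights observations with large responses, which is what gives the criterion its resistance to outliers.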


Author(s): Malkhaz Shashiashvili

There is an enormous literature on the so-called Grenander estimator, which is simply the nonparametric maximum likelihood estimator of a nonincreasing probability density on [0, 1] (see, for instance, Grenander (1981)); unfortunately, there is no nonasymptotic (i.e. valid for an arbitrary finite sample size n) explicit upper bound for the quadratic risk of the Grenander estimator that is readily applicable in practice by statisticians. In this paper, we establish, for the first time, a simple explicit upper bound 2n^{-1/2} for this quadratic risk. It turns out to be a straightforward consequence of an inequality, valid with probability one, that bounds the integrated squared error of the Grenander estimator from above by the Kolmogorov–Smirnov statistic.
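In the notation of the abstract, with \(\hat f_n\) the Grenander estimator of the nonincreasing density f on [0, 1], \(F_n\) the empirical distribution function and F the true one, the announced bound can be written as

\[
\mathbb{E}\int_{0}^{1}\bigl(\hat f_n(x) - f(x)\bigr)^{2}\,dx \;\le\; 2\,n^{-1/2},
\]

obtained from an inequality, valid with probability one, of the form \(\int_{0}^{1}(\hat f_n - f)^2\,dx \le C\,\sup_x |F_n(x) - F(x)|\); the exact constant C in this almost-sure inequality is given in the paper.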


2019 ◽ Vol 12 (4) ◽ pp. 1612-1642
Author(s): Didier Alain Njamen Njomen ◽ Hubert Clovis Yayebga

This article builds on the works of [1], [2] and [3] on the estimation of the survival function and the hazard (risk) function in the independent and identically distributed case, with and without censoring, from which we establish the bias and variance of the circular kernel density estimator. In addition, we determine the optimal window b_n^* of this estimator, after first establishing the mean squared error (MSE) and the mean integrated squared error (MISE), which are needed to obtain the optimal window. Finally, we establish the asymptotic expression of the bias of the risk function of the circular kernel estimator.
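For orientation, in the standard (linear) second-order kernel case the MISE expansion and the resulting optimal window take the familiar form below, with \(R(g) = \int g^2\) and \(\mu_2(K) = \int u^2 K(u)\,du\); the circular-kernel version studied in the paper replaces the kernel constants accordingly.

\[
\mathrm{AMISE}(b_n) = \frac{R(K)}{n\,b_n} + \frac{b_n^{4}}{4}\,\mu_2(K)^{2} R(f''),
\qquad
b_n^{*} = \left(\frac{R(K)}{\mu_2(K)^{2} R(f'')}\right)^{1/5} n^{-1/5}.
\]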


Author(s): Israel Uzuazor SILOKO ◽ Osayomore IKPOTOKIN ◽ Edith Akpevwe SILOKO

The usual second-order nonparametric kernel estimators are widely used in data analysis and visualization but are constrained by a slow convergence rate. Higher-order kernels provide faster convergence rates and are known to be bias-reducing kernels. In this paper, we propose a hybrid fourth-order kernel, obtained by merging two successive fourth-order kernels, and study the statistical properties of these hybrid kernels. Our simulation results show that the proposed higher-order hybrid kernels outperform their parent kernel functions in terms of the asymptotic mean integrated squared error.
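For context (a standard construction, not the hybrid proposed here): a fourth-order kernel can be obtained from the Gaussian kernel \(\varphi\) as

\[
K_{[4]}(u) = \tfrac{1}{2}\,(3 - u^{2})\,\varphi(u),
\]

which integrates to one and has vanishing second moment; with fourth-order kernels the optimal AMISE rate improves from the second-order rate \(n^{-4/5}\) to \(n^{-8/9}\), at the cost of the kernel taking negative values.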


Author(s): Cao Xuan Phuong

Consider the model Y = X + Z, where Y is an observable random variable, X is an unobservable random variable with unknown density f, and Z is a random noise independent of X. The density g of Z is known exactly and assumed to be compactly supported. We are interested in estimating the m-fold convolution f_m = f * ... * f on the basis of independent and identically distributed (i.i.d.) observations Y_1, ..., Y_n drawn from the distribution of Y. Based on these observations and the ridge-parameter regularization method, we propose an estimator of the function f_m depending on two regularization parameters, one of which is given and one of which must be chosen. The proposed estimator is shown to be consistent with respect to the mean integrated squared error under some conditions on the parameters. We then derive a convergence rate of the estimator under some additional regularity assumptions on the density f.
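A hedged sketch of one ridge-parameter estimator of this type (the paper's exact form and parametrisation may differ): since the characteristic function of f_m is \(\varphi_f^m = (\varphi_Y/\varphi_Z)^m\), one may set

\[
\hat f_m(x) = \frac{1}{2\pi}\int e^{-itx}\,
\frac{\hat\varphi_Y(t)^{m}\,\overline{\varphi_Z(t)}^{\,m}}
{\max\{\,|\varphi_Z(t)|,\; r_n\,\}^{2m}}\,dt,
\qquad
\hat\varphi_Y(t) = \frac{1}{n}\sum_{j=1}^{n} e^{itY_j},
\]

where \(r_n\) is a ridge parameter that prevents division by small values of \(\varphi_Z\).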


2019 ◽ Vol 23 ◽ pp. 464-491
Author(s): Agnès Lagnoux ◽ Thi Mong Ngoc Nguyen ◽ Frédéric Proïa

We investigate in this paper a Bickel–Rosenblatt test of goodness-of-fit for the density of the noise in an autoregressive model. Since the seminal work of Bickel and Rosenblatt, it is well known that the integrated squared error of the Parzen–Rosenblatt density estimator, once correctly renormalized, is asymptotically Gaussian for independent and identically distributed (i.i.d.) sequences. We show that the result still holds when the statistic is built from the residuals of general stable and explosive autoregressive processes. In the univariate unstable case, we prove that the result holds when the unit root is located at −1, and we give further results when the unit root is located at 1. In particular, we establish that, except for some particular asymmetric kernels leading to a non-Gaussian limiting distribution and a slower convergence, the statistic has the same order of magnitude. We also study some common unstable cases, such as the integrated seasonal process. Finally, we build a goodness-of-fit Bickel–Rosenblatt test for the true density of the noise and analyse its empirical properties through a simulation study.
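The statistic in question is of the classical Bickel–Rosenblatt form (up to a possible weight function), built here from the estimated residuals \(\hat\varepsilon_t\) of the autoregression:

\[
T_n = \int \bigl(\hat f_n(x) - f(x)\bigr)^{2}\,dx,
\qquad
\hat f_n(x) = \frac{1}{n h_n}\sum_{t=1}^{n} K\!\Big(\frac{x - \hat\varepsilon_t}{h_n}\Big),
\]

and the results above state that, once suitably centred and normalised, T_n remains asymptotically Gaussian under the null in the autoregressive settings considered, with the noted exceptions for certain asymmetric kernels in the unit-root case.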


2017 ◽ Vol 12 (2) ◽ pp. 338-349
Author(s): Chunhao Cai ◽ Junyi Guo ◽ Honglong You

In this paper, we propose an estimator of the survival probability for a Lévy risk model observed at low frequency. The estimator is constructed via a regularised version of the inverse of the Laplace transform. The convergence rate of the estimator in the sense of the integrated squared error is studied for large sample sizes. Simulation studies are also given to show the finite-sample performance of our estimator.
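For orientation, in the classical compound-Poisson special case of a Lévy risk model (premium rate c, claim intensity \(\lambda\), claim-size density f_X with mean \(\mu\) and Laplace transform \(\hat f_X\)), the survival probability \(\varphi\) satisfies

\[
\int_0^\infty e^{-su}\,\varphi(u)\,du
= \frac{c\,\varphi(0)}{\,c s - \lambda\bigl(1 - \hat f_X(s)\bigr)\,},
\qquad
\varphi(0) = 1 - \frac{\lambda\mu}{c},
\]

which is why estimating \(\varphi\) from discretely observed data naturally proceeds through a regularised inversion of an estimated Laplace transform; the paper works in the more general Lévy setting.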

