A general formula for the upper tail significance levels of empirical distribution function test statistics

1988 ◽  
Vol 17 (12) ◽  
pp. 4121-4131 ◽  
Author(s):  
B.D. Spurr ◽  
C.D. Sinclair


2009 ◽  
Vol 12 (02) ◽  
pp. 157-167 ◽  
Author(s):  
MARCO CAPASSO ◽  
LUCIA ALESSI ◽  
MATTEO BARIGOZZI ◽  
GIORGIO FAGIOLO

This paper discusses some problems that may arise when the distributions of goodness-of-fit test statistics based on the empirical distribution function are approximated via Monte Carlo simulation. We argue that failing to re-estimate the unknown parameters on each simulated Monte Carlo sample, and thus not using this information to build the test statistic, may lead to wrong, overly conservative tests. Furthermore, we present some simple examples suggesting that the impact of this mistake can be dramatic and does not vanish as the sample size increases.
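The contrast can be shown with a short Monte Carlo sketch; this is not the paper's code, and the normality test, sample size, and function names are illustrative assumptions. The null distribution of a Kolmogorov-Smirnov statistic is simulated once with the parameters re-estimated on each simulated sample and once with them held fixed; the fixed-parameter critical value comes out larger, which is exactly the overly conservative behaviour described above.

# A minimal sketch, not from the paper: the normality test, sample size, and
# function names are illustrative assumptions chosen to contrast re-estimating
# the parameters on each Monte Carlo sample with keeping them fixed.
import numpy as np
from scipy import stats

def ks_normal(x, mu, sigma):
    # Kolmogorov-Smirnov distance between the sample and N(mu, sigma^2).
    return stats.kstest(x, "norm", args=(mu, sigma)).statistic

def mc_critical_value(n, n_sim=2000, alpha=0.05, reestimate=True, seed=None):
    # Upper-alpha critical value of the KS statistic under the null,
    # with or without re-estimating (mu, sigma) on each simulated sample.
    rng = np.random.default_rng(seed)
    sims = np.empty(n_sim)
    for b in range(n_sim):
        y = rng.standard_normal(n)                 # draw from the null model
        if reestimate:
            mu, sigma = y.mean(), y.std(ddof=1)    # refit on each sample (correct)
        else:
            mu, sigma = 0.0, 1.0                   # keep parameters fixed (the mistake)
        sims[b] = ks_normal(y, mu, sigma)
    return np.quantile(sims, 1 - alpha)

# The fixed-parameter critical value is markedly larger, so a test built on it
# rejects too rarely, i.e. it is overly conservative, and this does not go away
# as n grows.
print(mc_critical_value(100, reestimate=True, seed=0))
print(mc_critical_value(100, reestimate=False, seed=0))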


2021 ◽  
Vol 2068 (1) ◽  
pp. 012003
Author(s):  
Ayari Samia ◽  
Mohamed Boutahar

Abstract The purpose of this paper is to estimate the dependence function of multivariate extreme-value copulas. Different nonparametric estimators have been developed in the literature under the assumption that the marginal distributions are known. However, this assumption is unrealistic in practice. To overcome the drawbacks of these estimators, we substitute the empirical distribution function for the extreme-value marginal distributions. Monte Carlo experiments are carried out to compare the performance of the Pickands, Deheuvels, Hall-Tajvidi, Zhang and Gudendorf-Segers estimators. The empirical results show that using the empirical distribution function improves the estimators' performance for different sample sizes.
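As a rough illustration of the rank-based approach described above (not the paper's code; the function name, evaluation grid, and clipping step are assumptions), the following sketch computes the Pickands estimator of a bivariate extreme-value dependence function A(t) after replacing the unknown margins with the empirical distribution function through rescaled ranks.

# A minimal sketch, not the paper's code: the function name, evaluation grid,
# and clipping step are assumptions. It computes the rank-based Pickands
# estimator of a bivariate extreme-value dependence function A(t).
import numpy as np
from scipy.stats import rankdata

def pickands_estimator(x, y, t):
    # Pickands estimator of A(t) for t strictly inside (0, 1), with the unknown
    # margins replaced by the empirical distribution function via rescaled ranks.
    n = len(x)
    u = rankdata(x) / (n + 1)          # empirical (pseudo-)margins in (0, 1)
    v = rankdata(y) / (n + 1)
    xi, eta = -np.log(u), -np.log(v)   # approximately unit-exponential variables
    t = np.atleast_1d(t).astype(float)
    a_hat = np.array([n / np.sum(np.minimum(xi / (1 - tj), eta / tj)) for tj in t])
    # A valid dependence function satisfies max(t, 1 - t) <= A(t) <= 1;
    # the raw estimator need not, so it is clipped here for simplicity.
    return np.clip(a_hat, np.maximum(t, 1 - t), 1.0)

# Example on independent data, for which the true A(t) is identically 1.
rng = np.random.default_rng(0)
x, y = rng.standard_normal(500), rng.standard_normal(500)
print(pickands_estimator(x, y, np.linspace(0.05, 0.95, 19)))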


Author(s):  
M. D. Edge

Nonparametric and semiparametric statistical methods assume models whose properties cannot be described by a finite number of parameters. For example, a linear regression model that assumes that the disturbances are independent draws from an unknown distribution is semiparametric—it includes the intercept and slope as regression parameters but has a nonparametric part, the unknown distribution of the disturbances. Nonparametric and semiparametric methods focus on the empirical distribution function, which, assuming that the data are really independent observations from the same distribution, is a consistent estimator of the true cumulative distribution function. In this chapter, with plug-in estimation and the method of moments, functionals or parameters are estimated by treating the empirical distribution function as if it were the true cumulative distribution function. Such estimators are consistent. To understand the variation of point estimates, bootstrapping is used to resample from the empirical distribution function. For hypothesis testing, one can either use a bootstrap-based confidence interval or conduct a permutation test, which can be designed to test null hypotheses of independence or exchangeability. Resampling methods—including bootstrapping and permutation testing—are flexible and easy to implement with a little programming expertise.
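The resampling ideas summarized above can be sketched in a few lines; this is not from the chapter, and the data, statistic, and replication counts are illustrative. The sketch shows a plug-in estimate of a difference in means, a percentile bootstrap confidence interval obtained by resampling from each empirical distribution function, and a permutation test of the null that the two samples are exchangeable.

# A minimal sketch, not from the chapter: the data, statistic, and replication
# counts are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
x = rng.exponential(scale=2.0, size=60)   # first sample
y = rng.exponential(scale=2.5, size=60)   # second sample

# Plug-in estimation: treat each EDF as the true CDF, so the functional
# "difference in means" is estimated by the difference in sample means.
theta_hat = x.mean() - y.mean()

# Bootstrap: resample with replacement from each EDF to gauge sampling variation.
boot = np.array([
    rng.choice(x, size=x.size, replace=True).mean()
    - rng.choice(y, size=y.size, replace=True).mean()
    for _ in range(2000)
])
ci = np.quantile(boot, [0.025, 0.975])    # percentile bootstrap 95% interval

# Permutation test: pool the data, reshuffle the group labels, and recompute
# the statistic to build its null distribution under exchangeability.
pooled = np.concatenate([x, y])
perm = np.empty(2000)
for b in range(2000):
    shuffled = rng.permutation(pooled)
    perm[b] = shuffled[:x.size].mean() - shuffled[x.size:].mean()
p_value = np.mean(np.abs(perm) >= np.abs(theta_hat))

print(theta_hat, ci, p_value)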

