Monitoring Persistence Change in Heavy-Tailed Observations

Symmetry ◽  
2021 ◽  
Vol 13 (6) ◽  
pp. 936
Author(s):  
Dan Wang

In this paper, a ratio test based on bootstrap approximation is proposed to detect persistence change in heavy-tailed observations. The paper focuses on the symmetric testing problems of I(1)-to-I(0) and I(0)-to-I(1) change. On the basis of a residual CUSUM process, the test statistic is constructed in ratio form. I derive the null distribution of the test statistic and discuss the consistency of the test under the alternative hypothesis. Because the null distribution contains an unknown tail index, I present a bootstrap approximation method for determining the rejection region of the test. Simulation studies on artificial data assess the finite-sample performance and show that the proposed method outperforms the kernel method in all listed cases. An analysis of real data likewise demonstrates the strong performance of the method.
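To make the construction concrete, here is a minimal sketch of a CUSUM-based ratio statistic calibrated by bootstrap. The demeaned-observation CUSUM, the trimming fraction, and the resampling of first differences are illustrative assumptions of this sketch, not the paper's exact residual-based procedure.

```python
# Minimal sketch of a bootstrap-calibrated ratio test for a persistence
# change at an unknown break point (simplified stand-in construction).
import numpy as np

def cusum(x):
    """Cumulative sums of the demeaned series."""
    return np.cumsum(x - x.mean())

def ratio_stat(x, trim=0.2):
    """Max over candidate break points of the ratio of mean squared
    CUSUMs after vs. before the break (large values suggest I(0)-to-I(1))."""
    n = len(x)
    stats = []
    for k in range(int(trim * n), int((1 - trim) * n)):
        pre, post = cusum(x[:k]), cusum(x[k:])
        stats.append(np.mean(post ** 2) / np.mean(pre ** 2))
    return max(stats)

def bootstrap_pvalue(x, B=499, seed=0):
    """Approximate the rejection region by resampling first differences;
    an illustrative scheme, not the paper's tail-index-aware bootstrap."""
    rng = np.random.default_rng(seed)
    obs = ratio_stat(x)
    dx = np.diff(x)
    hits = 0
    for _ in range(B):
        xb = np.cumsum(rng.choice(dx, size=len(dx), replace=True))
        hits += ratio_stat(xb) >= obs
    return (1 + hits) / (1 + B)
```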

2020 ◽  
pp. 1-45
Author(s):  
Feng Yao ◽  
Taining Wang

We propose a nonparametric test for the significance of variables in the partial derivative of a regression mean function. The derivative is estimated by local polynomial regression, and the test statistic is constructed from a variation-based measure of the derivative in the direction of the variables of interest. We establish the asymptotic null distribution of the test statistic and show that the test is consistent. Motivated by the null distribution, we propose a wild bootstrap test and show that its statistic has the same limiting null distribution whether or not the null hypothesis holds. A Monte Carlo study demonstrates encouraging finite-sample performance, and an empirical application shows how the test can be used to infer aspects of the regression structure in a hedonic price model.
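The following sketch illustrates the idea for a single covariate: a local-linear slope estimate stands in for the local-polynomial derivative, the mean squared slope over a grid stands in for the variation measure, and Rademacher weights drive the wild bootstrap. All of these choices are assumptions of this sketch.

```python
# Illustrative wild-bootstrap test that the regression derivative is zero
# everywhere, i.e., the covariate is insignificant (simplified stand-in).
import numpy as np

def local_linear_slope(x, y, grid, h):
    """Local-linear slope (derivative estimate) at each grid point."""
    slopes = []
    for g in grid:
        w = np.exp(-0.5 * ((x - g) / h) ** 2)      # Gaussian kernel weights
        sw = np.sqrt(w)
        X = np.column_stack([np.ones_like(x), x - g])
        beta = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
        slopes.append(beta[1])                     # slope = derivative at g
    return np.array(slopes)

def derivative_test(x, y, h=0.3, B=499, seed=0):
    rng = np.random.default_rng(seed)
    grid = np.linspace(np.quantile(x, 0.1), np.quantile(x, 0.9), 25)
    T = np.mean(local_linear_slope(x, y, grid, h) ** 2)  # variation measure
    resid = y - y.mean()          # residuals under H0: flat mean function
    Tb = []
    for _ in range(B):
        v = rng.choice([-1.0, 1.0], size=len(y))   # Rademacher weights
        yb = y.mean() + resid * v                  # wild bootstrap sample
        Tb.append(np.mean(local_linear_slope(x, yb, grid, h) ** 2))
    return T, (1 + np.sum(np.array(Tb) >= T)) / (1 + B)  # statistic, p-value
```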


2020 ◽  
Vol 117 (29) ◽  
pp. 16880-16890 ◽  
Author(s):  
Larry Wasserman ◽  
Aaditya Ramdas ◽  
Sivaraman Balakrishnan

We propose a general method for constructing confidence sets and hypothesis tests that have finite-sample guarantees without regularity conditions. We refer to such procedures as “universal.” The method is very simple and is based on a modified version of the usual likelihood-ratio statistic that we call “the split likelihood-ratio test” (split LRT) statistic. The (limiting) null distribution of the classical likelihood-ratio statistic is often intractable when used to test composite null hypotheses in irregular statistical models. Our method is especially appealing for statistical inference in these complex setups. The method we suggest works for any parametric model and also for some nonparametric models, as long as computing a maximum-likelihood estimator (MLE) is feasible under the null. Canonical examples arise in mixture modeling and shape-constrained inference, for which constructing tests and confidence sets has been notoriously difficult. We also develop various extensions of our basic methods. We show that in settings where computing the MLE is hard, for the purpose of constructing valid tests and intervals, it is sufficient to upper bound the maximum likelihood. We investigate some conditions under which our methods yield valid inferences under model misspecification. Further, the split LRT can be used with profile likelihoods to deal with nuisance parameters, and it can also be run sequentially to yield anytime-valid P values and confidence sequences. Finally, when combined with the method of sieves, it can be used to perform model selection with nested model classes.
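The split LRT recipe is short enough to sketch in a few lines: split the data, compute the unrestricted MLE on one half, evaluate the likelihood ratio on the other half, and reject when it exceeds 1/α. The Gaussian-mean toy model, known variance, 50/50 split, and level below are illustrative assumptions, not the paper's running example.

```python
# Sketch of the split likelihood-ratio test for H0: mu = 0 in a Gaussian
# model with unit variance; validity is finite-sample, via Markov's
# inequality applied to the held-out likelihood ratio.
import numpy as np
from scipy.stats import norm

def split_lrt(x, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))
    d0, d1 = x[idx[: len(x) // 2]], x[idx[len(x) // 2 :]]
    mu1 = d1.mean()                      # unrestricted MLE from half D1
    mu0 = 0.0                            # null-restricted MLE (H0 fixes mu)
    # Likelihood ratio evaluated on the held-out half D0
    log_T = norm.logpdf(d0, mu1).sum() - norm.logpdf(d0, mu0).sum()
    return log_T, log_T > np.log(1 / alpha)   # reject iff T >= 1/alpha

x = np.random.default_rng(1).normal(0.3, 1.0, 200)
print(split_lrt(x))
```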


2019 ◽  
Vol 7 (1) ◽  
pp. 394-417
Author(s):  
Aboubacrène Ag Ahmad ◽  
El Hadji Deme ◽  
Aliou Diop ◽  
Stéphane Girard

We introduce a location-scale model for conditional heavy-tailed distributions when the covariate is deterministic. First, nonparametric estimators of the location and scale functions are introduced. Second, an estimator of the conditional extreme-value index is derived. The asymptotic properties of the estimators are established under mild assumptions, and their finite-sample properties are illustrated on both simulated and real data.
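A two-step estimator in this spirit can be sketched as follows: smooth out the location and scale, then apply a tail-index estimator to the standardized residuals. The Nadaraya-Watson smoother, the absolute-residual scale proxy, and the Hill estimator with k upper order statistics are assumptions of this sketch, not necessarily the authors' estimators.

```python
# Illustrative two-step estimator: kernel-smoothed location/scale,
# then a Hill estimate of the extreme-value index on residuals.
import numpy as np

def nw(t, x, y, h):
    """Nadaraya-Watson estimate of E[y | x = t]."""
    w = np.exp(-0.5 * ((x - t) / h) ** 2)
    return np.sum(w * y) / np.sum(w)

def conditional_tail_index(x, y, h=0.1, k=50):
    loc = np.array([nw(t, x, y, h) for t in x])                   # a(x)
    scale = np.array([nw(t, x, np.abs(y - loc), h) for t in x])   # b(x)
    z = (y - loc) / scale                       # standardized residuals
    z = np.sort(z[z > 0])[::-1]                 # positive part, decreasing
    # Hill estimator from the k largest residuals (assumes len(z) > k)
    return np.mean(np.log(z[:k]) - np.log(z[k]))
```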


2018 ◽  
Vol 28 (9) ◽  
pp. 2868-2875
Author(s):  
Zhongxue Chen ◽  
Qingzhong Liu ◽  
Kai Wang

Several gene- or set-based association tests have been proposed in the literature recently, yet powerful statistical approaches remain highly desirable in this area. In this paper we propose a novel statistical association test that uses information from both the burden component of the genotypes and its complement. The new test statistic has a simple null distribution, a special and simplified variance-gamma distribution, so its p-value can be calculated easily. Through a comprehensive simulation study, we show that the new test controls the type I error rate and has superior detection power compared with some popular existing methods. We also apply the new approach to a real data set; the results demonstrate that the test is promising.
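The burden-plus-complement idea can be sketched schematically: aggregate per-variant score contributions in one direction (the burden) and measure the heterogeneity around it (the complement). The equal-weight combination and the permutation calibration below are stand-ins; the paper instead derives an analytic p-value from its simplified variance-gamma null, which this sketch does not reproduce.

```python
# Schematic burden-plus-complement association test calibrated by
# permutation (illustrative stand-in for the paper's analytic test).
import numpy as np

def combined_stat(G, y):
    """G: n x m genotype matrix, y: phenotype vector."""
    U = G.T @ (y - y.mean())                   # per-variant score contributions
    burden = U.sum() ** 2                      # variants acting in one direction
    complement = ((U - U.mean()) ** 2).sum()   # heterogeneity the burden misses
    return burden + complement                 # equal weights (an assumption)

def perm_pvalue(G, y, B=999, seed=0):
    rng = np.random.default_rng(seed)
    obs = combined_stat(G, y)
    hits = sum(combined_stat(G, rng.permutation(y)) >= obs for _ in range(B))
    return (1 + hits) / (1 + B)
```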


2014 ◽  
Vol 2014 ◽  
pp. 1-13
Author(s):  
Junhua Zhang ◽  
Ruiqin Tian ◽  
Suigen Yang ◽  
Sanying Feng

For marginal longitudinal generalized linear models (GLMs), we develop the empirical Cressie-Read (ECR) test statistic approach that was originally proposed for the independent and identically distributed (i.i.d.) case. The ECR test statistic includes empirical likelihood as a special case. By adopting this ECR test statistic approach and taking the within-subject correlation into account, we establish efficiency results for estimation and testing based on ECR under some regularity conditions. Although a working correlation matrix is assumed, the quadratic inference function (QIF) approach avoids estimating the nuisance parameters in it. As a result, the proposed ECR test statistic has a standard asymptotic χ² limit under the null hypothesis. We show that the proposed method is more efficient even when the working correlation matrix is misspecified. We also evaluate the finite-sample performance of the proposed methods via simulation studies and a real data analysis.
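For reference, the Cressie-Read power-divergence family underlying the ECR statistic can be written, in one common parameterization, as below; empirical likelihood arises as the λ → 0 limit. The constrained optimization over the weights p and the longitudinal moment conditions from the paper are not reproduced here.

```latex
% Cressie-Read power-divergence family (one common parameterization);
% empirical likelihood is the lambda -> 0 limit.
\[
  \mathrm{CR}_\lambda(p) \;=\; \frac{2}{\lambda(\lambda+1)}
  \sum_{i=1}^{n}\Bigl[(n p_i)^{-\lambda} - 1\Bigr],
  \qquad
  \lim_{\lambda \to 0}\mathrm{CR}_\lambda(p)
  \;=\; -2\sum_{i=1}^{n}\log(n p_i).
\]
```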


2011 ◽  
Vol 480-481 ◽  
pp. 775-780
Author(s):  
Ting Jun Li

Robust detection in the presence of a partly unknown useful signal or interference is a widespread task in many signal processing applications. In this paper, we consider the robustness of a matched subspace detector in additive white Gaussian noise under the condition that the noise power is known under the null hypothesis but unknown under the alternative hypothesis, where the useful signal triggers a variation of the noise power; we also consider mismatch between the signal subspace and the receiver matched filter. The test statistic for this detection problem is derived from the generalized likelihood ratio test, and the distribution of the test statistic is analyzed. Computer simulations validate the performance analysis and the robustness of the algorithm at low SNR in comparison with other matched subspace detectors.
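As a baseline for the construction, here is the classical matched subspace energy detector with the noise power known under the null: project the observation onto the signal subspace and normalize the captured energy. The paper's variant, with unknown noise power under the alternative and subspace mismatch, is more involved; this sketch shows only the common core.

```python
# Classical matched subspace detector: energy of the observation in the
# signal subspace, normalized by the known null-hypothesis noise power.
import numpy as np

def matched_subspace_stat(y, H, sigma2):
    """y: observation vector, H: n x p signal subspace basis,
    sigma2: noise power known under H0."""
    Q, _ = np.linalg.qr(H)            # orthonormal basis of the subspace
    proj = Q @ (Q.T @ y)              # projection of y onto the subspace
    return proj @ proj / sigma2       # ~ chi2(p) under H0 (white Gaussian noise)

rng = np.random.default_rng(0)
n, p, sigma2 = 64, 4, 1.0
H = rng.normal(size=(n, p))
y = rng.normal(scale=np.sqrt(sigma2), size=n)   # H0: noise only
print(matched_subspace_stat(y, H, sigma2))
```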


Mathematics ◽  
2021 ◽  
Vol 9 (2) ◽  
pp. 132
Author(s):  
Feng Li ◽  
Yajie Li ◽  
Sanying Feng

The varying coefficient (VC) model is a generalization of the ordinary linear model that retains strong interpretability while offering the flexibility of a nonparametric model. In this paper, we investigate a VC model with hierarchical structure. We propose a unified variable selection method for the VC model that simultaneously selects the nonzero effects and estimates the unknown coefficient functions. The selected model enforces the hierarchical structure: interaction terms can enter the model only if the corresponding main effects are already in it. The kernel method is employed to estimate the varying coefficient functions, and a combined overlapped group Lasso regularization is introduced to carry out variable selection while preserving the hierarchical structure. We prove that the proposed penalized estimators have oracle properties; that is, the coefficients are estimated as well as if the true model were known in advance. Simulation studies and a real data analysis examine the finite-sample performance of the proposed method in the finite sample case.
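The kernel-estimation step can be sketched in a few lines: at each point of the index variable, fit a weighted least squares regression with kernel weights centered there. The local-constant fit and Gaussian kernel are assumptions of this sketch; the paper's hierarchical overlapped group-Lasso selection step is omitted.

```python
# Minimal kernel estimator of the coefficient functions in a varying
# coefficient model y = x' beta(u) + e (selection step not shown).
import numpy as np

def vc_fit(u, X, y, grid, h=0.1):
    """Estimate beta(.) at each point of `grid` by kernel-weighted OLS.
    u: index variable (n,), X: design matrix (n, p), y: response (n,)."""
    betas = []
    for g in grid:
        w = np.exp(-0.5 * ((u - g) / h) ** 2)    # kernel weights in u
        sw = np.sqrt(w)
        beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
        betas.append(beta)
    return np.array(betas)                       # len(grid) x p matrix
```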


Author(s):  
Russell Cheng

This chapter discusses models like the exponential regression model y = a[1 − exp(−bx)], where if a = 0 then b is an indeterminate, non-identifiable parameter, as it vanishes from the model. The hypothesis test of H0 : a = 0 versus H1 : a ≠ 0 is then non-standard. The well-known Davies test is explained. It uses a portmanteau test statistic T that is a functional of Sn(b), L < b < U, where Sn(b) is a regular test statistic for the null hypothesis a = 0 against the alternative a ≠ 0 with b fixed. The null distribution of T is not usually easy to obtain. One can instead simply test whether a = 0 using a goodness-of-fit (GoF) test or a lack-of-fit test with the alternative hypothesis left unspecified. In the exponential regression example, this means testing whether the observations are solely pure error. This elementary approach is compared with the Davies approach.
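In this example the Davies construction is easy to sketch: for each fixed b the model is linear in a, so Sn(b) is an ordinary t-statistic, and T takes the supremum over a grid in (L, U). The Monte Carlo calibration under pure error below is an illustrative choice, since the null distribution of T itself is hard to obtain.

```python
# Sketch of a Davies-type portmanteau statistic for H0: a = 0 in
# y = a(1 - exp(-b x)) + e, with b unidentified under the null.
import numpy as np

def davies_stat(x, y, b_grid):
    stats = []
    for b in b_grid:
        z = 1 - np.exp(-b * x)              # regressor with b held fixed
        a_hat = z @ y / (z @ z)             # OLS estimate of a (no intercept)
        resid = y - a_hat * z
        se = np.sqrt(resid @ resid / (len(y) - 1) / (z @ z))
        stats.append(abs(a_hat) / se)       # |t|-statistic S_n(b)
    return max(stats)                       # portmanteau statistic T

def null_pvalue(x, y, b_grid, B=999, seed=0):
    """Calibrate T by simulating pure error (Gaussian, matched scale)."""
    rng = np.random.default_rng(seed)
    obs = davies_stat(x, y, b_grid)
    sd = y.std(ddof=1)
    hits = sum(davies_stat(x, rng.normal(0, sd, len(y)), b_grid) >= obs
               for _ in range(B))
    return (1 + hits) / (1 + B)
```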


2015 ◽  
Vol 4 (1) ◽  
pp. 1-28 ◽  
Author(s):  
Jinyong Hahn ◽  
Geert Ridder

We propose a new approach to statistical inference on parameters that depend on population parameters in a non-standard way. As examples, we consider a parameter that is interval identified and a parameter that is the maximum (or minimum) of population parameters. In both examples we transform the inference problem into a test of a composite null against a composite alternative hypothesis involving point-identified population parameters, and we use standard tools in this testing problem. This setup substantially simplifies the conceptual basis of the inference problem. By inverting the likelihood ratio test statistic for the composite null and composite alternative problem, we obtain a closed-form expression for a confidence interval that requires no tuning parameter and is uniformly valid. We use our method to derive a confidence interval for a regression coefficient in a multiple linear regression with an interval-censored dependent variable.
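The test-inversion logic can be sketched for the interval-identified case: the LR statistic for H0: θ = t is the squared standardized distance from the endpoint estimates to the null set {l ≤ t ≤ u}, and the confidence interval collects all t that are not rejected. The independent-normal endpoint estimates, the 1-degree-of-freedom cutoff, and the grid inversion (rather than the paper's closed form) are assumptions of this sketch.

```python
# Illustrative LR-test inversion for an interval-identified parameter.
import numpy as np
from scipy.stats import chi2

def lr_stat(t, l_hat, u_hat, se_l, se_u):
    """Distance from endpoint estimates to the null set {l <= t <= u}."""
    stat = 0.0
    if l_hat > t:
        stat += ((l_hat - t) / se_l) ** 2
    if u_hat < t:
        stat += ((t - u_hat) / se_u) ** 2
    return stat

def confidence_interval(l_hat, u_hat, se_l, se_u, alpha=0.05):
    """All t not rejected at level alpha (grid inversion of the LR test)."""
    c = chi2.ppf(1 - alpha, df=1)          # cutoff choice is an assumption
    grid = np.linspace(l_hat - 5 * se_l, u_hat + 5 * se_u, 2001)
    keep = [t for t in grid if lr_stat(t, l_hat, u_hat, se_l, se_u) <= c]
    return min(keep), max(keep)

print(confidence_interval(l_hat=0.2, u_hat=0.8, se_l=0.05, se_u=0.05))
```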


2015 ◽  
Vol 13 (1) ◽  
pp. 40
Author(s):  
Fernanda Maria Muller ◽  
Fábio Mariano Bayer

The Beta-Skew-t-EGARCH model was recently proposed in the literature to model the volatility of financial returns. Inference on the model parameters is based on the maximum likelihood method. These estimators have good asymptotic properties, but their performance can be poor in finite samples. To evaluate the finite-sample performance of the point estimators and of the likelihood ratio test proposed for the presence of two volatility components, we present a Monte Carlo simulation study. Numerical results indicate that the maximum likelihood estimators of some model parameters are considerably biased in sample sizes smaller than 3000. The evaluation of the proposed two-component test, in terms of size and power, shows its practical usefulness in sample sizes greater than 3000. We conclude with an application of the proposed two-component test and the Beta-Skew-t-EGARCH model to real data.
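The Monte Carlo design itself is generic and easy to sketch: simulate from the model at the true parameters, re-fit by maximum likelihood, and summarize bias across replications and sample sizes. An AR(1) model stands in below for Beta-Skew-t-EGARCH, whose simulation and fitting routines are not reproduced here; only the study skeleton is illustrated.

```python
# Skeleton of a Monte Carlo study of finite-sample ML bias, with a simple
# AR(1) model standing in for the Beta-Skew-t-EGARCH model.
import numpy as np

def simulate_ar1(n, phi, rng):
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal()
    return x

def fit_ar1(x):
    """Conditional MLE of the AR(1) coefficient."""
    return x[1:] @ x[:-1] / (x[:-1] @ x[:-1])

def mc_bias(n, phi=0.9, reps=500, seed=0):
    rng = np.random.default_rng(seed)
    est = [fit_ar1(simulate_ar1(n, phi, rng)) for _ in range(reps)]
    return np.mean(est) - phi              # Monte Carlo bias estimate

for n in (250, 1000, 3000):
    print(n, mc_bias(n))                   # bias shrinks as n grows
```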

