Non-Standard Tests through a Composite Null and Alternative in Point-Identified Parameters

2015
Vol 4 (1)
pp. 1-28
Author(s):  
Jinyong Hahn ◽  
Geert Ridder

We propose a new approach to statistical inference on parameters that depend on population parameters in a non-standard way. As examples we consider a parameter that is interval identified and a parameter that is the maximum (or minimum) of population parameters. In both examples we transform the inference problem into a test of a composite null against a composite alternative hypothesis involving point identified population parameters. We use standard tools in this testing problem. This setup substantially simplifies the conceptual basis of the inference problem. By inverting the Likelihood Ratio test statistic for the composite null and composite alternative inference problem, we obtain a closed form expression for the confidence interval that does not require any tuning parameter and is uniformly valid. We use our method to derive a confidence interval for a regression coefficient in a multiple linear regression with an interval censored dependent variable.
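As a rough illustration of the test-inversion idea, the sketch below computes a confidence interval for an interval-identified parameter by inverting a likelihood-ratio-type statistic, assuming independent, asymptotically normal estimators of the lower and upper bounds and a simple chi-square(1) cut-off. It is not the closed-form, uniformly valid interval derived in the paper; the function name and the numbers in the usage example are hypothetical.

```python
import numpy as np
from scipy.stats import chi2

def lr_confidence_interval(theta_L_hat, theta_U_hat, se_L, se_U, alpha=0.05):
    """Confidence interval for an interval-identified parameter obtained by
    inverting a likelihood-ratio-type statistic under a Gaussian approximation.

    For a candidate value c, the quasi-LR statistic is the squared distance
    from (theta_L_hat, theta_U_hat) to the composite null {l <= c <= u},
    measured in standard-error units; it is zero when c lies inside the
    estimated interval and grows quadratically outside it.
    """
    crit = chi2.ppf(1 - alpha, df=1)   # simple chi-square(1) cut-off (an assumption)
    z = np.sqrt(crit)
    # Inverting LR(c) <= crit simply widens the estimated bounds by z standard errors
    lower = theta_L_hat - z * se_L
    upper = theta_U_hat + z * se_U
    return lower, upper

# Toy usage: bounds estimated from interval-censored data (hypothetical numbers)
lo, hi = lr_confidence_interval(theta_L_hat=0.8, theta_U_hat=1.4, se_L=0.1, se_U=0.15)
print(f"95% CI for the interval-identified parameter: [{lo:.3f}, {hi:.3f}]")
```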

Symmetry
2021
Vol 13 (6)
pp. 936
Author(s):  
Dan Wang

In this paper, a ratio test based on a bootstrap approximation is proposed to detect persistence change in heavy-tailed observations. The paper focuses on the symmetric testing problems of I(1)-to-I(0) and I(0)-to-I(1) changes. On the basis of residual CUSUMs, the test statistic is constructed in ratio form. I derive the null distribution of the test statistic and discuss its consistency under the alternative hypothesis. However, the null distribution contains an unknown tail index. To address this challenge, I present a bootstrap approximation method for determining the rejection region of the test. Simulation studies on artificial data assess the finite-sample performance and show that the proposed method outperforms the kernel method in all listed cases. An analysis of real data also demonstrates the excellent performance of the method.
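To make the construction concrete, here is a minimal sketch of a sup-type ratio statistic built from CUSUMs of demeaned observations before and after each candidate break point, with a naive bootstrap used to approximate the rejection region. The exact normalisation, residual construction, and bootstrap scheme in the paper may differ; the simulated data, trimming fraction, and function names are illustrative assumptions.

```python
import numpy as np

def ratio_statistic(y, trim=0.2):
    """Sup-type ratio statistic for a change in persistence, built from
    cumulative sums of demeaned observations before and after each candidate
    break point (a generic form; the paper's statistic may be normalised
    differently)."""
    n = len(y)
    stats = []
    for k in range(int(trim * n), int((1 - trim) * n)):
        pre, post = y[:k], y[k:]
        c_pre = np.cumsum(pre - pre.mean())     # CUSUM of the pre-break subsample
        c_post = np.cumsum(post - post.mean())  # CUSUM of the post-break subsample
        num = np.sum(c_post ** 2) / len(post) ** 2
        den = np.sum(c_pre ** 2) / len(pre) ** 2
        stats.append(num / den)
    return max(stats)

def bootstrap_critical_value(y, alpha=0.05, n_boot=499, rng=None):
    """Approximate the rejection region by resampling the observations with
    replacement, which sidesteps the unknown tail index in the null law."""
    rng = np.random.default_rng(rng)
    boot = [ratio_statistic(rng.choice(y, size=len(y), replace=True))
            for _ in range(n_boot)]
    return np.quantile(boot, 1 - alpha)

# Toy usage on simulated heavy-tailed noise (Student-t with 1.5 degrees of freedom)
rng = np.random.default_rng(0)
y = rng.standard_t(df=1.5, size=300)
stat = ratio_statistic(y)
crit = bootstrap_critical_value(y, rng=1)
print(f"ratio statistic = {stat:.3f}, bootstrap 5% critical value = {crit:.3f}")
```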


2011
Vol 480-481
pp. 775-780
Author(s):  
Ting Jun Li

Robust detection in the presence of a partly unknown useful signal or interference is a widespread task in many signal processing applications. In this paper, we consider the robustness of a matched subspace detector in additive white Gaussian noise, under the condition that the noise power is known under the null hypothesis but unknown under the alternative hypothesis, where the useful signal triggers a variation of the noise power; we also consider the mismatch between the signal subspace and the receiver matched filter. The test statistic for this detection problem is derived from the generalized likelihood ratio test, and the distribution of the test statistic is analyzed. Computer simulations are used to validate the performance analysis and the robustness of the algorithm at low SNR in comparison with other matched subspace detectors.
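For orientation, the sketch below implements the classical matched subspace GLRT with known noise power (a textbook form, not necessarily the exact statistic derived in the paper): the observation is projected onto the known signal subspace and the projected energy is normalised by the null-hypothesis noise power, so that under the null the statistic is chi-square with rank(H) degrees of freedom. The subspace, signal coordinates, and noise level in the usage example are assumptions.

```python
import numpy as np

def matched_subspace_glrt(y, H, sigma2_0):
    """Classical matched-subspace GLRT statistic: the energy of the observation
    projected onto the signal subspace spanned by the columns of H, normalised
    by the noise power assumed known under the null hypothesis."""
    Q, _ = np.linalg.qr(H)          # orthonormal basis of the column space of H
    P_H = Q @ Q.T                   # orthogonal projector onto the signal subspace
    return float(y @ P_H @ y) / sigma2_0

# Toy usage: a 2-dimensional signal subspace in N = 64 samples (hypothetical setup)
rng = np.random.default_rng(0)
N, sigma2_0 = 64, 1.0
H = rng.standard_normal((N, 2))     # known signal subspace basis
theta = np.array([0.8, -0.5])       # unknown signal coordinates
y = H @ theta + np.sqrt(sigma2_0) * rng.standard_normal(N)
T = matched_subspace_glrt(y, H, sigma2_0)
# Under the null (no signal), T is chi-square with rank(H) degrees of freedom,
# so a detection threshold can be read from that distribution.
print(f"GLRT statistic T = {T:.2f}")
```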


2011
Vol 24 (19)
pp. 5094-5107
Author(s):  
Timothy DelSole ◽  
Xiaosong Yang

Regression patterns are often used to diagnose the relation between a field and a climate index, but a significance test for the pattern “as a whole” that accounts for the multiplicity and interdependence of the tests has not been widely available. This paper argues that field significance can be framed as a test of the hypothesis that all regression coefficients vanish in a suitable multivariate regression model. A test for this hypothesis can be derived from the generalized likelihood ratio test. The resulting statistic depends on relevant covariance matrices and accounts for the multiplicity and interdependence of the tests. It also depends only on the canonical correlations between the predictors and predictands, thereby revealing a fundamental connection to canonical correlation analysis. Remarkably, the test statistic is invariant to a reversal of the predictors and predictands, allowing the field significance test to be reduced to a standard univariate hypothesis test. In practice, the test cannot be applied when the number of coefficients exceeds the sample size, reflecting the fact that testing more hypotheses than there are data points is ill conceived. To formulate a proper significance test, the data are represented by a small number of principal components, with the number chosen based on cross-validation experiments. However, instead of selecting the model that minimizes the cross-validated mean square error, a confidence interval for the cross-validated error is estimated and the most parsimonious model whose error is within the confidence interval of the minimum error is chosen. This procedure avoids selecting complex models whose error is close to that of much simpler models. The procedure is applied to diagnose long-term trends in annual average sea surface temperature and boreal winter 300-hPa zonal wind. In both cases a statistically significant 50-yr trend pattern is extracted. The resulting spatial filter can be used to monitor the evolution of the regression pattern without temporal filtering.
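The link between the generalized likelihood ratio statistic and the canonical correlations can be sketched as follows: the statistic depends on the data only through Wilks' lambda, the product of one minus the squared canonical correlations, and Bartlett's chi-square approximation then gives an approximate p-value. The sketch below covers that step only; it omits the paper's principal-component truncation and cross-validated model selection, and the simulated predictor and field are assumptions.

```python
import numpy as np
from scipy.stats import chi2

def field_significance_test(X, Y):
    """Test that all coefficients vanish in the multivariate regression of Y on X.
    The GLR statistic depends on the data only through the canonical correlations
    between X and Y (Wilks' lambda); Bartlett's chi-square approximation gives an
    approximate p-value.  A textbook sketch, not the paper's full procedure."""
    n, p = X.shape
    q = Y.shape[1]
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    # Canonical correlations are the singular values of the product of
    # orthonormal bases for the centered predictor and predictand spaces
    Qx, _ = np.linalg.qr(Xc)
    Qy, _ = np.linalg.qr(Yc)
    rho = np.linalg.svd(Qx.T @ Qy, compute_uv=False)[:min(p, q)]
    wilks = np.prod(1.0 - rho ** 2)
    stat = -(n - (p + q + 3) / 2.0) * np.log(wilks)   # Bartlett approximation
    pval = chi2.sf(stat, df=p * q)
    return rho, stat, pval

# Toy usage: a weak common signal in a 5-variable field driven by one index
rng = np.random.default_rng(0)
n = 100
X = rng.standard_normal((n, 1))                          # e.g. a climate index
Y = 0.3 * X @ rng.standard_normal((1, 5)) + rng.standard_normal((n, 5))
rho, stat, pval = field_significance_test(X, Y)
print(f"canonical correlations = {np.round(rho, 3)}, chi2 = {stat:.2f}, p = {pval:.4f}")
```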


1995
Vol 03 (01)
pp. 13-25
Author(s):  
MARGARET GELDER EHM ◽  
MAREK KIMMEL ◽  
ROBERT W. COTTINGHAM

The occurrence of laboratory typing error in pedigree data collected for use in linkage analysis cannot be ignored. In maps where recombinations between nearby markers rarely occur, each erroneous recombination (the result of a typing error) is given substantial weight, thereby increasing the estimate of θ, the recombination fraction. As the maps being developed become more dense, θ approaches the error rate and most of the observed crossovers will be erroneous. We present a method for detecting errors in pedigree data. The index is a variant of the likelihood ratio test statistic and is used to test the null hypothesis of no error for an individual at a locus against the alternative hypothesis of error. High values of the index correspond to unlikely genotypes. The method has been shown to detect errors introduced into CEPH pedigrees and an error in a larger experimental pedigree (retinitis pigmentosa). While the method was designed to detect typing errors, it is sufficiently general to detect any relatively unlikely genotype and can therefore also be used to detect pedigree errors.
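As a much-simplified illustration of the idea, the sketch below computes a toy error-detection index for a single parent-offspring trio at one biallelic marker: the log ratio of the most likely alternative child genotype to the observed one under Mendelian transmission, so that large values flag unlikely (possibly mistyped) genotypes. The authors' index is computed from the full pedigree likelihood; the genotype coding, error model, and function names here are assumptions for the example.

```python
import numpy as np

def transmission_prob(child, mother, father):
    """Probability of a child's biallelic genotype (coded as the count of
    allele 'a': 0, 1 or 2) under Mendelian transmission from the parents."""
    def gamete_prob(g):                  # P(transmit allele 'a') given genotype 0/1/2
        return {0: 0.0, 1: 0.5, 2: 1.0}[g]
    pm, pf = gamete_prob(mother), gamete_prob(father)
    probs = {0: (1 - pm) * (1 - pf),
             1: pm * (1 - pf) + (1 - pm) * pf,
             2: pm * pf}
    return probs[child]

def error_index(child, mother, father):
    """Toy error-detection index for one trio at one marker: the log ratio of
    the most likely alternative child genotype to the observed one.  Large
    values flag genotypes that are unlikely given the parents."""
    p_obs = transmission_prob(child, mother, father)
    p_best = max(transmission_prob(g, mother, father) for g in (0, 1, 2))
    if p_obs == 0.0:
        return np.inf                    # Mendelian inconsistency: certain error
    return np.log(p_best / p_obs)

# Toy usage: child 'aa' (2) with parents 'Aa' (1) and 'AA' (0)
print(error_index(child=2, mother=1, father=0))   # inf  -> flagged as an error
print(error_index(child=1, mother=1, father=0))   # 0.0  -> consistent genotype
```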


2004
Vol 61 (7)
pp. 1269-1284
Author(s):  
RIC Chris Francis ◽  
Steven E Campana

In 1985, Boehlert (Fish. Bull. 83: 103–117) suggested that fish age could be estimated from otolith measurements. Since that time, a number of inferential techniques have been proposed and tested in a range of species. A review of these techniques shows that all are subject to at least one of four types of bias. In addition, they all focus on assigning ages to individual fish, whereas the estimation of population parameters (particularly proportions at age) is usually the goal. We propose a new flexible method of inference based on mixture analysis, which avoids these biases and makes better use of the data. We argue that the most appropriate technique for evaluating the performance of these methods is a cost–benefit analysis that compares the cost of the estimated ages with that of the traditional annulus count method. A simulation experiment is used to illustrate both the new method and the cost–benefit analysis.
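To illustrate the mixture idea, the sketch below fits a finite Gaussian mixture to simulated otolith weights and reads the mixing proportions off as estimated proportions at age, without assigning an age to any individual fish. The number of age classes, the Gaussian component form, and the simulated data are assumptions for the example, not the paper's specification.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Simulate otolith weights from three age classes (hypothetical parameters)
rng = np.random.default_rng(0)
true_props = [0.5, 0.3, 0.2]                        # proportions at ages 1, 2, 3
means, sds = [10.0, 16.0, 21.0], [1.5, 2.0, 2.5]    # mean and s.d. of weight by age
ages = rng.choice(3, size=500, p=true_props)
weights = rng.normal([means[a] for a in ages], [sds[a] for a in ages])

# Fit the mixture and report the mixing proportions as the proportions at age
gm = GaussianMixture(n_components=3, random_state=0).fit(weights.reshape(-1, 1))
order = np.argsort(gm.means_.ravel())               # sort components by mean weight
print("estimated proportions at age:", np.round(gm.weights_[order], 3))
```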

