Multiple hypothesis evaluation in auditing

2002 ◽  
Vol 42 (3) ◽  
pp. 251-277 ◽  
Author(s):  
Rajendra P. Srivastava ◽  
Arnold Wright ◽  
Theodore J. Mock
AERA Open ◽  
2021 ◽  
Vol 7 ◽  
pp. 233285842110285
Author(s):  
Tom Rosman ◽  
Samuel Merk

We investigate in-service teachers’ reasons for trust and distrust in educational research compared to research in general. Building on previous research on a so-called “smart but evil” stereotype regarding educational researchers, three sets of confirmatory hypotheses were preregistered. First, we expected that teachers would emphasize expertise—as compared with benevolence and integrity—as a stronger reason for trust in educational researchers. Moreover, we expected that this pattern would not only apply to educational researchers, but that it would generalize to researchers in general. Furthermore, we hypothesized that the pattern could also be found in the general population. Following a pilot study aiming to establish the validity of our measures (German general population sample; N = 504), hypotheses were tested in an online study with N = 414 randomly sampled German in-service teachers. Using the Bayesian informative hypothesis evaluation framework, we found empirical support for five of our six preregistered hypotheses.
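The informative-hypothesis approach mentioned in this abstract can be illustrated with a small simulation. The sketch below is not the authors' actual analysis: the `support_for_ordering` helper, the group means, standard deviations, and sample sizes are all hypothetical, and a normal approximation to each posterior mean stands in for a full Bayesian model. It estimates the posterior probability that one reason for trust (expertise) is rated above the others, which is exactly the kind of order constraint an informative hypothesis encodes.

```python
import numpy as np

rng = np.random.default_rng(1)

def support_for_ordering(means, sds, ns, n_draws=100_000):
    """Posterior probability that the first group's mean exceeds all
    others, using a normal approximation mu_j ~ N(xbar_j, s_j/sqrt(n_j))
    to each posterior mean (flat prior on the means)."""
    draws = np.column_stack([
        rng.normal(m, s / np.sqrt(n), n_draws)
        for m, s, n in zip(means, sds, ns)
    ])
    # Fraction of joint posterior draws satisfying the order constraint
    return float(np.mean(draws[:, 0] > draws[:, 1:].max(axis=1)))

# Hypothetical mean trust ratings: expertise vs. benevolence vs. integrity
p_expertise_highest = support_for_ordering(
    means=[4.1, 3.6, 3.7], sds=[0.9, 1.0, 0.95], ns=[414, 414, 414]
)
```

Dividing this posterior probability by the prior probability of the ordering (1/3 under exchangeable means) gives a simple Bayes factor in favor of the informative hypothesis, which is the spirit of the evaluation framework the study uses.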


Author(s):  
Chang Yu ◽  
Daniel Zelterman

We develop the distribution for the number of hypotheses found to be statistically significant using the rule from Simes (Biometrika 73: 751–754, 1986) for controlling the family-wise error rate (FWER). We find the distribution of the number of statistically significant p-values under the null hypothesis and show this follows a normal distribution under the alternative. We propose a parametric distribution Ψ_I(·) to model the marginal distribution of p-values sampled from a mixture of null uniform and non-uniform distributions under different alternative hypotheses. The Ψ_I distribution is useful when there are many different alternative hypotheses and these are not individually well understood. We fit Ψ_I to data from three cancer studies and use it to illustrate the distribution of the number of notable hypotheses observed in these examples. We model dependence in sampled p-values using a latent variable. These methods can be combined to illustrate a power analysis in planning a larger study on the basis of a smaller pilot experiment.
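The Simes rule this abstract builds on is easy to state concretely: sort the m p-values and reject the global null hypothesis if any p_(i) ≤ iα/m. A minimal sketch follows; the `simes_test` helper and the example p-values are illustrative, not taken from the paper.

```python
import numpy as np

def simes_test(pvals, alpha=0.05):
    """Simes (1986) global test: reject the joint null hypothesis if
    any sorted p-value p_(i) satisfies p_(i) <= i * alpha / m."""
    p = np.sort(np.asarray(pvals, dtype=float))
    m = len(p)
    thresholds = alpha * np.arange(1, m + 1) / m
    return bool(np.any(p <= thresholds))

# Illustrative p-values: a few small ones mixed with clearly null ones
pvals = [0.001, 0.012, 0.03, 0.2, 0.4, 0.6, 0.8, 0.95]
rejected_global = simes_test(pvals)  # True: 0.001 <= 1 * 0.05 / 8
```

Counting how many p-values fall below their Simes threshold across many simulated experiments is one way to visualize the distribution of significant hypotheses the paper studies.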


Author(s):  
Damian Clarke ◽  
Joseph P. Romano ◽  
Michael Wolf

When considering multiple-hypothesis tests simultaneously, standard statistical techniques will lead to overrejection of null hypotheses unless the multiplicity of the testing framework is explicitly considered. In this article, we discuss the Romano–Wolf multiple-hypothesis correction and document its implementation in Stata. The Romano–Wolf correction (asymptotically) controls the familywise error rate, that is, the probability of rejecting at least one true null hypothesis among a family of hypotheses under test. This correction is considerably more powerful than earlier multiple-testing procedures, such as the Bonferroni and Holm corrections, given that it takes into account the dependence structure of the test statistics by resampling from the original data. We describe a command, rwolf, that implements this correction and provide several examples based on a wide range of models. We document and discuss the performance gains from using rwolf over other multiple-testing procedures that control the familywise error rate.
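The resampling idea behind the Romano–Wolf correction can be sketched outside Stata. The following is a simplified illustration, not the `rwolf` implementation: it tests m zero-mean hypotheses with a centered bootstrap of the max |t| statistic and steps down from the largest observed statistic, which is how the procedure exploits the dependence structure to gain power over Bonferroni and Holm while still controlling the familywise error rate. The function name and the simulated data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def romano_wolf_stepdown(data, n_boot=2000, alpha=0.05):
    """Simplified Romano-Wolf stepdown sketch for H0: mean_j = 0 on each
    column of `data`, using a centered bootstrap to approximate the null
    distribution of the maximum |t| statistic."""
    n, m = data.shape
    t_obs = np.abs(data.mean(0) / (data.std(0, ddof=1) / np.sqrt(n)))
    centered = data - data.mean(0)  # impose the null on the resamples
    boot_t = np.empty((n_boot, m))
    for b in range(n_boot):
        sample = centered[rng.integers(0, n, n)]
        boot_t[b] = np.abs(sample.mean(0) /
                           (sample.std(0, ddof=1) / np.sqrt(n)))
    # Stepdown: compare the largest remaining statistic with the
    # (1 - alpha) quantile of the bootstrap max over remaining hypotheses
    order = np.argsort(t_obs)[::-1]
    rejected = np.zeros(m, dtype=bool)
    remaining = list(order)
    for j in order:
        crit = np.quantile(boot_t[:, remaining].max(axis=1), 1 - alpha)
        if t_obs[j] > crit:
            rejected[j] = True
            remaining.remove(j)  # shrink the max over the next step
        else:
            break  # once one hypothesis survives, all smaller ones do
    return rejected

# Hypothetical data: 200 observations on 5 outcomes, one genuine effect
n, m = 200, 5
data = rng.normal(0.0, 1.0, (n, m))
data[:, 0] += 1.0
rej = romano_wolf_stepdown(data)
```

Because the critical value shrinks as hypotheses are rejected, the stepdown version rejects at least as often as the single-step max-T procedure, at no cost in familywise error control.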

