Optimal testing of statistical hypotheses and multiple familywise error rates

Filomat ◽  
2016 ◽  
Vol 30 (3) ◽  
pp. 681-688
Author(s):  
Farshin Hormozinejad

In this article the author considers statistical hypothesis testing for making a decision among hypotheses concerning many families of probability distributions. The statistician would like to control the overall error rate in order to draw statistically valid conclusions from each test, while being as efficient as possible. The familywise error (FWE) rate metric and the hypothesis test procedure are generalized to control both the type I and type II FWEs. The proposed procedure is simultaneously more reliable and less conservative in its error control than fixed-sample and other recently proposed sequential procedures. The characteristics of logarithmically asymptotically optimal (LAO) hypothesis testing are also studied. The purpose of the research is to express the optimal functional relation among the reliabilities of LAO hypothesis testing and to assess it with the FWE metric.
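The abstract does not reproduce the procedure itself. As a minimal illustration of what controlling a type I familywise error rate means, the Python sketch below applies a Bonferroni correction to a family of tests; the data and function name are hypothetical, and this is a stand-in rather than the author's sequential procedure.

```python
import numpy as np
from scipy import stats

def bonferroni_fwe(p_values, alpha=0.05):
    """Reject each hypothesis whose p-value falls below alpha / m.

    Controls the probability of at least one false rejection
    (the type I familywise error rate) at level alpha.
    """
    m = len(p_values)
    return [p <= alpha / m for p in p_values]

# Hypothetical family of five one-sample t-tests against mean 0.
rng = np.random.default_rng(0)
samples = [rng.normal(loc=mu, scale=1.0, size=30) for mu in (0, 0, 0.8, 0, 1.2)]
p_values = [stats.ttest_1samp(s, popmean=0).pvalue for s in samples]
print(bonferroni_fwe(p_values))  # e.g. [False, False, True, False, True]
```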

2016 ◽  
Vol 09 (03) ◽  
pp. 1650050
Author(s):  
Farshin Hormozinejad

Two-stage testing of multiple statistical hypotheses, with the possibility of rejecting the decision, is considered for making a choice between hypotheses concerning a pair of groups of probability distributions: in the first stage one group of distributions is distinguished, and in the second stage the true distribution is identified within that group. The characteristics of logarithmically asymptotically optimal (LAO) hypothesis testing with the possibility of decision rejection are described, and the matrix of optimal asymptotic interdependencies of all pairs of the error probability exponents (reliabilities) is studied. The goal of the research is to express the optimal functional relation between the reliabilities of two-stage LAO hypothesis testing and to compare it with the case of similar one-stage testing.
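As a reading aid only, here is a minimal Python sketch of a two-stage decision of this general shape, assuming discrete distributions and likelihood-based selection; it is not the paper's LAO procedure, and the margin-based rejection rule is a hypothetical stand-in for the formal rejection option.

```python
import numpy as np

def two_stage_test(sample, group_a, group_b, reject_margin=0.0):
    """Two-stage decision: first select a group of candidate pmfs by
    maximum log-likelihood, then select the best-fitting pmf inside
    the chosen group. Returns None (decision rejected) when the two
    groups are indistinguishable up to `reject_margin`.
    """
    def loglik(p):
        # log-likelihood of an i.i.d. sample under pmf p over {0, ..., k-1}
        return np.sum(np.log(p[sample]))

    best_a = max(group_a, key=loglik)
    best_b = max(group_b, key=loglik)
    gap = loglik(best_a) - loglik(best_b)
    if abs(gap) <= reject_margin:       # stage 1: no reliable group choice
        return None
    winner = group_a if gap > 0 else group_b
    return max(winner, key=loglik)      # stage 2: pick within the group

# Hypothetical groups of pmfs over a 3-letter alphabet.
group_a = [np.array([0.7, 0.2, 0.1]), np.array([0.6, 0.3, 0.1])]
group_b = [np.array([0.2, 0.3, 0.5]), np.array([0.1, 0.4, 0.5])]
rng = np.random.default_rng(1)
sample = rng.choice(3, size=100, p=[0.65, 0.25, 0.10])
print(two_stage_test(sample, group_a, group_b, reject_margin=5.0))
```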


2018 ◽  
pp. 110-114
Author(s):  
Evgueni Haroutunian ◽  
Aram Yesayan ◽  
Narine Harutyunyan

Multiple statistical hypothesis testing with the possibility of rejecting the decision is considered for a model consisting of two dependent objects characterized by a joint discrete probability distribution. The matrix of error probability exponents (reliabilities) of asymptotically optimal tests is studied.


Author(s):  
K. J. KACHIASHVILI

There are different methods of statistical hypothesis testing.1–4 Among them is the Bayesian approach. A generalization of the Bayesian rule for testing many hypotheses is given below. It consists in increasing the dimensionality of the decision rule with respect to the number of tested hypotheses, which allows decisions to be made in a more differentiated way than in the classical case, and in stating a constrained optimization problem instead of an unconstrained one, which makes it possible to guarantee bounds on the errors of rejecting true decisions, the key point in solving a number of practical problems. These generalizations are given both for a set of simple hypotheses, each containing one point of the space, and for hypotheses containing a finite set of separated points of the space.
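A simplified sketch of the key idea, that a higher-dimensional decision rule may accept several hypotheses or none rather than forcing a single choice: the posterior-threshold rule below is a hypothetical stand-in, not Kachiashvili's constrained optimization, and all names and data are illustrative.

```python
import numpy as np
from scipy import stats

def constrained_bayes_decision(x, means, priors, threshold=0.7):
    """Posterior-threshold decision rule for many simple hypotheses.

    Unlike the classical Bayes rule (accept the single hypothesis with
    the largest posterior), each hypothesis is accepted when its
    posterior exceeds `threshold`, so the rule may accept one
    hypothesis, several, or none -- a more differentiated decision.
    """
    likelihoods = np.array([stats.norm.pdf(x, loc=m, scale=1.0) for m in means])
    posteriors = likelihoods * priors
    posteriors /= posteriors.sum()
    return [i for i, p in enumerate(posteriors) if p >= threshold]

# Hypothetical: three simple hypotheses about a normal mean.
means = [-2.0, 0.0, 2.0]
priors = np.array([1 / 3, 1 / 3, 1 / 3])
print(constrained_bayes_decision(x=0.1, means=means, priors=priors))  # [1]: accept H2 only
print(constrained_bayes_decision(x=1.0, means=means, priors=priors))  # []: no decision reached
```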


Author(s):  
Timo Kuosmanen ◽  
Natalia Kuosmanen

Sustainable Value Analysis (SVA) [F. Figge, T. Hahn, Ecol. Econ. 48 (2004) 173-187] is a method for measuring sustainability performance consistent with the constant capital rule and strong sustainability. SVA compares the eco-efficiency of a firm relative to some benchmark. The choice of the benchmark implies some assumptions regarding the underlying production technology. This paper presents a rigorous examination of the role of benchmark technology in SVA. We show that Figge and Hahn's formula for calculating sustainable value implies a peculiar linear benchmark technology. We present a generalized formulation of sustainable value that is not restricted to any particular functional form and allows the benchmark technology to be estimated from empirical data. Our generalized SVA formulation reveals a direct link between SVA and frontier approaches to environmental performance measurement and facilitates statistical hypothesis testing concerning the benchmark.
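For readers unfamiliar with SVA, the Figge-Hahn sustainable value is commonly stated as SV = (1/R) Σ_r x_r (y/x_r − y*/x*_r): the firm's use x_r of each of R resources, weighted by the gap between the firm's eco-efficiency y/x_r and the benchmark's y*/x*_r. A minimal Python sketch under that reading (variable names hypothetical); its linearity in the resource amounts is exactly the restrictive benchmark technology the paper examines.

```python
import numpy as np

def sustainable_value(value_added, resources, bench_value, bench_resources):
    """Figge-Hahn sustainable value: average over resources of the
    firm's resource use times the gap between the firm's eco-efficiency
    (value created per unit of resource) and the benchmark's.
    """
    resources = np.asarray(resources, dtype=float)
    bench_resources = np.asarray(bench_resources, dtype=float)
    firm_eff = value_added / resources          # firm value per resource unit
    bench_eff = bench_value / bench_resources   # benchmark opportunity cost
    return np.mean(resources * (firm_eff - bench_eff))

# Hypothetical firm using two resources (e.g. CO2 and water).
print(sustainable_value(value_added=120.0, resources=[10.0, 40.0],
                        bench_value=1000.0, bench_resources=[200.0, 500.0]))
```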


2020 ◽  
Author(s):  
Janet Aisbett ◽  
Daniel Lakens ◽  
Kristin Sainani

Magnitude-based inference (MBI) was widely adopted by sport science researchers as an alternative to null hypothesis significance tests. It has been criticized for lacking a theoretical framework, mixing Bayesian and frequentist thinking, and encouraging researchers to run small studies with high Type I error rates. MBI terminology describes the position of confidence intervals in relation to smallest meaningful effect sizes. We show these positions correspond to combinations of one-sided tests of hypotheses about the presence or absence of meaningful effects, and formally describe MBI as a multiple decision procedure. MBI terminology operates as if tests are conducted at multiple alpha levels. We illustrate how error rates can be controlled by limiting each one-sided hypothesis test to a single alpha level. To provide transparent error control in a Neyman-Pearson framework and encourage the use of standard statistical software, we recommend replacing MBI with one-sided tests against smallest meaningful effects, or pairs of such tests as in equivalence testing. Researchers should pre-specify their hypotheses and alpha levels, perform a priori sample size calculations, and justify all assumptions. Our recommendations show researchers what tests to use and how to design and report their statistical analyses to accord with standard frequentist practice.
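The recommended replacement is readily run in standard software. The sketch below shows a pair of one-sided Welch t-tests against a smallest meaningful effect (a TOST-style equivalence test) in Python with scipy; the data and function name are hypothetical, not the authors' code.

```python
import numpy as np
from scipy import stats

def minimal_effect_tests(x, y, smallest_effect):
    """Pair of one-sided Welch t-tests against the smallest meaningful
    effect, as in equivalence testing (TOST).

    Returns one-sided p-values for the two null hypotheses:
      H0_lower: mean(x) - mean(y) <= -smallest_effect
      H0_upper: mean(x) - mean(y) >=  smallest_effect
    Rejecting both at level alpha establishes equivalence.
    """
    # Shift one sample so each bound becomes a standard one-sided test.
    p_lower = stats.ttest_ind(x + smallest_effect, y, equal_var=False,
                              alternative='greater').pvalue
    p_upper = stats.ttest_ind(x - smallest_effect, y, equal_var=False,
                              alternative='less').pvalue
    return p_lower, p_upper

rng = np.random.default_rng(2)
x = rng.normal(0.05, 1.0, 50)
y = rng.normal(0.00, 1.0, 50)
p_lo, p_hi = minimal_effect_tests(x, y, smallest_effect=0.5)
print(f"equivalent at alpha=0.05: {max(p_lo, p_hi) < 0.05}")
```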


2020 ◽  
Author(s):  
Corey Peltier ◽  
Reem Muharib ◽  
April Haas ◽  
Art Dowdy

Single-case research designs (SCRDs) are used to evaluate functional relations between an independent variable and dependent variable(s). SCRDs are frequently used when analyzing data related to autism spectrum disorder (ASD); namely, they allow for empirical evidence in support of practices that improve socially significant outcomes for individuals diagnosed with ASD. To determine a functional relation in SCRDs, a time-series graph is constructed and visual analysts evaluate data patterns. Preliminary evidence suggests that the approach used to scale the ordinate (i.e., y-axis) and the proportion of the x-axis length to y-axis height (i.e., the data points per x- to y-axis ratio, DPPXYR) impact visual analysts' decisions regarding a functional relation and the magnitude of treatment effect, resulting in an increased likelihood of Type I errors. The purpose of this systematic review was to evaluate all time-series graphs published in the last decade (i.e., 2010-2020) in four premier journals in the field of ASD: Journal of Autism and Developmental Disorders, Research in Autism Spectrum Disorders, Autism, and Focus on Autism and Other Developmental Disabilities. The systematic search yielded 348 articles including 2,675 graphs. We identified large variation across and within types of SCRDs for the standardized X:Y ratio and DPPXYR. In addition, 73% of graphs fell below a DPPXYR of 0.14, suggesting an elevated risk of Type I errors. A majority of graphs used an appropriate ordinate scaling method that would not increase Type I error rates. Implications for future research and practice are provided.
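Assuming DPPXYR is computed as the plot's physical x-to-y aspect ratio divided by the number of plotted data points (an assumption based on the name; the review's exact operationalization may differ), a minimal sketch of the 0.14 screen:

```python
def dppxyr(x_axis_length, y_axis_height, n_data_points):
    """Data points per x- to y-axis ratio: the physical x:y aspect
    ratio of the plot divided by the number of plotted data points.
    Per the review, values below roughly 0.14 compress the data path
    and raise the risk of Type I errors in visual analysis.
    """
    return (x_axis_length / y_axis_height) / n_data_points

# Hypothetical graph: 12 cm wide, 6 cm tall, 20 plotted points.
ratio = dppxyr(12.0, 6.0, 20)
print(f"DPPXYR = {ratio:.3f} -> {'flagged' if ratio < 0.14 else 'ok'}")
```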


Author(s):  
Richard McCleary ◽  
David McDowall ◽  
Bradley J. Bartos

Chapter 6 addresses the sub-category of internal validity defined by Shadish et al. as statistical conclusion validity, or "validity of inferences about the correlation (covariance) between treatment and outcome." The common threats to statistical conclusion validity can arise, or become plausible, through either model misspecification or hypothesis testing. The risk of a serious model misspecification is inversely proportional to the length of the time series, for example, and so is the risk of misstating the Type I and Type II error rates. Threats to statistical conclusion validity arise from both the classical and the modern hybrid significance testing structures; the serious threats that weigh heavily in p-value tests are shown to be undefined in Bayesian tests. While the particularly vexing threats raised by modern null hypothesis testing are resolved by eliminating the modern null hypothesis test, threats to statistical conclusion validity would inevitably persist and new threats would arise.

