Bonferroni's Inequality
Recently Published Documents

TOTAL DOCUMENTS: 6 (FIVE YEARS: 1)
H-INDEX: 2 (FIVE YEARS: 0)

2019 · Vol 65 (2) · pp. 155-172
Author(s): Anna Staszewska-Bystrova

Joint prediction bands are often constructed using Bonferroni's inequality. A drawback of such bands is that they can be wide and have excessive coverage probability. The paper proposes two refinements to the basic Bonferroni method of constructing bootstrap prediction bands, based on higher-order inequalities and on optimization of the band's width. The procedures are applied to the problem of predicting univariate autoregressive processes, and their properties are studied by means of Monte Carlo experiments. It is shown that in many scenarios the proposed methods yield relatively narrow prediction bands with the desired coverage probabilities.
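
For reference, here is a minimal sketch of the basic Bonferroni bootstrap band that the proposed refinements start from: fit an AR(1) model, bootstrap future paths by resampling residuals, and take pointwise quantiles at level alpha/H per horizon so that the joint coverage is at least 1 − alpha. The model order, sample size, horizon, and bootstrap settings are illustrative assumptions, not the article's exact design.

```python
# Minimal sketch: basic Bonferroni bootstrap prediction band for an AR(1)
# process. All settings below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Simulate an AR(1) sample: y_t = phi * y_{t-1} + e_t
phi, T, H, alpha = 0.6, 200, 8, 0.05
e = rng.standard_normal(T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = phi * y[t - 1] + e[t]

# OLS estimate of phi and centered residuals
phi_hat = np.sum(y[1:] * y[:-1]) / np.sum(y[:-1] ** 2)
resid = y[1:] - phi_hat * y[:-1]
resid -= resid.mean()

# Bootstrap future paths by resampling residuals
B = 2000
paths = np.empty((B, H))
for b in range(B):
    y_prev = y[-1]
    for h in range(H):
        y_prev = phi_hat * y_prev + rng.choice(resid)
        paths[b, h] = y_prev

# Bonferroni band: split alpha equally across the H horizons, so each
# pointwise interval uses alpha/H and the joint coverage is >= 1 - alpha.
lo = np.quantile(paths, alpha / (2 * H), axis=0)
hi = np.quantile(paths, 1 - alpha / (2 * H), axis=0)
print(np.c_[lo, hi])
```

The article's refinements would replace this equal alpha/H split with higher-order inequalities or a width-optimized allocation across horizons.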


2013 · Vol 4 (1) · pp. 20
Author(s): Robert S. Rodger, Mark Roberts

The number of methods for evaluating, and possibly making statistical decisions about, null contrasts - or their small subset, multiple comparisons - has grown extensively since the early 1950s. That demonstrates how important the subject is, but most of the growth consists of modest variations of the early methods. This paper examines nine fairly basic procedures, six of which are methods designed to evaluate contrasts chosen post hoc, i.e., after an examination of the test data. Three of these use experimentwise or familywise type 1 error rates (Scheffé 1953; Tukey 1953; Newman-Keuls 1939 and 1952), two use decision-based type 1 error rates (Duncan 1951; Rodger 1975a), and one (Fisher's LSD 1935) uses a mixture of the two type 1 error rate definitions. The other three methods examined are for evaluating, and possibly deciding about, a limited number of null contrasts that have been chosen independently of the sample data, preferably before the data are collected. One of these (planned t-tests) uses decision-based type 1 error rates, and the other two (one based on Bonferroni's Inequality 1936, the other Dunnett's 1964 Many-One procedure) use a familywise type 1 error rate. The use of these different type 1 error rate definitions creates quite large discrepancies in the capacities of the methods to detect true non-zero effects in the contrasts being evaluated. This article describes those discrepancies in power and, especially, how they are exacerbated by increases in the size of an investigation (i.e., an increase in J, the number of samples being examined). It is also true that the capacity of a multiple contrast procedure to 'unpick' 'true' differences from the sample data is influenced by the type of contrast the procedure permits. For example, multiple range procedures (such as those of Newman-Keuls and Duncan) permit only comparisons (i.e., two-group differences), and that greatly limits their discriminating capacity (which is not, technically speaking, their power). Many methods (those of Scheffé, Tukey's HSD, Newman-Keuls, Fisher's LSD, Bonferroni and Dunnett) place their emphasis on one particular question: "Are there any differences at all among the groups?" Some other procedures (those of Duncan, Rodger and Planned Contrasts) concentrate on individual contrasts and so are more concerned with how many false null contrasts the method can detect. This results in two basically different definitions of detection capacity. Finally, there is a categorical difference between what post hoc methods and methods evaluating pre-planned contrasts can find. The success of the latter depends on how wisely (or how honestly well informed) the user has been in planning the limited number of statistically revealing contrasts to test. That can greatly affect the method's discriminating success, but it is often not included in power evaluations. These matters are elaborated upon as they arise in the exposition below. DOI: 10.2458/azu_jmmss_v4i1_rodger
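
To make the familywise-versus-decision-based distinction concrete, the sketch below compares the per-contrast t cutoff at an unadjusted level alpha with the Bonferroni cutoff at alpha/k for k planned contrasts, and the resulting power against a fixed noncentrality. The numbers (k = 6, 30 error degrees of freedom, noncentrality 3) are illustrative assumptions, not values from the article.

```python
# Minimal sketch: per-decision vs. familywise (Bonferroni) t cutoffs and the
# resulting power for one planned contrast; settings are hypothetical.
from scipy import stats

alpha, k, df = 0.05, 6, 30                              # k planned contrasts
t_per_decision = stats.t.ppf(1 - alpha / 2, df)         # decision-based cutoff
t_bonferroni = stats.t.ppf(1 - alpha / (2 * k), df)     # familywise cutoff

# Two-sided power against a true contrast with noncentrality parameter 3
ncp = 3.0
power_unadj = (1 - stats.nct.cdf(t_per_decision, df, ncp)
               + stats.nct.cdf(-t_per_decision, df, ncp))
power_bonf = (1 - stats.nct.cdf(t_bonferroni, df, ncp)
              + stats.nct.cdf(-t_bonferroni, df, ncp))
print(f"cutoffs: {t_per_decision:.2f} vs {t_bonferroni:.2f}")
print(f"power:   {power_unadj:.3f} vs {power_bonf:.3f}")
```

As J (and with it the number of contrasts) grows, the Bonferroni cutoff increases and the power gap widens, which is the kind of discrepancy the article documents.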


1987 · Vol 12 (4) · pp. 307-308
Author(s): Shigeru Fujibayashi, Kazutoshi Okano, Chieko Nawa, Satoshi Suzuki, Kazuko Sasagawa, ...

1986 · Vol 11 (3) · pp. 197-205
Author(s): John F. Bell

This paper demonstrates a method, derived by Khuri (1981), of constructing simultaneous confidence intervals for functions of expected values of mean squares obtained when analyzing a balanced design by a random-effects linear model. The method may be applied to obtain confidence intervals for the variance components and other linear functions of the expected mean squares used in generalizability theory, with the probability of simultaneous coverage guaranteed to be greater than or equal to the specified confidence coefficient. The Khuri intervals are compared with the approximate intervals obtained by using Satterthwaite's (1941, 1946) method in conjunction with Bonferroni's inequality.
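
A minimal sketch of the approximate approach named in the abstract (Satterthwaite intervals for linear functions of expected mean squares, made simultaneous with Bonferroni's inequality) for a balanced one-way random-effects design; the mean squares and design sizes below are hypothetical, and this is not Khuri's exact construction.

```python
# Minimal sketch: Satterthwaite intervals for two variance components of a
# balanced one-way random-effects design, made simultaneous via Bonferroni.
from scipy import stats

a, n = 10, 5                        # a groups, n observations per group
ms_between, ms_within = 12.0, 3.0   # observed mean squares (hypothetical)
df_between, df_within = a - 1, a * (n - 1)

alpha, m = 0.05, 2                  # two components -> each interval at alpha/m
alpha_star = alpha / m

def satterthwaite_ci(coefs, ms, dfs, level_alpha):
    """Approximate CI for a linear combination of expected mean squares."""
    est = sum(c * ms_i for c, ms_i in zip(coefs, ms))
    df_hat = est ** 2 / sum((c * ms_i) ** 2 / d
                            for c, ms_i, d in zip(coefs, ms, dfs))
    lo = df_hat * est / stats.chi2.ppf(1 - level_alpha / 2, df_hat)
    hi = df_hat * est / stats.chi2.ppf(level_alpha / 2, df_hat)
    return est, (lo, hi)

# sigma^2_group = (MS_between - MS_within) / n ;  sigma^2_error = MS_within
print(satterthwaite_ci([1 / n, -1 / n], [ms_between, ms_within],
                       [df_between, df_within], alpha_star))
print(satterthwaite_ci([1.0], [ms_within], [df_within], alpha_star))
```

With m = 2 components, each interval is computed at level alpha/2, so by Bonferroni's inequality the simultaneous coverage is at least 1 − alpha.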


Two problems are addressed: (a) the detection of outliers in (Doppler satellite) observations, and (b) the testing of coordinates in (Doppler satellite) networks. In both problems, confidence regions of the 'out of context' and 'within context' varieties are developed, and it is shown that the latter are in general about 1.5 times larger than the former (conventional) confidence regions. On the basis of this comparison, it is speculated that good data and results are being erroneously rejected. It is also demonstrated, through the use of Bonferroni's inequality, that discarding covariances among residuals and discarding cross-covariances among station coordinates each result in a confidence level greater than 1 − α, the conventionally chosen level. As a final development, a link is made not only between univariate and multivariate testing for outliers among observations but also between testing in observation space and testing in parameter space. The implications of these developments for Doppler satellite positioning are given.
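
For illustration, the sketch below applies a Bonferroni adjustment to outlier screening of standardized residuals: testing each of the n residuals at level alpha/n keeps the overall probability of falsely flagging any residual at or below alpha, in the same spirit as the paper's use of Bonferroni's inequality. The residuals and critical values are hypothetical and unrelated to the Doppler example.

```python
# Minimal sketch: unadjusted vs. Bonferroni-adjusted outlier flags for
# standardized residuals; data are simulated, not from the paper.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, alpha = 50, 0.05
w = rng.standard_normal(n)          # standardized residuals (hypothetical)
w[7] = 4.2                          # plant one gross error

# Per-residual critical value at alpha (no adjustment) vs alpha/n (Bonferroni)
crit_single = stats.norm.ppf(1 - alpha / 2)
crit_bonf = stats.norm.ppf(1 - alpha / (2 * n))

print("unadjusted flags :", np.flatnonzero(np.abs(w) > crit_single))
print("Bonferroni flags :", np.flatnonzero(np.abs(w) > crit_bonf))
```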

