On fuzzy familywise error rate and false discovery rate procedures for discrete distributions

Biometrika ◽  
2009 ◽  
Vol 96 (1) ◽  
pp. 201-211 ◽  
Author(s):  
E. Kulinskaya ◽  
A. Lewin


Genetics ◽  
2002 ◽  
Vol 161 (2) ◽  
pp. 905-914 ◽  
Author(s):  
Hakkyo Lee ◽  
Jack C M Dekkers ◽  
M Soller ◽  
Massoud Malek ◽  
Rohan L Fernando ◽  
...  

Abstract: Controlling the false discovery rate (FDR) has been proposed as an alternative to controlling the genomewise error rate (GWER) for detecting quantitative trait loci (QTL) in genome scans. The objective here was to implement FDR in the context of regression interval mapping for multiple traits. Data on five traits from an F2 swine breed cross were used. FDR was implemented using tests at every 1 cM (FDR1) and using tests with the highest test statistic for each marker interval (FDRm). For the latter, a method was developed to predict comparison-wise error rates. At low error rates, FDR1 behaved erratically; FDRm was more stable but gave similar significance thresholds and number of QTL detected. At the same error rate, methods to control FDR gave less stringent significance thresholds and more QTL detected than methods to control GWER. Although testing across traits had limited impact on FDR, single-trait testing was recommended because there is no theoretical reason to pool tests across traits for FDR. FDR based on FDRm was recommended for QTL detection in interval mapping because it provides significance tests that are meaningful, yet not overly stringent, such that a more complete picture of QTL is revealed.
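As a rough illustration of why FDR control yields less stringent thresholds than genomewise (familywise) control, the sketch below compares a Bonferroni-style per-test cut-off with the Benjamini-Hochberg step-up procedure on simulated per-position p-values. It is not the FDR1/FDRm implementation of the paper; the number of positions, the simulated QTL effects, and the 0.05 error rate are assumptions made only for this example.

```python
# Generic sketch contrasting a genomewise (Bonferroni-style) threshold with
# Benjamini-Hochberg FDR control on per-position p-values from a genome scan.
# Not the paper's FDR1/FDRm method; positions, effects, and alpha are assumed.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

m = 1000                                  # hypothetical number of 1-cM test positions
z = rng.normal(0.0, 1.0, m)               # null test statistics
z[:30] += rng.uniform(3.0, 5.0, 30)       # a few positions carrying simulated QTL effects
p = 2 * stats.norm.sf(np.abs(z))          # two-sided p-values

alpha = 0.05

# Genomewise error rate control via Bonferroni: very stringent per-test threshold.
gwer_hits = np.flatnonzero(p <= alpha / m)

# Benjamini-Hochberg FDR: step-up comparison of sorted p-values with i * alpha / m.
order = np.argsort(p)
thresh = alpha * np.arange(1, m + 1) / m
below = np.flatnonzero(np.sort(p) <= thresh)
fdr_hits = order[: below[-1] + 1] if below.size else np.array([], dtype=int)

print(f"GWER (Bonferroni) detections: {gwer_hits.size}")
print(f"FDR (Benjamini-Hochberg) detections: {fdr_hits.size}")
```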


Biometrika ◽  
2020 ◽  
Vol 107 (3) ◽  
pp. 761-768 ◽  
Author(s):  
E Dobriban

Summary: Multiple hypothesis testing problems arise naturally in science. This note introduces a new fast closed testing method for multiple testing which controls the familywise error rate. Controlling the familywise error rate is state-of-the-art in many important application areas and is preferred over false discovery rate control for many reasons, including that it leads to stronger reproducibility. The closure principle rejects an individual hypothesis if all global nulls of subsets containing it are rejected using some test statistics. It takes exponential time in the worst case. When the tests are symmetric and monotone, the proposed method is an exact algorithm for computing the closure, is quadratic in the number of tests, and is linear in the number of discoveries. Our framework generalizes most examples of closed testing, such as Holm’s method and the Bonferroni method. As a special case of the method, we propose the Simes and higher criticism fusion test, which is powerful both for detecting a few strong signals and for detecting many moderate signals.
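The note cites Holm's method as a classical special case of closed testing (the familiar shortcut for closure with Bonferroni local tests). A minimal sketch of that shortcut follows; it is not the fast closure algorithm proposed in the paper, and the p-values are invented for illustration.

```python
# Holm's step-down method, the classical shortcut for closed testing with
# Bonferroni local tests. Illustrative only; p-values below are made up.
import numpy as np

def holm(pvals, alpha=0.05):
    """Return a boolean array of rejections under Holm's FWER-controlling method."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    reject = np.zeros(m, dtype=bool)
    for step, idx in enumerate(order):
        # Compare the (step+1)-th smallest p-value with alpha / (m - step).
        if p[idx] <= alpha / (m - step):
            reject[idx] = True
        else:
            break                          # once one hypothesis survives, stop rejecting
    return reject

print(holm([0.001, 0.010, 0.030, 0.400]))  # expected: [ True  True False False]
```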


Author(s):  
Christopher R Tench

Abstract: There are many methods of conducting coordinate-based meta-analysis (CBMA) of neuroimaging studies that have tested a common hypothesis. Results are always clusters indicating anatomical regions that are significantly related to the hypothesis. There are limitations, such as most methods requiring a conservative family-wise error control scheme and being unable to analyse region of interest (ROI) studies, which can be overcome by cluster-wise, rather than voxel-wise, analysis. The false discovery rate error control scheme is a less conservative option suitable for cluster-wise analysis and has the advantage that an easily interpretable error rate is estimated. Furthermore, cluster-wise analysis makes it possible to analyse ROI studies, expanding the pool of data sources. Here a new clustering algorithm for coordinate-based analyses is detailed, along with implementation details for ROI studies.
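The abstract does not spell out the clustering algorithm itself, so the following is only a generic, hypothetical sketch of what "cluster-wise rather than voxel-wise" units of analysis look like: reported peak coordinates pooled across studies, grouped by single-linkage clustering with an arbitrary 10 mm cut-off. It is not the algorithm detailed in the paper, and the coordinates are invented.

```python
# Toy grouping of reported peak coordinates into clusters, so that clusters,
# not voxels, become the unit of analysis. Generic illustration only.
import numpy as np
from scipy.cluster.hierarchy import fclusterdata

coords = np.array([                        # hypothetical (x, y, z) peaks pooled across studies
    [-42.0,  20.0,  4.0], [-40.0,  18.0,  6.0], [-44.0,  22.0,  2.0],
    [ 36.0, -60.0, 48.0], [ 38.0, -58.0, 50.0],
    [  2.0,  50.0, -8.0],
])

labels = fclusterdata(coords, t=10.0, criterion="distance",
                      metric="euclidean", method="single")
print(labels)                              # peaks within ~10 mm share a cluster label
```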


BMC Genetics ◽  
2005 ◽  
Vol 6 (Suppl 1) ◽  
pp. S134 ◽  
Author(s):  
Qiong Yang ◽  
Jing Cui ◽  
Irmarie Chazaro ◽  
L Adrienne Cupples ◽  
Serkalem Demissie

2012 ◽  
Vol 2012 ◽  
pp. 1-10 ◽  
Author(s):  
Chi-Hong Tseng ◽  
Yongzhao Shao

An appropriate sample size is crucial for the success of many studies that involve a large number of comparisons. Sample size formulas for testing multiple hypotheses are provided in this paper. They can be used to determine the sample sizes required to provide adequate power while controlling the familywise error rate or the false discovery rate, to derive the growth rate of sample size with respect to an increasing number of comparisons or a decreasing effect size, and to assess the reliability of study designs. It is demonstrated that practical sample sizes can often be achieved even when adjustments for a large number of comparisons are made, as in many genomewide studies.
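To make the growth-rate claim concrete, here is a minimal sketch using the standard two-sample normal approximation with a Bonferroni-adjusted significance level. These are not the paper's own formulas; the effect size and target power are assumed values chosen for illustration.

```python
# How the required sample size grows as the per-test significance level is
# Bonferroni-adjusted for more comparisons. Standard two-sample z-test
# approximation; effect size and power below are assumptions.
import math
from scipy.stats import norm

def n_per_group(effect, alpha, power):
    """Per-group sample size for a two-sided two-sample z-test of a standardized effect."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return math.ceil(2 * (z_alpha + z_beta) ** 2 / effect ** 2)

effect, power = 0.5, 0.8                   # assumed standardized difference and target power
for m in (1, 100, 10_000, 1_000_000):      # number of simultaneous comparisons
    print(m, n_per_group(effect, alpha=0.05 / m, power=power))
```

Even at a million comparisons, the required per-group size in this toy calculation grows only a few-fold relative to a single test, consistent with the abstract's point that practical sample sizes remain achievable.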


Author(s):  
Jeong-Seok Choi

Multiple testing refers to the simultaneous testing of more than one hypothesis. When many tests are conducted at the same time, it becomes more likely that at least one true null hypothesis is rejected. If individual hypothesis decisions are based on unadjusted <i>p</i>-values, it is usually more likely that some of the true null hypotheses will be rejected. To address this problem, various studies have attempted to preserve power while taking into account the family-wise error rate or the false discovery rate and the statistics required for testing hypotheses. This article discusses methods that account for the multiplicity issue and introduces various statistical techniques.
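A short sketch of the kinds of adjustments such an article typically surveys: Bonferroni and Holm control the family-wise error rate, while Benjamini-Hochberg controls the false discovery rate. The example uses statsmodels' multipletests on invented p-values and is not tied to any particular method from the article.

```python
# Compare common multiplicity adjustments on a set of made-up p-values.
from statsmodels.stats.multitest import multipletests

pvals = [0.001, 0.008, 0.020, 0.041, 0.300]

for method in ("bonferroni", "holm", "fdr_bh"):
    reject, adjusted, _, _ = multipletests(pvals, alpha=0.05, method=method)
    print(f"{method:>10}: rejected {reject.sum()} of {len(pvals)}, "
          f"adjusted p = {[round(p, 3) for p in adjusted]}")
```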


BMC Genetics ◽  
2005 ◽  
Vol 6 (Suppl 1) ◽  
pp. S23 ◽  
Author(s):  
Ritwik Sinha ◽  
Moumita Sinha ◽  
George Mathew ◽  
Robert C Elston ◽  
Yuqun Luo
