The Direct Assignment Option as a Modular Design Component: An Example for the Setting of Two Predefined Subgroups

2015
Vol 2015
pp. 1-6
Author(s):
Ming-Wen An
Xin Lu
Daniel J. Sargent
Sumithra J. Mandrekar

Background. A phase II design with an option for direct assignment (stop randomization and assign all patients to experimental treatment based on interim analysis, IA) for a predefined subgroup was previously proposed. Here, we illustrate the modularity of the direct assignment option by applying it to the setting of two predefined subgroups and testing for separate subgroup main effects. Methods. We power the 2-subgroup direct assignment option design with 1 IA (DAD-1) to test for separate subgroup main effects, with assessment of power to detect an interaction in a post-hoc test. Simulations assessed the statistical properties of this design compared to the 2-subgroup balanced randomized design with 1 IA (BRD-1). Different response rates for treatment/control in subgroup 1 (0.4/0.2) and in subgroup 2 (0.1/0.2, 0.4/0.2) were considered. Results. The 2-subgroup DAD-1 preserves power and type I error rate compared to the 2-subgroup BRD-1, while exhibiting reasonable power in a post-hoc test for interaction. Conclusion. The direct assignment option is a flexible design component that can be incorporated into broader design frameworks while maintaining desirable statistical properties, clinical appeal, and logistical simplicity.
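
A minimal Python sketch of the kind of simulation used to assess such designs: empirical power for a subgroup-specific treatment-versus-control comparison under balanced randomization, at the response rates considered above. The per-arm sample size, the one-sided 0.10 level, and the plain two-proportion z-test are illustrative assumptions; the interim analysis and direct-assignment machinery of DAD-1 are not reproduced here.

```python
# Empirical power for a subgroup-specific comparison under balanced randomization,
# using a one-sided two-proportion z-test. n_per_arm and alpha are assumptions for
# illustration only; this is not the DAD-1 / BRD-1 machinery itself.
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

rng = np.random.default_rng(0)
n_per_arm = 45  # assumed per-arm sample size within one subgroup

def empirical_power(p_trt, p_ctl, n_sims=5000, alpha=0.10):
    rejections = 0
    for _ in range(n_sims):
        x_trt = rng.binomial(n_per_arm, p_trt)
        x_ctl = rng.binomial(n_per_arm, p_ctl)
        _, p = proportions_ztest([x_trt, x_ctl], [n_per_arm, n_per_arm],
                                 alternative="larger")
        rejections += p < alpha
    return rejections / n_sims

for label, (pt, pc) in {"subgroup 1, 0.4 vs 0.2": (0.4, 0.2),
                        "subgroup 2, 0.1 vs 0.2": (0.1, 0.2),
                        "subgroup 2, 0.4 vs 0.2": (0.4, 0.2)}.items():
    print(f"{label}: empirical power = {empirical_power(pt, pc):.3f}")
```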

2000
Vol 14 (1)
pp. 1-10
Author(s):
Joni Kettunen
Niklas Ravaja
Liisa Keltikangas-Järvinen

Abstract We examined the use of smoothing to enhance the detection of response coupling from the activity of different response systems. Three different types of moving average smoothers were applied to both simulated interbeat interval (IBI) and electrodermal activity (EDA) time series and to empirical IBI, EDA, and facial electromyography time series. The results indicated that progressive smoothing increased the efficiency of the detection of response coupling but did not increase the probability of Type I error. The power of the smoothing methods depended on the response characteristics. The benefits and use of the smoothing methods to extract information from psychophysiological time series are discussed.
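
A minimal sketch, under assumed simulation settings, of the idea being evaluated: apply progressively heavier moving-average smoothing to two series that share a slow common component, then test their coupling with a correlation. The series, noise levels, and window sizes are illustrative, not the authors' simulation design.

```python
# Progressive moving-average smoothing of two simulated series that share a slow
# component, followed by a correlation test of their coupling. Series construction
# and window sizes are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 300
common = 0.05 * np.cumsum(rng.normal(size=n))            # shared slow component ("coupling")
ibi = 800 + 20 * common + rng.normal(scale=15, size=n)   # interbeat intervals, ms
eda = 5 + 0.3 * common + rng.normal(scale=0.4, size=n)   # electrodermal activity, microsiemens

def moving_average(x, window):
    """Simple unweighted moving-average smoother."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="valid")

for w in (1, 5, 15):                                     # progressively heavier smoothing
    r, p = stats.pearsonr(moving_average(ibi, w), moving_average(eda, w))
    print(f"window = {w:2d}: r = {r:.3f}, p = {p:.4f}")
```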


Author(s):  
Zaheer Ahmed
Alberto Cassese
Gerard van Breukelen
Jan Schepers

Abstract We present a novel method, REMAXINT, that captures the gist of two-way interaction in row by column (i.e., two-mode) data, with one observation per cell. REMAXINT is a probabilistic two-mode clustering model that yields two-mode partitions with maximal interaction between row and column clusters. For estimation of the parameters of REMAXINT, we maximize a conditional classification likelihood in which the random row (or column) main effects are conditioned out. For testing the null hypothesis of no interaction between row and column clusters, we propose a max-F test statistic and discuss its properties. We develop a Monte Carlo approach to obtain its sampling distribution under the null hypothesis. We evaluate the performance of the method through simulation studies. Specifically, for selected values of data size and (true) numbers of clusters, we obtain critical values of the max-F statistic, determine the empirical Type I error rate of the proposed inferential procedure and study its power to reject the null hypothesis. Next, we show that the novel method is useful in a variety of applications by presenting two empirical case studies and end with some concluding remarks.
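
A deliberately small, hypothetical sketch of the Monte Carlo calibration idea behind a max-F test: simulate data under an additive (no-interaction) null, compute for each draw the largest interaction F over all two-cluster row and column partitions, and take an upper quantile as the critical value. The exhaustive 2x2 partition search and the tiny data size are simplifications, not the REMAXINT estimation algorithm.

```python
# Monte Carlo critical value for a max-F statistic over two-mode partitions,
# under an additive (no-interaction) null. Sizes are kept tiny so the exhaustive
# partition search stays fast; this is a schematic illustration only.
import numpy as np
from itertools import product

def interaction_F(X, row_lab, col_lab):
    """F-type ratio of clustered interaction variation to within-cell residual variation."""
    grand = X.mean()
    inter_ss, resid_ss = 0.0, 0.0
    for r in range(2):
        for c in range(2):
            block = X[np.ix_(row_lab == r, col_lab == c)]
            cell = block.mean()
            row_m = X[row_lab == r].mean()
            col_m = X[:, col_lab == c].mean()
            inter_ss += block.size * (cell - row_m - col_m + grand) ** 2
            resid_ss += ((block - cell) ** 2).sum()
    return inter_ss / (resid_ss / (X.size - 4))   # 1 interaction df, N - 4 residual df

def max_F(X):
    """Maximize the interaction F over all non-trivial 2-cluster row/column partitions."""
    n, m = X.shape
    best = 0.0
    for rows in product((0, 1), repeat=n):
        if len(set(rows)) < 2:
            continue
        row_lab = np.array(rows)
        for cols in product((0, 1), repeat=m):
            if len(set(cols)) < 2:
                continue
            best = max(best, interaction_F(X, row_lab, np.array(cols)))
    return best

rng = np.random.default_rng(1)
n, m, n_null = 5, 4, 100                     # small sizes keep the sketch fast
null_draws = [max_F(rng.normal(size=(n, m))) for _ in range(n_null)]
print(f"Monte Carlo 5% critical value of max-F: {np.quantile(null_draws, 0.95):.2f}")
```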


Horticulturae
2019
Vol 5 (3)
pp. 57
Author(s):  
Edward Durner

Most statistical techniques commonly used in horticultural research are parametric tests that are valid only for normal data with homogeneous variances. While parametric tests are robust when the data deviate only slightly from normality, a significant departure from normality leads to reduced power and an increased probability of a type I error. Transformations often used to normalize non-normal data can be time consuming, cumbersome and confusing, and common non-parametric tests are not appropriate for evaluating the interactive effects common in horticultural research. The aligned rank transformation allows non-parametric testing for interactions and main effects using standard ANOVA techniques. It has not been widely adopted due to its rigorous mathematical nature; however, a downloadable tool (ARTool) is now available that performs the math needed for the transformation. This study provides step-by-step instructions for integrating ARTool with the free edition of SAS (SAS University Edition) in an easily employed method for testing normality, transforming data with aligned ranks, and analysing data using standard ANOVAs.
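
A minimal sketch of the aligned rank transform for an interaction, on assumed toy data rather than ARTool or SAS: estimated main effects are stripped from each observation, the aligned values are ranked, and a standard two-way ANOVA is run on those ranks, with only the interaction term interpreted from this particular alignment.

```python
# Aligned rank transform for the interaction on assumed toy data: subtract the
# estimated main effects, rank the aligned values, and run a standard two-way ANOVA
# on the ranks. Factor names and the skewed response are hypothetical.
import numpy as np
import pandas as pd
from scipy.stats import rankdata
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "cultivar": np.repeat(["A", "B"], 20),
    "fert": np.tile(np.repeat(["low", "high"], 10), 2),
})
# skewed (non-normal) response with a built-in cultivar x fertilizer interaction
df["response"] = (rng.exponential(scale=2, size=40)
                  + 2 * ((df["cultivar"] == "B") & (df["fert"] == "high")))

# Align for the interaction: remove the group means of each factor
# (the constant grand-mean shift this introduces is irrelevant to the ranks).
a_eff = df.groupby("cultivar")["response"].transform("mean")
b_eff = df.groupby("fert")["response"].transform("mean")
df["art_rank"] = rankdata(df["response"] - a_eff - b_eff)

# Ordinary factorial ANOVA on the aligned ranks; only the interaction term is interpreted here.
model = ols("art_rank ~ C(cultivar) * C(fert)", data=df).fit()
print(anova_lm(model, typ=2))
```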


2010
Vol 20 (6)
pp. 579-594
Author(s):
Nikki Fernandes
Andrew Stone

Clinical trials investigating the efficacy of two or more doses of an experimental treatment compared to a single reference arm are not uncommon. In such situations, if each dose is compared to the reference arm using an unadjusted significance level, consideration of the familywise Type I error is likely to be required. Furthermore, in trials where two or more comparisons are performed against the same reference arm, the comparisons are inherently correlated. This correlation can be utilised to remove some of the conservativeness of commonly used procedures. This article is intended as a practical guide that should enable calculation of significance levels that fully conserve the Type I error, and it provides graphical presentations that could facilitate their description to non-statisticians.
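
A rough numerical sketch of the point about correlation: with equal allocation, two dose-versus-control z-statistics sharing a reference arm have correlation 0.5, and solving for the per-comparison level that keeps the one-sided familywise error at 0.025 gives a slightly more generous threshold than Bonferroni. The 0.5 correlation and the bivariate-normal model are the usual equal-allocation assumptions, not values from the article.

```python
# Per-comparison significance level for two one-sided dose-vs-control comparisons
# that share a control arm (correlation 0.5 under equal allocation), chosen so the
# familywise one-sided error stays at 0.025, versus the Bonferroni split.
import numpy as np
from scipy.optimize import brentq
from scipy.stats import multivariate_normal, norm

alpha_fw = 0.025                                    # one-sided familywise level
corr = 0.5                                          # correlation induced by the shared control arm
mvn = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, corr], [corr, 1.0]])

def excess_familywise_error(z_crit):
    # P(at least one of the two z-statistics exceeds z_crit) minus the target level
    return (1.0 - mvn.cdf([z_crit, z_crit])) - alpha_fw

z_star = brentq(excess_familywise_error, 1.5, 4.0)  # critical value respecting the correlation
print(f"critical value z* = {z_star:.3f}")
print(f"per-comparison alpha (correlation 0.5): {1 - norm.cdf(z_star):.5f}")
print(f"per-comparison alpha (Bonferroni):      {alpha_fw / 2:.5f}")
```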


2018
Author(s):
Tamar Sofer
Xiuwen Zheng
Stephanie M. Gogarten
Cecelia A. Laurie
Kelsey Grinde
...  

Abstract When testing genotype-phenotype associations using linear regression, departure of the trait distribution from normality can impact both Type I error rate control and statistical power, with worse consequences for rarer variants. While it has been shown that applying a rank-normalization transformation to trait values before testing may improve these statistical properties, the factor driving them is not the trait distribution itself, but its residual distribution after regression on both covariates and genotype. Because genotype is expected to have a small effect (if any), investigators now routinely use a two-stage method: first, regress the trait on covariates, obtain residuals, and rank-normalize them; second, use the rank-normalized residuals in the association analysis with the genotypes. Potential confounding signals are assumed to be removed at the first stage, so in practice no further adjustment is made in the second stage. Here, we show that this widely used approach can lead to tests with undesirable statistical properties, due to a combination of a mis-specified mean-variance relationship and remaining covariate associations between the rank-normalized residuals and the genotypes. We demonstrate these properties theoretically, and also in applications to genome-wide and whole-genome sequencing association studies. We further propose and evaluate an alternative, fully adjusted two-stage approach that adjusts for covariates both when residuals are obtained and in the subsequent association test. This method can reduce excess Type I errors and improve statistical power.
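
A schematic sketch on simulated toy data (not the authors' pipeline) contrasting the two second-stage models: after rank-normalizing the stage-one residuals, the covariate is either dropped (the naive two-stage test) or re-included alongside genotype (the fully adjusted test). The allele-frequency model, the heteroscedastic trait, and the replicate count are illustrative assumptions, and the empirical sizes printed depend on them.

```python
# Toy contrast of the naive two-stage test (rank-normalized residuals regressed on
# genotype only) with a fully adjusted second stage (covariate re-included). The
# data-generating model, with a trait whose variance depends on the covariate and a
# genotype whose frequency depends on the same covariate, is an illustrative assumption.
import numpy as np
import statsmodels.api as sm
from scipy.special import expit
from scipy.stats import norm, rankdata

rng = np.random.default_rng(3)

def one_replicate(n=2000):
    covar = rng.normal(size=n)
    geno = rng.binomial(2, expit(-3 + 0.8 * covar))          # variant frequency tied to covariate
    y = covar + rng.exponential(scale=np.exp(0.5 * covar))   # heteroscedastic trait, no genotype effect
    # stage 1: residuals from the covariate-only regression, then inverse-normal transform
    resid = sm.OLS(y, sm.add_constant(covar)).fit().resid
    z = norm.ppf((rankdata(resid) - 0.5) / n)
    # stage 2: genotype only (naive) versus genotype plus covariate (fully adjusted)
    p_naive = sm.OLS(z, sm.add_constant(geno)).fit().pvalues[1]
    p_full = sm.OLS(z, sm.add_constant(np.column_stack([geno, covar]))).fit().pvalues[1]
    return p_naive, p_full

pvals = np.array([one_replicate() for _ in range(500)])
print("empirical size at 0.05, naive two-stage:", (pvals[:, 0] < 0.05).mean())
print("empirical size at 0.05, fully adjusted: ", (pvals[:, 1] < 0.05).mean())
```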


2020
Vol 7 (1)
pp. 1-6
Author(s):  
João Pedro Nunes
Giovanna F. Frigoli

The online support documentation for IBM SPSS proposes that users alter the syntax when performing post-hoc analyses for interaction effects in ANOVA tests. Other authors also suggest altering the syntax when performing GEE analyses. When this is done, the number of possible comparisons (the k value) is also altered, which influences the results of statistical tests whose formulas include k, such as repeated-measures ANOVA and the Bonferroni post-hoc tests of ANOVA and GEE. This alteration also exacerbates type I error, producing erroneous results and potentially leading to misinterpretation of the data. The purpose of this paper is therefore to report the misuse and improper handling of syntax for ANOVA and GEE post-hoc analyses in SPSS and to illustrate its consequences for statistical results and data interpretation.
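
A small illustration with hypothetical p-values of why the comparison count k matters: the Bonferroni adjustment multiplies each raw p-value by k, so a k that is miscounted downward under-corrects and inflates type I error, while a k inflated by counting irrelevant comparisons over-corrects and costs power.

```python
# Bonferroni adjustment scales each raw p-value by the number of comparisons k,
# so the k that the software counts determines the conclusions. The raw p-values
# and the two k values below are hypothetical.
raw_p = [0.004, 0.012, 0.030]        # hypothetical raw pairwise p-values of interest

for k in (3, 12):                    # intended comparison count vs. a k altered by the syntax
    adjusted = [min(1.0, p * k) for p in raw_p]
    significant = [p < 0.05 for p in adjusted]
    print(f"k = {k:2d}: adjusted p = {adjusted}, significant at 0.05: {significant}")
```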


2003
Vol 11 (3)
pp. 275-288
Author(s):  
Scott W. Desposato

This article builds a nonparametric method for inference from roll-call cohesion scores. Cohesion scores have been a staple of legislative studies since the publication of Rice's 1924 thesis. Unfortunately, little effort has been dedicated to understanding their statistical properties or relating them to existing models of legislative behavior. I show how a common use of cohesion scores, testing for distinct voting blocs, is severely biased toward Type I error, practically guaranteeing significant findings even when the null hypothesis is correct. I offer a nonparametric method—permutation analysis—that solves the bias problem and provides for simple and intuitive inference. I demonstrate with an examination of roll-call voting data from the Brazilian National Congress.
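
A minimal permutation sketch on simulated votes (not the article's Brazilian roll-call data): the mean Rice cohesion score of two hypothesized blocs is compared against its distribution when legislators are repeatedly shuffled across bloc labels, which is the kind of reference distribution the naive use of cohesion scores lacks.

```python
# Permutation test of bloc cohesion on simulated roll-call votes: the observed mean
# Rice cohesion score of two hypothesized blocs is referred to its distribution under
# random reassignment of legislators to blocs. Votes here are generated with no bloc
# structure, so the test should (correctly) find nothing.
import numpy as np

rng = np.random.default_rng(4)
n_leg, n_votes = 40, 60
votes = rng.integers(0, 2, size=(n_leg, n_votes))      # 1 = yea, 0 = nay
bloc = np.repeat([0, 1], n_leg // 2)                   # hypothesized bloc membership

def mean_cohesion(votes, labels):
    """Average Rice cohesion |%yea - %nay| over blocs and roll calls."""
    scores = []
    for b in np.unique(labels):
        yea_share = votes[labels == b].mean(axis=0)
        scores.append(np.abs(2 * yea_share - 1).mean())
    return float(np.mean(scores))

observed = mean_cohesion(votes, bloc)
perm = np.array([mean_cohesion(votes, rng.permutation(bloc)) for _ in range(2000)])
p_value = (perm >= observed).mean()
print(f"observed mean cohesion = {observed:.3f}, permutation p-value = {p_value:.3f}")
```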


2020
Author(s):  
Pauline Manchon
Drifa Belhadi
France Mentré
Cédric Laouénan

Abstract Background. Viral haemorrhagic fevers are characterized by irregular outbreaks with high mortality rates. Difficulties arise when implementing therapeutic trials in this context. The outbreak duration is hard to predict and can be short compared to the delays of trial launch and recruitment of the number of subjects needed (NSN). Our objective was to compare, using clinical trial simulation, different trial designs for experimental treatment evaluation in various outbreak scenarios. Methods. Four types of designs were compared: fixed or group-sequential, each either single-arm or two-arm. The primary outcome was the 14-day survival rate. For single-arm designs, results were compared to a pre-trial historical survival rate pH. Treatment efficacy was evaluated by one-sided tests of proportion (fixed designs) and Whitehead triangular tests (group-sequential designs) with a type I error of 0.025. Both the survival rate in the control arm pC and the survival rate difference Δ (including 0) were varied. Three specific cases were considered: “standard” (fixed pC, reaching the NSN for fixed designs and the maximum sample size NMax for group-sequential designs); “changing with time” (pC increasing over time); and “stopping of recruitment” (epidemic ends). We calculated the proportion of simulated trials showing treatment efficacy, with K = 93,639 simulated trials to obtain a 95% prediction interval of [0.024; 0.026] for the type I error. Results. Under H0 (Δ = 0), for the “standard” case, the type I error was maintained regardless of trial design. For the “changing with time” case, the type I error was inflated when pC > pH and decreased when pC < pH. Wrong conclusions were more often observed for single-arm designs due to an increase of Δ over time. Under H1 (Δ = +0.2), for the “standard” case, power was similar between single- and two-arm designs when pC = pH. For the “stopping of recruitment” case, single-arm designs performed better than two-arm designs, and fixed designs reported higher power than group-sequential designs. A web R-Shiny application was developed. Conclusions. At the beginning of an outbreak, group-sequential two-arm trials should be preferred, as the increasing number of infected cases allows a robust randomized controlled trial to be conducted. Group-sequential designs allow early termination of trials in cases of harmful experimental treatment. After the epidemic peak, a fixed single-arm design should be preferred, as the number of cases decreases, but this assumes a high level of confidence in the pre-trial historical survival rate.
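
A simplified sketch of the single-arm fragility described in the “changing with time” case: a one-sided z-test of a single proportion against the historical rate pH rejects far more often than the nominal 2.5% when the true underlying rate has drifted above pH, even with an inert treatment. The sample size and rates are illustrative assumptions, not the trial parameters from the article.

```python
# Rejection rate of a single-arm trial testing against a historical survival rate pH
# with a one-sided z-test of one proportion, when the true rate equals pH and when it
# has drifted above pH with no treatment effect. Sample size and rates are assumptions.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
pH, n, alpha = 0.50, 120, 0.025          # historical rate, per-trial sample size, one-sided level
z_crit = norm.ppf(1 - alpha)

def single_arm_rejection_rate(p_true, n_trials=20000):
    successes = rng.binomial(n, p_true, size=n_trials)
    z = (successes / n - pH) / np.sqrt(pH * (1 - pH) / n)
    return (z > z_crit).mean()

for p_true in (0.50, 0.60):              # pC = pH versus pC drifted above pH
    print(f"true rate {p_true:.2f}: rejection rate = {single_arm_rejection_rate(p_true):.3f}")
```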


2020
Vol 16 (11)
pp. e1008286
Author(s):  
Howard Bowman
Joseph L. Brooks
Omid Hajilou
Alexia Zoumpoulaki
Vladimir Litvak

There has been considerable debate and concern as to whether there is a replication crisis in the scientific literature. A likely cause of poor replication is the multiple comparisons problem. An important way in which this problem can manifest in the M/EEG context is through post hoc tailoring of analysis windows (a.k.a. regions-of-interest, ROIs) to landmarks in the collected data. Post hoc tailoring of ROIs is used because it allows researchers to adapt to inter-experiment variability and discover novel differences that fall outside of windows defined by prior precedent, thereby reducing Type II errors. However, this approach can dramatically inflate Type I error rates. One way to avoid this problem is to tailor windows according to a contrast that is orthogonal (strictly parametrically orthogonal) to the contrast being tested. A key approach of this kind is to identify windows on a fully flattened average. On the basis of simulations, this approach has been argued to be safe for post hoc tailoring of analysis windows under many conditions. Here, we present further simulations and mathematical proofs to show exactly why the fully flattened average approach is unbiased, providing a formal grounding to the approach, clarifying the limits of its applicability and resolving published misconceptions about the method. We also provide a statistical power analysis, which shows that, in specific contexts, the fully flattened average approach provides higher statistical power than FieldTrip cluster inference. This suggests that the fully flattened average approach will enable researchers to identify more effects from their data without incurring an inflation of the false positive rate.
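
A compact simulation sketch of the orthogonal-contrast idea: the analysis window is chosen from the average of the two conditions (the flattened waveform) and the condition difference is then tested inside that window; because the selection contrast is orthogonal to the tested contrast, the false-positive rate stays near nominal under the null. Waveform shape, window width, and trial counts are illustrative assumptions, not the paper's simulation settings.

```python
# Under the null (no condition difference), pick the window from the flattened
# (condition-averaged) waveform and test the condition difference inside it; the
# false-positive rate should stay near the nominal 5%. All settings are illustrative.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(6)
n_subj, n_time, win = 20, 200, 20

def one_experiment():
    shared = np.sin(np.linspace(0, 3 * np.pi, n_time))            # evoked shape common to both conditions
    a = shared + rng.normal(scale=1.0, size=(n_subj, n_time))
    b = shared + rng.normal(scale=1.0, size=(n_subj, n_time))
    flattened = (a + b).mean(axis=0) / 2                          # selection signal, orthogonal to a - b
    peak = int(np.argmax(np.abs(flattened)))
    lo, hi = max(0, peak - win // 2), min(n_time, peak + win // 2)
    return ttest_rel(a[:, lo:hi].mean(axis=1), b[:, lo:hi].mean(axis=1)).pvalue

pvals = np.array([one_experiment() for _ in range(1000)])
print("false-positive rate at 0.05:", (pvals < 0.05).mean())
```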

