Crossover design

2019 ◽  
Vol 7 (30) ◽  
pp. 63-66
Author(s):  
Shengping Yang ◽  
Gilbert Berdine

I am planning a clinical trial to compare two diets on reducing the risk of type II diabetes. Because there is a restriction on the total budget, I would prefer to enroll a small number of participants. Meanwhile, it is important that there is sufficient statistical power to detect a clinically meaningful difference. Is there any study design that can be utilized?
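The design this column goes on to recommend is the crossover design, which gains power by letting each participant serve as their own control. As a rough illustration (mine, not the article's), here is a stdlib-only Python sketch comparing the sample size needed by a parallel-group trial against a paired crossover comparison, assuming normally distributed outcomes, a standardized effect size d, and a within-subject correlation rho (all illustrative parameters):

```python
import math
from statistics import NormalDist

def parallel_n_per_group(d, alpha=0.05, power=0.8):
    """Participants per arm for a two-sample comparison of means,
    standardized effect size d (normal approximation)."""
    z = NormalDist().inv_cdf
    za, zb = z(1 - alpha / 2), z(power)
    return math.ceil(2 * (za + zb) ** 2 / d ** 2)

def crossover_n_total(d, rho, alpha=0.05, power=0.8):
    """Total participants for a paired (crossover) comparison; the
    within-subject correlation rho shrinks the error variance of the
    within-person difference to 2 * (1 - rho) * sigma^2."""
    z = NormalDist().inv_cdf
    za, zb = z(1 - alpha / 2), z(power)
    return math.ceil(2 * (1 - rho) * (za + zb) ** 2 / d ** 2)

# For d = 0.5 at 80% power: 63 per arm (126 total) in parallel,
# versus 32 total in a crossover when rho = 0.5.
print(parallel_n_per_group(0.5), crossover_n_total(0.5, 0.5))
```

The higher the within-subject correlation, the larger the savings, which is why a crossover can satisfy both the budget and the power constraint in the question.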

2019 ◽  
Vol 2 (3) ◽  
pp. 199-213 ◽  
Author(s):  
Marc-André Goulet ◽  
Denis Cousineau

When running statistical tests, researchers can commit a Type II error, that is, fail to reject the null hypothesis when it is false. To diminish the probability of committing a Type II error (β), statistical power must be augmented. Typically, this is done by increasing sample size, as more participants provide more power. When the estimated effect size is small, however, the sample size required to achieve sufficient statistical power can be prohibitive. To alleviate this lack of power, a common practice is to measure participants multiple times under the same condition. Here, we show how to estimate statistical power by taking into account the benefit of such replicated measures. To that end, two additional parameters are required: the correlation between the multiple measures within a given condition and the number of times the measure is replicated. An analysis of a sample of 15 studies (total of 298 participants and 38,404 measurements) suggests that in simple cognitive tasks, the correlation between multiple measures is approximately .14. Although multiple measurements increase statistical power, this effect is not linear, but reaches a plateau past 20 to 50 replications (depending on the correlation). Hence, multiple measurements do not replace the added population representativeness provided by additional participants.
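Under the common compound-symmetry assumption, averaging k replicated measures with pairwise correlation r shrinks the error variance by a factor of (1 + (k - 1) r) / k, which is what produces the plateau the authors describe. A short illustrative Python sketch (my own, using the r ≈ .14 estimate quoted above; other parameters are hypothetical):

```python
import math

def effective_d(d, k, r):
    """Standardized effect size after averaging k replicated measures
    with pairwise correlation r (compound-symmetry assumption):
    Var(mean of k measures) = sigma^2 * (1 + (k - 1) * r) / k."""
    return d * math.sqrt(k / (1 + (k - 1) * r))

# With r = .14, replications boost a small effect d = 0.2, but the
# gain flattens out: k = 20 and k = 50 are already close to the cap.
for k in (1, 20, 50, 1000):
    print(k, round(effective_d(0.2, k, 0.14), 3))
```

With r = .14 the multiplier on d can never exceed 1/√.14 ≈ 2.67 no matter how many replications are added, which is why replications plateau and cannot substitute for additional participants.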


2019 ◽  
Vol 50 (5-6) ◽  
pp. 292-304 ◽  
Author(s):  
Mario Wenzel ◽  
Marina Lind ◽  
Zarah Rowland ◽  
Daniela Zahn ◽  
Thomas Kubiak

Abstract. Evidence on the existence of the ego depletion phenomenon, as well as on the size of the effect and its potential moderators and mediators, is ambiguous. Building on a crossover design that enables superior statistical power within a single study, we investigated the robustness of the ego depletion effect between and within subjects and moderating and mediating influences of the ego depletion manipulation checks. Our results, based on a sample of 187 participants, demonstrated that (a) the between- and within-subject ego depletion effects only had negligible effect sizes and that there was (b) large interindividual variability that (c) could not be explained by differences in ego depletion manipulation checks. We discuss the implications of these results and outline a future research agenda.


2019 ◽  
Author(s):  
Curtis David Von Gunten ◽  
Bruce D Bartholow

A primary psychometric concern with laboratory-based inhibition tasks has been their reliability. However, a reliable measure may not be necessary or sufficient for reliably detecting effects (statistical power). The current study used a bootstrap sampling approach to systematically examine how the number of participants, the number of trials, the magnitude of an effect, and study design (between- vs. within-subject) jointly contribute to power in five commonly used inhibition tasks. The results demonstrate the shortcomings of relying solely on measurement reliability when determining the number of trials to use in an inhibition task: high internal reliability can be accompanied by low power, and low reliability can be accompanied by high power. For instance, adding trials once sufficient reliability has been reached can still result in large gains in power. The dissociation between reliability and power was particularly apparent in between-subject designs, where the number of participants contributed greatly to power but little to reliability, and where the number of trials contributed greatly to reliability but only modestly (depending on the task) to power. For between-subject designs, the probability of detecting small-to-medium-sized effects with 150 participants (total) was generally less than 55%. However, effect size was positively associated with number of trials. Thus, researchers have some control over effect size, and this needs to be considered when conducting power analyses using analytic methods that take such effect sizes as an argument. Results are discussed in the context of recent claims regarding the role of inhibition tasks in experimental and individual difference designs.
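The bootstrap approach described above amounts to: repeatedly resample participants at a chosen sample size, run the test on each resample, and take the proportion of significant results as the power estimate. A stdlib-only Python sketch of a between-subject version (the function name and the z-approximation to the t-test are my assumptions, not the authors' exact procedure):

```python
import math
import random
import statistics

def bootstrap_power(scores_a, scores_b, n_participants,
                    n_boot=2000, alpha=0.05, seed=1):
    """Estimate between-subject power: resample n_participants per
    group from each score pool, test the mean difference, and return
    the fraction of resamples reaching significance."""
    rng = random.Random(seed)
    z_crit = 1.959964  # two-sided critical value for alpha = .05
    hits = 0
    for _ in range(n_boot):
        a = rng.choices(scores_a, k=n_participants)
        b = rng.choices(scores_b, k=n_participants)
        # Welch-style z approximation; close to t at these sample sizes
        se = math.sqrt(statistics.variance(a) / n_participants +
                       statistics.variance(b) / n_participants)
        if abs(statistics.mean(a) - statistics.mean(b)) / se > z_crit:
            hits += 1
    return hits / n_boot
```

Feeding the same score pool to both arguments recovers the false-positive rate (≈ alpha), a useful sanity check before trusting the power curve.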


2021 ◽  
Vol 103 ◽  
pp. 106321
Author(s):  
Susmita Kashikar-Zuck ◽  
Matthew S. Briggs ◽  
Sharon Bout-Tabaku ◽  
Mark Connelly ◽  
Morgan Daffin ◽  
...  

1990 ◽  
Vol 47 (1) ◽  
pp. 2-15 ◽  
Author(s):  
Randall M. Peterman

Ninety-eight percent of recently surveyed papers in fisheries and aquatic sciences that did not reject some null hypothesis (H0) failed to report β, the probability of making a type II error (not rejecting H0 when it should have been), or statistical power (1 – β). However, 52% of those papers drew conclusions as if H0 were true. A false H0 could have been missed because of a low-power experiment, caused by small sample size or large sampling variability. Costs of type II errors can be large (for example, for cases that fail to detect harmful effects of some industrial effluent or a significant effect of fishing on stock depletion). Past statistical power analyses show that abundance estimation techniques usually have high β and that only large effects are detectable. I review relationships among β, power, detectable effect size, sample size, and sampling variability. I show how statistical power analysis can help interpret past results and improve designs of future experiments, impact assessments, and management regulations. I make recommendations for researchers and decision makers, including routine application of power analysis, more cautious management, and reversal of the burden of proof to put it on industry, not management agencies.
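The relationships the author reviews among β, power, detectable effect size, and sample size follow directly from the normal approximation to a two-sample test. A minimal stdlib-Python sketch (mine, not the paper's) of the two directions of that calculation:

```python
import math
from statistics import NormalDist

def power_two_sample(d, n_per_group, alpha=0.05):
    """Power (1 - beta) of a two-sided two-sample z-test for a
    standardized effect size d with n_per_group per group."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)
    ncp = d * math.sqrt(n_per_group / 2)  # noncentrality parameter
    return 1 - nd.cdf(z_crit - ncp) + nd.cdf(-z_crit - ncp)

def detectable_effect(n_per_group, alpha=0.05, power=0.8):
    """Smallest standardized effect detectable at the given power:
    the flip side of the same normal approximation."""
    z = NormalDist().inv_cdf
    return (z(1 - alpha / 2) + z(power)) * math.sqrt(2 / n_per_group)
```

Running the detectable effect back through the power function recovers the requested power, and a low-n design makes the detectable effect visibly large, which is the paper's point: a non-significant result from such a design says little about whether H0 is true.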


2011 ◽  
Vol 38 (10) ◽  
pp. 2095-2104 ◽  
Author(s):  
JACOB KARSH ◽  
EDWARD C. KEYSTONE ◽  
BOULOS HARAOUI ◽  
J. CARTER THORNE ◽  
JANET E. POPE ◽  
...  

Objective. Current clinical trial designs for pharmacologic interventions in rheumatoid arthritis (RA) do not reflect the innovations in RA diagnosis, treatment, and care in countries where new drugs are most often used. The objective of this project was to recommend revised entry criteria and other study design features for RA clinical trials.

Methods. Recommendations were developed using a modified nominal group consensus method. Canadian Rheumatology Research Consortium (CRRC) members were polled to rank the greatest challenges to clinical trial recruitment in their practices. Initial recommendations were developed by an expert panel of rheumatology trialists and other experts. A scoping study methodology was then used to examine the evidence available to support or refute each initial recommendation. The potential influence of CRRC recommendations on primary outcomes in future trials was examined. Recommendations were finalized using a consensus process.

Results. Recommendations for clinical trial inclusion criteria addressed measures of disease activity [Disease Activity Score 28 using erythrocyte sedimentation rate (DAS28-ESR) > 3.2 PLUS ≥ 3 tender joints using 28-joint count (TJC28) PLUS ≥ 3 swollen joints (SJC28), OR C-reactive protein (CRP) or ESR > upper limit of normal PLUS ≥ 3 TJC28 PLUS ≥ 3 SJC28], functional classification, disease classification and duration, and concomitant RA treatments. Additional recommendations regarding study design addressed rescue strategies and long-term extension.

Conclusion. There is an urgent need to modify clinical trial inclusion criteria and other study design features to better reflect the current characteristics of people living with RA in the countries where the new drugs will be used.

