Clinical trials with nested subgroups: Analysis, sample size determination and internal pilot studies

2017 ◽  
Vol 27 (11) ◽  
pp. 3286-3303 ◽  
Author(s):  
Marius Placzek ◽  
Tim Friede

The importance of subgroup analyses has been increasing due to a growing interest in personalized medicine and targeted therapies. Considering designs with multiple nested subgroups and a continuous endpoint, we develop methods for analysis and sample size determination. First, we consider the joint distribution of the standardized test statistics corresponding to each (sub)population. We derive exact multivariate distributions where possible and provide approximations otherwise. Based on these results, we present sample size calculation procedures. Uncertainty about the nuisance parameters needed for sample size calculations makes a study prone to misspecification. We discuss how a sample size review can be performed to make the study more robust. To this end, we implement an internal pilot study design in which the variances and prevalences of the subgroups are re-estimated in a blinded fashion and the sample size is recalculated accordingly. Simulations show that the procedures presented here do not significantly inflate the type I error and maintain the prespecified power as long as the sample size of the smallest subgroup is not too small. We pay special attention to the case of small sample sizes and derive a lower bound for the size of the internal pilot study.
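As a rough illustration of the blinded re-estimation step, the sketch below recalculates a per-group sample size from pooled interim data in a simple two-arm setting (no nested subgroups); the function name and the normal-approximation formula are illustrative assumptions, not the authors' procedure.

```python
import numpy as np
from scipy import stats

def blinded_sample_size_review(pooled_data, delta, alpha=0.05, power=0.80):
    """Blinded interim sample size review for a two-arm trial.

    The one-sample variance of the pooled data (treatment labels
    ignored) serves as the blinded variance estimate; the standard
    normal-approximation formula then gives the per-group n.
    """
    sigma2_hat = np.var(pooled_data, ddof=1)           # blinded variance estimate
    z = stats.norm.ppf(1 - alpha / 2) + stats.norm.ppf(power)
    n_per_group = 2 * sigma2_hat * z**2 / delta**2
    return int(np.ceil(n_per_group))

# Example: review after 40 pooled interim observations
rng = np.random.default_rng(1)
interim = rng.normal(loc=0.25, scale=1.0, size=40)     # treatment labels unknown
print(blinded_sample_size_review(interim, delta=0.5))  # re-estimated n per group
```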

2021 ◽  
Author(s):  
Metin Bulus

A recent systematic review of experimental studies conducted in Turkey between 2010 and 2020 reported that small sample sizes had been a significant drawback (Bulus and Koyuncu, 2021). A small fraction of the studies were small-scale true experiments (subjects randomized into treatment and control groups). The remaining studies consisted of quasi-experiments (subjects in treatment and control groups matched on pretest or other covariates) and weak experiments (neither randomized nor matched, but with a control group). Across domains and outcomes, the average sample size was below 70. Such small sample sizes imply a strong (and perhaps erroneous) assumption about the minimum relevant effect size (MRES) of an intervention before the experiment is conducted; namely, that a standardized intervention effect of Cohen’s d < 0.50 is not relevant to education policy or practice. Thus, an introduction to sample size determination for pretest-posttest simple experimental designs is warranted. This study describes the nuts and bolts of sample size determination, derives expressions for optimal design under differential costs per treatment and control unit, provides convenient tables to guide sample size decisions for MRES values in the range 0.20 ≤ Cohen’s d ≤ 0.50, and describes the relevant software along with illustrations.
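For a sense of the calculations involved, here is a minimal Python sketch, assuming the standard normal-approximation formula for an ANCOVA-adjusted two-group comparison and the classical square-root cost rule for allocation; it stands in for, and is not, the software the study describes.

```python
import math
from scipy.stats import norm

def n_per_group_ancova(d, rho, alpha=0.05, power=0.80):
    """Per-group n for a pretest-posttest design analysed by ANCOVA.

    rho is the pretest-posttest correlation; adjusting for the pretest
    shrinks the error variance by the factor (1 - rho**2).
    """
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return math.ceil(2 * (1 - rho**2) * (z / d) ** 2)

def optimal_allocation_ratio(cost_treatment, cost_control):
    """Cost-optimal n_T / n_C ratio: square root of the inverse cost ratio."""
    return math.sqrt(cost_control / cost_treatment)

# Detect d = 0.30 with a pretest-posttest correlation of 0.6
print(n_per_group_ancova(d=0.30, rho=0.6))    # ~112 per group
print(optimal_allocation_ratio(4.0, 1.0))     # 0.5: half as many treatment units
```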


2020 ◽  
Article 096228022097579 ◽
Author(s):  
Duncan T Wilson ◽  
Richard Hooper ◽  
Julia Brown ◽  
Amanda J Farrin ◽  
Rebecca EA Walwyn

Simulation offers a simple and flexible way to estimate the power of a clinical trial when analytic formulae are not available. The computational burden of using simulation has, however, restricted its application to only the simplest of sample size determination problems, often minimising a single parameter (the overall sample size) subject to power being above a target level. We describe a general framework for solving simulation-based sample size determination problems with several design parameters over which to optimise and several conflicting criteria to be minimised. The method is based on an established global optimisation algorithm widely used in the design and analysis of computer experiments, using a non-parametric regression model as an approximation of the true underlying power function. The method is flexible, can be used for almost any problem for which power can be estimated using simulation, and can be implemented using existing statistical software packages. We illustrate its application to a sample size determination problem involving complex clustering structures, two primary endpoints and small sample considerations.
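The inner building block of such a framework is a noisy, simulation-based power estimate; the sketch below shows one for a two-sample Welch test. The surrogate-model optimisation layer of the paper is not reproduced here, and all names are illustrative.

```python
import numpy as np
from scipy import stats

def simulated_power(n_per_group, delta, sigma=1.0, alpha=0.05,
                    n_sims=5000, seed=0):
    """Monte Carlo power estimate for a two-sample Welch t-test.

    Each replicate simulates both arms and records whether the
    two-sided test rejects at level alpha.
    """
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_sims):
        x = rng.normal(0.0, sigma, n_per_group)
        y = rng.normal(delta, sigma, n_per_group)
        _, p = stats.ttest_ind(x, y, equal_var=False)
        rejections += p < alpha
    return rejections / n_sims

# Noisy power estimates over candidate sample sizes; a surrogate model
# (e.g. nonparametric regression) would be fitted to points like these.
for n in (30, 50, 80):
    print(n, simulated_power(n, delta=0.5))
```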


2018 ◽  
Vol 28 (7) ◽  
pp. 2179-2195 ◽  
Author(s):  
Chieh Chiang ◽  
Chin-Fu Hsiao

Multiregional clinical trials have been accepted in recent years as a useful means of accelerating the development of new drugs and abridging their approval time. The statistical properties of multiregional clinical trials are being widely discussed. In practice, the variance of a continuous response may differ from region to region, which turns the assessment of the efficacy response into a Behrens–Fisher problem: there is no exact test or interval estimator for the mean difference under unequal variances. As a solution, this study applies interval estimations of the efficacy response based on Howe’s, Cochran–Cox’s, and Satterthwaite’s approximations, which have been shown to have well-controlled type I error rates. However, traditional sample size determination cannot be applied to these interval estimators, so a sample size determination that achieves a desired power based on them is presented. Moreover, the consistency criteria suggested in the Japanese Ministry of Health, Labour and Welfare guidance were applied to decide whether the overall results obtained from the multiregional clinical trial via the proposed interval estimation hold consistently across regions. A real example is used to illustrate the proposed method. The results of simulation studies indicate that the proposed method can correctly determine the required sample size and evaluate the assurance probability of the consistency criteria.
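Of the three approximations named, Satterthwaite’s is perhaps the most familiar; a minimal sketch of the corresponding interval estimator follows. This is illustrative code, not the authors’ implementation; a sample size search would iterate n until the interval-based power target is met.

```python
import numpy as np
from scipy import stats

def satterthwaite_ci(mean1, s1, n1, mean2, s2, n2, level=0.95):
    """CI for a mean difference with unequal variances, using the
    Welch-Satterthwaite approximation to the degrees of freedom."""
    se2 = s1**2 / n1 + s2**2 / n2
    df = se2**2 / ((s1**2 / n1) ** 2 / (n1 - 1) +
                   (s2**2 / n2) ** 2 / (n2 - 1))
    t = stats.t.ppf(1 - (1 - level) / 2, df)
    diff = mean1 - mean2
    half = t * np.sqrt(se2)
    return diff - half, diff + half

# Two regions with different response variances
print(satterthwaite_ci(1.2, 1.0, 60, 0.8, 2.0, 40))
```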


2018 ◽  
Vol 28 (6) ◽  
pp. 1852-1878
Author(s):  
Maria M Ciarleglio ◽  
Christopher D Arendt

When designing studies involving a continuous endpoint, the hypothesized difference in means (θ) and the assumed variability of the endpoint (σ²) play an important role in sample size and power calculations. Traditional methods of sample size re-estimation often update one or both of these parameters using statistics observed from an internal pilot study. However, the uncertainty in these estimates is rarely addressed. We propose a hybrid classical and Bayesian method to formally integrate prior beliefs about the study parameters and the results observed from an internal pilot study into the sample size re-estimation of a two-stage study design. The proposed method is based on a measure of power called conditional expected power (CEP), which averages the traditional power curve using the prior distributions of θ and σ² as the averaging weight, conditional on the presence of a positive treatment effect. The proposed sample size re-estimation procedure finds the second-stage per-group sample size necessary to achieve the desired level of conditional expected interim power, an updated CEP calculation that conditions on the observed first-stage results. The CEP re-estimation method retains the assumption that the parameters are not known with certainty at an interim point in the trial. Notional scenarios are evaluated to compare the behavior of the proposed method of sample size re-estimation to three traditional methods.
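The CEP idea can be prototyped by simple Monte Carlo averaging; in the sketch below, the priors and the normal-approximation power formula are illustrative stand-ins, not the authors’ exact specification.

```python
import numpy as np
from scipy import stats

def conditional_expected_power(n_per_group, theta_prior, sigma_prior,
                               alpha=0.05, n_draws=20000, seed=0):
    """Monte Carlo CEP: average the classical power curve over prior
    draws of (theta, sigma), conditional on a positive effect theta > 0."""
    rng = np.random.default_rng(seed)
    theta = theta_prior(rng, n_draws)
    sigma = sigma_prior(rng, n_draws)
    keep = theta > 0                                  # condition on benefit
    z_a = stats.norm.ppf(1 - alpha / 2)
    ncp = theta[keep] / (sigma[keep] * np.sqrt(2.0 / n_per_group))
    # Rejection probability in the favourable tail of a two-sided test
    # (the opposite tail is ignored, as in the usual approximation).
    return stats.norm.cdf(ncp - z_a).mean()

cep = conditional_expected_power(
    64,
    theta_prior=lambda rng, k: rng.normal(0.5, 0.2, k),
    sigma_prior=lambda rng, k: np.sqrt(rng.gamma(10.0, 0.1, k)),
)
print(round(cep, 3))
```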


1987 ◽  
Vol 24 (3) ◽  
pp. 319-321 ◽  
Author(s):  
Ronald E. Shiffler ◽  
Arthur J. Adams

When a pilot study variance is used to estimate σ² in the sample size formula, the resulting sample size n̂ is a random variable. The authors investigate the theoretical behavior of n̂. Though n̂ is more likely to underachieve than overachieve the unbiased n, correction factors to balance the bias are provided.
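The underachievement claim is easy to reproduce by simulation. The sketch below assumes the textbook formula n = z²σ²/e² for estimating a mean to within margin e (the paper’s exact setting may differ) and shows that n̂ based on a pilot variance falls below the target n more than half the time.

```python
import numpy as np
from scipy import stats

def nhat_distribution(m_pilot, sigma=1.0, e=0.2, alpha=0.05,
                      n_sims=100_000, seed=0):
    """Simulate n-hat = z^2 * s^2 / e^2 when s^2 comes from a pilot of
    size m_pilot; compare with the 'true' n based on sigma^2."""
    rng = np.random.default_rng(seed)
    z2 = stats.norm.ppf(1 - alpha / 2) ** 2
    n_true = z2 * sigma**2 / e**2
    # s^2 ~ sigma^2 * chi2_{m-1} / (m - 1) under normal sampling
    s2 = sigma**2 * rng.chisquare(m_pilot - 1, n_sims) / (m_pilot - 1)
    n_hat = z2 * s2 / e**2
    return n_true, np.mean(n_hat < n_true)

n_true, p_under = nhat_distribution(m_pilot=15)
print(round(n_true, 1), round(p_under, 3))  # n-hat underachieves > half the time
```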

