A SAS macro for sample size adjustment and randomization test for internal pilot study

2008 · Vol 90 (1) · pp. 66-88
Author(s): Suzhen Wang, Jielai Xia, Lili Yu, Chanjuan Li, Li Xu

2018 · Vol 28 (6) · pp. 1852-1878
Author(s): Maria M Ciarleglio, Christopher D Arendt

When designing studies involving a continuous endpoint, the hypothesized difference in means (θ) and the assumed variability of the endpoint (σ²) play an important role in sample size and power calculations. Traditional methods of sample size re-estimation often update one or both of these parameters using statistics observed from an internal pilot study. However, the uncertainty in these estimates is rarely addressed. We propose a hybrid classical and Bayesian method to formally integrate prior beliefs about the study parameters and the results observed from an internal pilot study into the sample size re-estimation of a two-stage study design. The proposed method is based on a measure of power called conditional expected power (CEP), which averages the traditional power curve using the prior distributions of θ and σ² as the averaging weight, conditional on the presence of a positive treatment effect. The proposed sample size re-estimation procedure finds the second-stage per-group sample size necessary to achieve the desired level of conditional expected interim power, an updated CEP calculation that conditions on the observed first-stage results. The CEP re-estimation method retains the assumption that the parameters are not known with certainty at an interim point in the trial. Notional scenarios are evaluated to compare the behavior of the proposed method of sample size re-estimation to three traditional methods.
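The CEP calculation lends itself to a Monte Carlo sketch. The Python snippet below is illustrative only, not the authors' implementation: the normal prior on θ, the inverse-gamma prior on σ², and all numeric values are assumptions chosen for the example. It averages the classical two-sample z-test power curve over prior draws, conditional on θ > 0, then searches for the smallest per-group sample size that reaches a target CEP.

```python
import numpy as np
from scipy import stats

def conditional_expected_power(n_per_group, alpha=0.05,
                               theta_mean=0.5, theta_sd=0.2,
                               sigma2_shape=10.0, sigma2_scale=9.0,
                               n_draws=50_000, seed=1):
    """Monte Carlo CEP: average the classical two-sample z-test power
    over (assumed) priors on theta and sigma^2, conditional on theta > 0."""
    rng = np.random.default_rng(seed)
    theta = rng.normal(theta_mean, theta_sd, n_draws)            # prior on theta
    sigma2 = stats.invgamma.rvs(sigma2_shape, scale=sigma2_scale,
                                size=n_draws, random_state=rng)  # prior on sigma^2
    keep = theta > 0                      # condition on a positive treatment effect
    z_alpha = stats.norm.ppf(1 - alpha / 2)
    # Classical power of a two-sample z-test at each retained prior draw.
    ncp = theta[keep] / np.sqrt(2 * sigma2[keep] / n_per_group)
    return stats.norm.cdf(ncp - z_alpha).mean()

def reestimate_n(target_cep=0.80, n_lo=2, n_hi=2000, **kwargs):
    """Smallest per-group n whose CEP reaches the target. With a fixed
    seed the same prior draws are reused for every n, so CEP is monotone
    in n and bisection is well defined."""
    while n_lo < n_hi:
        mid = (n_lo + n_hi) // 2
        if conditional_expected_power(mid, **kwargs) >= target_cep:
            n_hi = mid
        else:
            n_lo = mid + 1
    return n_lo

print(reestimate_n(target_cep=0.80))
```

Conditioning on the observed first-stage results, as the paper's interim update does, would amount to replacing the priors above with the corresponding posterior draws before averaging.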


2017 · Vol 27 (11) · pp. 3286-3303
Author(s): Marius Placzek, Tim Friede

The importance of subgroup analyses has been increasing due to a growing interest in personalized medicine and targeted therapies. Considering designs with multiple nested subgroups and a continuous endpoint, we develop methods for analysis and sample size determination. First, we consider the joint distribution of the standardized test statistics corresponding to each (sub)population. We derive exact multivariate distributions where possible and provide approximations otherwise. Based on these results, we present sample size calculation procedures. Uncertainty about the nuisance parameters needed for sample size calculations makes the study prone to misspecification. We discuss how a sample size review can be performed to make the study more robust. To this end, we implement an internal pilot study design in which the variances and prevalences of the subgroups are re-estimated in a blinded fashion and the sample size is recalculated accordingly. Simulations show that the procedures presented here do not significantly inflate the type I error and maintain the prespecified power as long as the sample size of the smallest subgroup is not too small. We pay special attention to the case of small sample sizes and establish a lower bound for the size of the internal pilot study.
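To make the blinded review step concrete, here is a minimal sketch for a single population (the paper's full machinery handles nested subgroups and their prevalences, which this omits). The lumped one-sample variance estimator and the two-sample z-test formula are standard choices assumed for illustration, not necessarily the authors' exact procedure.

```python
import math
from scipy import stats

def blinded_variance(pooled):
    """Lumped variance: the sample variance of all first-stage observations
    pooled without treatment labels, so the blind is preserved. Under
    balanced allocation it overestimates sigma^2 by roughly theta^2/4,
    which errs on the conservative side."""
    n = len(pooled)
    mean = sum(pooled) / n
    return sum((x - mean) ** 2 for x in pooled) / (n - 1)

def recalculated_n(sigma2_hat, delta, alpha=0.05, power=0.90):
    """Per-group n for a two-sample z-test, using the re-estimated variance."""
    z_a = stats.norm.ppf(1 - alpha / 2)
    z_b = stats.norm.ppf(power)
    return math.ceil(2 * sigma2_hat * (z_a + z_b) ** 2 / delta ** 2)

pilot = [5.1, 6.3, 4.8, 7.0, 5.9, 6.6, 5.4, 6.1]   # blinded stage-1 data (toy values)
print(recalculated_n(blinded_variance(pilot), delta=1.0))
```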


2018 · Vol 7 (6) · pp. 81
Author(s): Fang Fang, Yong Lin, Weichung Joe Shih, Shou-En Lu, Guangrui Zhu

The accuracy of the treatment effect estimate is crucial to the success of Phase 3 studies. In a fixed sample size design, the sample size calculation relies on this estimate and cannot be changed during the trial. Oftentimes, with only limited efficacy data available from early-phase and relevant historical studies, the sample size calculation may not accurately reflect the true treatment effect. Several adaptive designs have been proposed to address this uncertainty. These designs provide flexibility by allowing early stopping or sample size adjustment at interim look(s). Adaptive designs can optimize trial performance when the treatment effect is an assumed constant value; in practice, however, it may be more reasonable to consider the treatment effect within an interval rather than as a point estimate. Because proper selection of adaptive designs may decrease the failure rate of Phase 3 clinical trials and increase the chance of new drug approval, this paper proposes performance measures, evaluates different adaptive designs based on treatment effect intervals, and identifies factors that may affect their performance.
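The interval-based evaluation can be sketched by simulation. The toy design below is an assumption-laden stand-in, not one of the paper's designs or performance measures: a two-stage trial with a single futility stop at the interim (futility-only stopping does not inflate the type I error of the final test), evaluated over a grid of effect sizes spanning an interval rather than at a single point.

```python
import numpy as np
from scipy import stats

def simulate_design(theta, sigma=1.0, n1=50, n2=50, futility_z=0.0,
                    alpha=0.05, n_sims=20_000, seed=0):
    """Empirical power and expected total sample size of a two-stage
    per-group design that stops for futility when the stage-1 z-statistic
    falls below futility_z."""
    rng = np.random.default_rng(seed)
    z_crit = stats.norm.ppf(1 - alpha / 2)
    # Stage-1 group means for treatment (x) and control (y).
    x1 = rng.normal(theta, sigma / np.sqrt(n1), n_sims)
    y1 = rng.normal(0.0,  sigma / np.sqrt(n1), n_sims)
    z1 = (x1 - y1) / (sigma * np.sqrt(2 / n1))
    go = z1 >= futility_z                 # trials that continue to stage 2
    # Stage-2 group means; pooled final z-test for continuing trials.
    x2 = rng.normal(theta, sigma / np.sqrt(n2), n_sims)
    y2 = rng.normal(0.0,  sigma / np.sqrt(n2), n_sims)
    xbar = (n1 * x1 + n2 * x2) / (n1 + n2)
    ybar = (n1 * y1 + n2 * y2) / (n1 + n2)
    zf = (xbar - ybar) / (sigma * np.sqrt(2 / (n1 + n2)))
    power = np.mean(go & (zf >= z_crit))
    expected_n = 2 * (n1 + np.where(go, n2, 0)).mean()
    return power, expected_n

# Evaluate across an interval of plausible effects, not a point estimate.
for theta in np.linspace(0.2, 0.6, 5):
    p, en = simulate_design(theta)
    print(f"theta={theta:.2f}  power={p:.3f}  E[N]={en:.0f}")
```

Summarizing such power and expected-sample-size profiles over the whole interval, rather than at one assumed effect, is the kind of comparison the paper formalizes.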

