Sample size re-estimation in a superiority clinical trial using a hybrid classical and Bayesian procedure

2018 ◽  
Vol 28 (6) ◽  
pp. 1852-1878
Author(s):  
Maria M Ciarleglio ◽  
Christopher D Arendt

When designing studies involving a continuous endpoint, the hypothesized difference in means (θ) and the assumed variability of the endpoint (σ²) play an important role in sample size and power calculations. Traditional methods of sample size re-estimation often update one or both of these parameters using statistics observed from an internal pilot study. However, the uncertainty in these estimates is rarely addressed. We propose a hybrid classical and Bayesian method to formally integrate prior beliefs about the study parameters and the results observed from an internal pilot study into the sample size re-estimation of a two-stage study design. The proposed method is based on a measure of power called conditional expected power (CEP), which averages the traditional power curve using the prior distributions of θ and σ² as the averaging weight, conditional on the presence of a positive treatment effect. The proposed sample size re-estimation procedure finds the second stage per-group sample size necessary to achieve the desired level of conditional expected interim power, an updated CEP calculation that conditions on the observed first-stage results. The CEP re-estimation method retains the assumption that the parameters are not known with certainty at an interim point in the trial. Notional scenarios are evaluated to compare the behavior of the proposed method of sample size re-estimation to three traditional methods.
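The idea of averaging the classical power curve over priors, conditional on θ > 0, can be sketched by simple Monte Carlo. The snippet below is an illustrative sketch only, not the authors' implementation: it assumes a normal prior on θ and an inverse-gamma prior on σ² (both prior choices and all parameter values are placeholders) and evaluates the one-sided two-sample z-test power at each draw.

```python
# Illustrative Monte Carlo sketch of conditional expected power (CEP).
# Priors and parameter values are assumptions for the example, not
# those used in the paper.
import numpy as np
from scipy import stats

def conditional_expected_power(n_per_group, alpha=0.025,
                               theta_mu=0.5, theta_sd=0.2,
                               var_shape=3.0, var_scale=2.0,
                               n_draws=100_000, seed=1):
    """Average the classical two-sample power curve over the priors of
    theta and sigma^2, conditional on a positive treatment effect."""
    rng = np.random.default_rng(seed)
    theta = rng.normal(theta_mu, theta_sd, n_draws)
    sigma2 = stats.invgamma.rvs(var_shape, scale=var_scale,
                                size=n_draws, random_state=rng)
    # Classical power of the one-sided two-sample z-test at each draw.
    se = np.sqrt(2.0 * sigma2 / n_per_group)
    z_alpha = stats.norm.ppf(1.0 - alpha)
    power = stats.norm.sf(z_alpha - theta / se)
    # Condition on theta > 0 by averaging only over positive draws.
    return power[theta > 0].mean()

# Example: CEP for 100 patients per group.
print(conditional_expected_power(100))
```

In the re-estimation step described above, an analogous calculation conditioned on the first-stage data would be inverted over the second-stage per-group sample size until the target conditional expected interim power is reached; that search is omitted from this sketch.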

2017 ◽  
Vol 27 (11) ◽  
pp. 3286-3303 ◽  
Author(s):  
Marius Placzek ◽  
Tim Friede

The importance of subgroup analyses has been increasing due to a growing interest in personalized medicine and targeted therapies. Considering designs with multiple nested subgroups and a continuous endpoint, we develop methods for analysis and sample size determination. First, we consider the joint distribution of the standardized test statistics corresponding to each (sub)population. We derive exact multivariate distributions where possible and provide approximations otherwise. Based on these results, we present sample size calculation procedures. Uncertainty about the nuisance parameters needed for sample size calculations makes the study prone to misspecification. We discuss how a sample size review can be performed to make the study more robust. To this end, we implement an internal pilot study design in which the variances and prevalences of the subgroups are re-estimated in a blinded fashion and the sample size is recalculated accordingly. Simulations show that the procedures presented here do not significantly inflate the type I error and maintain the prespecified power as long as the sample size of the smallest subgroup is not too small. We pay special attention to the case of small sample sizes and derive a lower bound for the size of the internal pilot study.
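To make the blinded review concrete, the following is a minimal single-population sketch (my own simplification, not the authors' multi-subgroup procedure): the variance is re-estimated from pooled pilot data without unblinding treatment labels, and the per-group sample size is recalculated from the standard normal-approximation formula. The nested-subgroup structure, prevalence re-estimation, and multiplicity adjustments of the paper are omitted.

```python
# Minimal sketch of a blinded sample size review in an internal pilot
# design for a two-arm trial with a continuous endpoint. All function
# names and parameter values are illustrative assumptions.
import numpy as np
from scipy import stats

def blinded_sample_size_review(pilot_values, delta,
                               alpha=0.025, power=0.9):
    """Re-estimated per-group sample size for a one-sided two-sample
    comparison, using the blinded (pooled) pilot variance."""
    # Blinded estimate: one-sample variance of all pilot observations,
    # ignoring treatment labels (biased upward by roughly delta^2 / 4
    # under a true effect of size delta with balanced groups).
    s2 = np.var(pilot_values, ddof=1)
    z_a = stats.norm.ppf(1.0 - alpha)
    z_b = stats.norm.ppf(power)
    n_per_group = 2.0 * s2 * (z_a + z_b) ** 2 / delta ** 2
    return int(np.ceil(n_per_group))

# Example: review based on 40 blinded pilot observations and an
# assumed clinically relevant difference of 0.5.
rng = np.random.default_rng(0)
pilot = rng.normal(0.25, 1.0, size=40)
print(blinded_sample_size_review(pilot, delta=0.5))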


2009 ◽  
Vol 42 (2) ◽  
pp. 262-271
Author(s):  
Suzhen Wang ◽  
Jielai Xia ◽  
Lili Yu ◽  
Chanjuan Li ◽  
Li Xu ◽  
...  

2017 ◽  
Author(s):  
Angela M Rodrigues ◽  
Falko F Sniehotta ◽  
Mark A Birch-Machin ◽  
Patrick Olivier ◽  
Vera Araújo-Soares

BACKGROUND Recreational sun exposure has been associated with melanoma prevalence, and tourism settings are of particular interest for skin cancer prevention. Effective, affordable, and geographically flexible interventions to promote sun protection are needed.
OBJECTIVE The aim of this study was to describe the protocol for a definitive randomized controlled trial (RCT) evaluating a smartphone mobile intervention (mISkin app) promoting sun protection in holidaymakers and to assess the acceptability and feasibility of the mISkin app and associated trial procedures in an internal pilot study.
METHODS Participants were recruited from the general community. Holidaymakers traveling abroad and owning a smartphone were enrolled in the internal pilot of a 2 (mISkin vs control) × 2 (sun protection factor [SPF] 15 vs SPF 30) RCT with a postholiday follow-up. The smartphone app is fully automated and delivers a behavioral intervention to promote sun protection. It consists of five components: skin assessment, educational videos, ultraviolet (UV) photos, gamification, and prompts for sun protection. Participants were also randomly allocated to receive sunscreen SPF 15 or SPF 30. Primary outcomes for the internal pilot study were acceptability and feasibility of trial procedures and intervention features. Secondary outcomes were collected at baseline and after holidays through face-to-face assessments and included skin sun damage, sunscreen use (residual weight and application events), and sun protection practices (Web-based questionnaire).
RESULTS Of 142 registrations of interest, 42 participants were randomized (76% [32/42] female; mean age 35.5 years). Outcome assessments were completed by all participants. Random allocation to SPF 15 versus SPF 30 was found not to be feasible in a definitive trial protocol. Of the 21 people allocated to the mISkin intervention, 19 (91%) installed the mISkin app on their phones, and 18 (86%) used it at least once. Participants were satisfied with the mISkin app and made suggestions for further improvements. Due to difficulties with the random allocation to SPF and slow uptake, the trial was discontinued.
CONCLUSIONS The internal pilot study concluded that randomization to SPF was not feasible and that the recruitment rate was slower than expected because of difficulties with gatekeeper engagement. Possible solutions to the problems identified are discussed. Further refinements to the mISkin app are needed before a definitive trial.
CLINICALTRIAL International Standard Randomized Controlled Trial Number ISRCTN63943558; http://www.isrctn.com/ISRCTN63943558 (Archived by WebCite at http://www.webcitation.org/6xOLvbab8)


Author(s):  
Dean Karlan ◽  
Jacob Appel

This chapter focuses on low participation rates. Low participation squeezes the effective sample size of a test, making it statistically more difficult to identify a positive treatment effect. There are two moments at which low participation can materialize: during the intake process to a study or intervention, or after random assignment to treatment or control. Low participation during intake often occurs when marketing a program to the general public. Researchers working in the field with partner organizations often face inflexible constraints in trying to cope with low participation during intake. The second type of low participation, which occurs after subjects have been randomly assigned to treatment or control, is a more daunting problem and is less likely to be solvable than low participation at the intake phase. A rough sense of the statistical cost is sketched below.
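The sketch below is my own illustration of the opening claim, not material from the chapter: when only a fraction of the treatment arm actually takes up the program, the intention-to-treat difference in means is diluted by roughly that fraction, so the required sample size grows by roughly the inverse of the take-up rate squared. All numbers are placeholder assumptions.

```python
# Illustrative sketch: how partial take-up inflates the sample size
# needed to detect a treatment effect (two-sided, two-sample design).
import numpy as np
from scipy import stats

def required_n_per_arm(effect, sd, take_up=1.0, alpha=0.05, power=0.8):
    """Per-arm sample size when only a fraction `take_up` of the
    treatment arm actually participates (diluted ITT effect)."""
    diluted = take_up * effect          # expected ITT difference in means
    z_a = stats.norm.ppf(1.0 - alpha / 2)
    z_b = stats.norm.ppf(power)
    return int(np.ceil(2.0 * (sd * (z_a + z_b) / diluted) ** 2))

# Full take-up vs. 50% take-up: the required n roughly quadruples.
print(required_n_per_arm(0.3, 1.0, take_up=1.0))   # about 175 per arm
print(required_n_per_arm(0.3, 1.0, take_up=0.5))   # about 698 per arm
```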


2013 ◽  
Vol 55 (4) ◽  
pp. 617-633 ◽  
Author(s):  
Simon Schneider ◽  
Heinz Schmidli ◽  
Tim Friede
