A new design for phase II single-arm clinical trials: Bayesian predictive sample size selection design.

2013 ◽  
Vol 31 (15_suppl) ◽  
pp. 6576-6576
Author(s):  
Satoshi Teramukai ◽  
Takashi Daimon ◽  
Sarah Zohar

6576 Background: The aim of phase II trials is to determine whether a new treatment is promising enough for further testing in confirmatory clinical trials. Most phase II clinical trials are designed as single-arm trials using a binary outcome, with or without interim monitoring for early stopping. In this context, we propose a Bayesian adaptive design denoted PSSD, the predictive sample size selection design (Statistics in Medicine 2012;31:4243-4254). Methods: The design allows for sample size selection following any planned interim analyses for early stopping of a trial, together with sample size determination before starting the trial. In the PSSD, the sample size is determined using a predictive probability criterion with two kinds of prior distributions: an ‘analysis prior’ used to compute posterior probabilities and a ‘design prior’ used to obtain prior predictive distributions. At the sample size determination stage, we provide two sample sizes, N and Nmax, using two types of design priors. At each interim analysis, we calculate the predictive probability of achieving a successful result at the end of the trial using the analysis prior, in order to stop the trial in case of low or high efficacy, and we select an optimal sample size, either N or Nmax as needed, on the basis of the predictive probabilities. Results: We investigated the operating characteristics through simulation studies, and the PSSD was retrospectively applied to a lung cancer clinical trial. As the number of interim looks increases, the probability of a type I error slightly decreases and that of a type II error increases. The type I error probabilities of the proposed PSSD are similar to those of the non-adaptive design. The type II error probabilities of the PSSD lie between those of the two fixed sample size (N or Nmax) designs. Conclusions: From a practical standpoint, the proposed design could be useful in phase II single-arm clinical trials with a binary endpoint. In the near future, this approach will be implemented in actual clinical trials to assess its usefulness and to extend it to more complicated clinical trials.
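The interim rule described above rests on the predictive probability of final success under a beta-binomial model. The sketch below illustrates that calculation only; the Beta(1, 1) analysis prior, target response rate p0 and posterior success threshold theta are illustrative assumptions, not values taken from the PSSD paper.

```python
# Minimal sketch of a predictive-probability interim calculation for a
# single-arm binary-endpoint trial (illustrative, not the PSSD itself).
from scipy.stats import beta, betabinom

def predictive_prob_success(x, n_curr, n_final, a=1.0, b=1.0,
                            p0=0.20, theta=0.90):
    """Predictive probability of declaring success at n_final patients,
    given x responses among n_curr patients observed so far."""
    n_rem = n_final - n_curr                      # patients still to enrol
    a_post, b_post = a + x, b + n_curr - x        # posterior after interim data
    pp = 0.0
    for y in range(n_rem + 1):                    # possible future responses
        # Beta-binomial predictive probability of y further responses
        w = betabinom.pmf(y, n_rem, a_post, b_post)
        # Posterior P(p > p0) if the trial ends with x + y responses
        post_success = beta.sf(p0, a + x + y, b + n_final - x - y)
        pp += w * (post_success > theta)          # count only "successful" futures
    return pp

# Example interim look: 8 responses among 20 patients, planned N = 40.
print(predictive_prob_success(x=8, n_curr=20, n_final=40))
```

A low value of this quantity would support stopping for futility, and a high value stopping for efficacy, in the spirit of the monitoring rule described in the abstract.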

2018 ◽  
Vol 28 (7) ◽  
pp. 2179-2195 ◽  
Author(s):  
Chieh Chiang ◽  
Chin-Fu Hsiao

Multiregional clinical trials have been accepted in recent years as a useful means of accelerating the development of new drugs and shortening their approval time. The statistical properties of multiregional clinical trials are being widely discussed. In practice, the variance of a continuous response may differ from region to region, which turns the assessment of the efficacy response into a Behrens–Fisher problem: there is no exact test or interval estimator for the mean difference under unequal variances. As a solution, this study applies interval estimations of the efficacy response based on Howe’s, Cochran–Cox’s, and Satterthwaite’s approximations, which have been shown to have well-controlled type I error rates. However, traditional sample size determination cannot be applied to these interval estimators, so a sample size determination that achieves a desired power based on them is presented. Moreover, the consistency criteria suggested in the Japanese Ministry of Health, Labour and Welfare guidance, used to decide whether the overall results of the multiregional clinical trial apply across regions, were also evaluated with the proposed interval estimation. A real example is used to illustrate the proposed method. The results of simulation studies indicate that the proposed method can correctly determine the required sample size and evaluate the assurance probability of the consistency criteria.
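One of the approximations named above, Satterthwaite's, can be sketched together with a simulation-based power check of the kind that underlies sample size searches for interval estimators. The effect size, standard deviations and search grid below are illustrative assumptions, not quantities from the paper.

```python
# Sketch: Welch-Satterthwaite interval for a mean difference under unequal
# variances, plus a simulated power curve usable in a sample size search.
import numpy as np
from scipy.stats import t

def satterthwaite_ci(x, y, alpha=0.05):
    """Two-sided (1 - alpha) CI for mean(x) - mean(y) with Satterthwaite df."""
    n1, n2 = len(x), len(y)
    v1, v2 = np.var(x, ddof=1) / n1, np.var(y, ddof=1) / n2
    df = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))
    half = t.ppf(1 - alpha / 2, df) * np.sqrt(v1 + v2)
    diff = np.mean(x) - np.mean(y)
    return diff - half, diff + half

def power_by_simulation(n1, n2, delta, sd1, sd2, n_sim=5000, seed=1):
    """Estimated probability that the CI lies entirely above 0 when the
    true difference is delta (one-sided success criterion)."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sim):
        x = rng.normal(delta, sd1, n1)
        y = rng.normal(0.0, sd2, n2)
        lo, _ = satterthwaite_ci(x, y)
        hits += lo > 0
    return hits / n_sim

# Crude search: increase the per-group n until simulated power reaches ~80%.
for n in range(20, 200, 10):
    if power_by_simulation(n, n, delta=0.5, sd1=1.0, sd2=1.5) >= 0.80:
        print("approximate per-group sample size:", n)
        break
```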


2021 ◽  
Vol 58 (2) ◽  
pp. 133-147
Author(s):  
Rownak Jahan Tamanna ◽  
M. Iftakhar Alam ◽  
Ahmed Hossain ◽  
Md Hasinur Rahaman Khan

Summary: Sample size calculation is an integral part of any clinical trial design, and determining the optimal sample size for a study ensures adequate power to detect a statistically significant effect. It is a critical step in designing a research protocol, since enrolling too many participants is expensive and exposes more subjects than necessary to the procedure, while an underpowered study will be statistically inconclusive and may cause the whole protocol to fail. Balancing the attempt to maximize power against the effort to minimize the budget has become a significant issue in sample size determination for clinical trials in recent decades. Although no single method of sample size calculation generalizes to all settings, this study attempts to provide a basis for resolving these competing demands through simulation studies under simple random and cluster sampling schemes, with different levels of power and type I error. The effective sample size is much higher when the design effect of the sampling method is smaller, particularly when it is less than 1. The sample size for cluster sampling increases as the number of clusters increases.
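The design-effect adjustment implied by the summary can be written down directly: a sample size computed under simple random sampling is multiplied by deff = 1 + (m - 1) * ICC for cluster sampling, which also yields the effective sample size n / deff. The effect size, ICC, cluster size and error rates in this sketch are illustrative assumptions.

```python
# Sketch of a design-effect adjustment from simple random sampling (SRS)
# to cluster sampling (illustrative values, not those of the study).
from math import ceil
from scipy.stats import norm

def n_srs(effect, sd, alpha=0.05, power=0.80):
    """Per-group n for a two-sample mean comparison under SRS."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return ceil(2 * (z * sd / effect) ** 2)

def n_cluster(effect, sd, cluster_size, icc, alpha=0.05, power=0.80):
    """Per-group n inflated by the design effect, and the number of clusters."""
    deff = 1 + (cluster_size - 1) * icc
    n = ceil(n_srs(effect, sd, alpha, power) * deff)
    return n, ceil(n / cluster_size)

print(n_srs(effect=0.5, sd=1.0))                                  # SRS baseline
print(n_cluster(effect=0.5, sd=1.0, cluster_size=10, icc=0.05))   # cluster design
```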


2019 ◽  
Author(s):  
Fayette Klaassen ◽  
Herbert Hoijtink ◽  
Xin Gu

Researchers can express expectations regarding the ordering of group means as simple order-constrained hypotheses, for example $H_i: \mu_1>\mu_2>\mu_3$, $H_c: \text{ not } H_i$, and $H_{i'}:\mu_3>\mu_2>\mu_1$. They can compare these hypotheses by means of a Bayes factor, which quantifies the relative evidence for two hypotheses. The required sample size for a hypothesis test can depend on the desired level of the unconditional error probabilities (type I and type II error probabilities) or of the conditional error probabilities (the level of evidence). This article presents four approaches to sample size determination that make use of conditional and unconditional error probabilities. Simulations were performed to determine the sample size such that error probabilities are acceptably low or expected evidence is acceptably strong. The required sample size is lower when $H_{i}$ is evaluated against $H_{i'}$ than when it is evaluated against $H_c$; thus, specifying which orderings of means are expected or of interest decreases the required sample size. In addition, the required sample sizes differ across the four approaches. The sample size tables are illustrated with example research questions. The choice of approach depends, among other things, on the type of conclusion a researcher wants to draw, and a decision tree is provided to guide researchers to the appropriate approach. Applied researchers can use the decision tree and the tables presented to determine the required sample size for their research, or use the R code and associated manual provided in the paper.
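A Bayes factor for an order-constrained hypothesis such as $H_i: \mu_1>\mu_2>\mu_3$ can be approximated with the encompassing-prior idea, as the ratio of posterior to prior mass in agreement with the constraint. The sketch below illustrates that computation by Monte Carlo; the vague prior, known-variance posterior approximation, data and sample sizes are illustrative assumptions, and the paper's own R code should be preferred in practice.

```python
# Sketch of an encompassing-prior Bayes factor for H_i: mu1 > mu2 > mu3
# against its complement (illustrative assumptions throughout).
import numpy as np

rng = np.random.default_rng(0)

def bf_order_constrained(samples_per_group, ndraw=200_000):
    """samples_per_group: list of three 1-D arrays of observations."""
    # Vague normal prior on each mean (encompassing prior); the prior mass
    # of one specific ordering of three iid means is about 1/6.
    prior_draws = rng.normal(0.0, 10.0, size=(ndraw, 3))
    c_i = np.mean((prior_draws[:, 0] > prior_draws[:, 1]) &
                  (prior_draws[:, 1] > prior_draws[:, 2]))
    # Approximate normal posterior for each mean (known-variance shortcut).
    post_draws = np.column_stack([
        rng.normal(x.mean(), x.std(ddof=1) / np.sqrt(len(x)), ndraw)
        for x in samples_per_group
    ])
    f_i = np.mean((post_draws[:, 0] > post_draws[:, 1]) &
                  (post_draws[:, 1] > post_draws[:, 2]))
    # BF of H_i versus its complement: ratio of (posterior/prior) fits.
    return (f_i / c_i) / ((1 - f_i) / (1 - c_i))

data = [rng.normal(m, 1.0, 30) for m in (0.8, 0.4, 0.0)]
print(bf_order_constrained(data))
```

Repeating such a computation over simulated data sets of increasing size is, in essence, how sample sizes can be chosen so that the expected evidence is acceptably strong.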


2019 ◽  
pp. 1-13
Author(s):  
Xiaoyu Cai ◽  
Yi Tsong ◽  
Meiyu Shen

Adaptive sample size re-estimation (SSR) methods have been widely used in the design of clinical trials, especially during the past two decades. We give a critical review of several commonly used two-stage adaptive SSR designs for superiority trials with continuous endpoints. The objective and design of each method, along with some of our suggestions and concerns, are discussed in this paper. Keywords: Adaptive Design; Sample Size Re-estimation; Review. Introduction: Sample size determination is a key part of designing clinical trials. The objective of a good clinical trial design is to balance efficient use of resources against enrolling enough patients to achieve the desired power. At the design stage of a clinical trial, only limited information about the population is usually available, so the sample size calculated at this stage may not be sufficient to address the study objective. Assume that the data from two parallel treatment groups (e.g., treatment and control) are normally distributed with means $\mu_1$ and $\mu_2$ and equal within-group variance $\sigma^2$. Let the mean difference (treatment effect) be $\delta = \mu_1 - \mu_2$. The efficacy of the treatment is then evaluated by testing the superiority hypothesis $H_0: \delta \le 0$ versus $H_1: \delta > 0$.
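A common motivation for the designs reviewed here is that the variance assumed at the planning stage may turn out to be wrong. The sketch below shows one generic variant of a two-stage re-estimation rule, recomputing the per-group sample size from the pooled stage-1 variance; it is not any single reviewed author's method, and the design constants and bounds are illustrative assumptions.

```python
# Sketch of variance-based sample size re-estimation at the interim of a
# two-stage superiority trial with a continuous endpoint (illustrative).
import numpy as np
from scipy.stats import norm

def required_n(delta, sd, alpha=0.025, power=0.90):
    """Per-group n for a one-sided z-test of H0: delta <= 0."""
    z = norm.ppf(1 - alpha) + norm.ppf(power)
    return int(np.ceil(2 * (z * sd / delta) ** 2))

def reestimate_n(stage1_treat, stage1_ctrl, delta_planned,
                 n_min=50, n_max=400):
    """Recompute the total per-group n from the pooled stage-1 variance,
    capped within pre-specified minimum and maximum sample sizes."""
    sd_pooled = np.sqrt((np.var(stage1_treat, ddof=1) +
                         np.var(stage1_ctrl, ddof=1)) / 2)
    n_new = required_n(delta_planned, sd_pooled)
    return min(max(n_new, n_min), n_max)

# Illustrative interim data with a larger variance than originally planned.
rng = np.random.default_rng(42)
stage1_t = rng.normal(0.4, 1.3, 50)
stage1_c = rng.normal(0.0, 1.3, 50)
print("planned n per group:", required_n(delta=0.4, sd=1.0))
print("re-estimated n per group:", reestimate_n(stage1_t, stage1_c, 0.4))
```

In actual designs a rule of this kind is paired with an adjusted final analysis (for example, a combination test over the two stages) so that the type I error rate remains controlled despite the data-driven change in sample size.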

