Sample size considerations for split-mouth design

2015 ◽  
Vol 26 (6) ◽  
pp. 2543-2551 ◽  
Author(s):  
Hong Zhu ◽  
Song Zhang ◽  
Chul Ahn

Split-mouth designs are frequently used in dental clinical research, where a mouth is divided into two or more experimental segments that are randomly assigned to different treatments. This design has the distinct advantage of removing much of the inter-subject variability from the estimated treatment effect. Methods of statistical analysis for the split-mouth design have been well developed. However, little work is available on sample size considerations at the design phase of a split-mouth trial, although many researchers have pointed out that the split-mouth design can be more efficient than a parallel-group design only when the within-subject correlation coefficient is substantial. In this paper, we propose to use the generalized estimating equation (GEE) approach to assess the treatment effect in split-mouth trials, accounting for correlations among observations. Closed-form sample size formulas are introduced for the split-mouth design with continuous and binary outcomes, assuming exchangeable and “nested exchangeable” correlation structures for outcomes from the same subject. The statistical inference is based on the large-sample approximation under the GEE approach. Simulation studies are conducted to investigate the finite-sample performance of the GEE sample size formulas. A dental clinical trial example is presented for illustration.
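The efficiency argument above can be made concrete with the textbook paired-difference sample size formula for a continuous outcome, where ρ is the within-subject correlation. This is an illustrative approximation only, not the paper's GEE-based formulas:

```python
from math import ceil
from statistics import NormalDist

def n_split_mouth(delta, sigma, rho, alpha=0.05, power=0.8):
    """Subjects needed to detect a mean difference `delta` between two
    treatments applied within the same mouth, for a continuous outcome
    with per-segment SD `sigma` and within-subject correlation `rho`
    (two-sided test). Paired-design sketch, not the paper's GEE formula."""
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    # Variance of the within-subject difference is 2 * sigma^2 * (1 - rho)
    return ceil(z ** 2 * 2 * sigma ** 2 * (1 - rho) / delta ** 2)

def n_parallel(delta, sigma, alpha=0.05, power=0.8):
    """Subjects per arm for the corresponding parallel-group design."""
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    return ceil(z ** 2 * 2 * sigma ** 2 / delta ** 2)
```

Even at ρ = 0 the split-mouth design needs only as many subjects as one arm of the parallel trial (each subject supplies both observations); the further gain from the (1 − ρ) factor is why a substantial within-subject correlation is needed for the design to pay off.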

2021 ◽  
pp. 096228022110510
Author(s):  
Stefan Wellek

More often than not, clinical trials and even nonclinical medical experiments have to be run with observational units sampled from populations that must be assumed heterogeneous with respect to covariates associated with the outcome. Relevant covariates that are known prior to randomization are usually categorical, and the corresponding subpopulations are called strata. In contrast to randomization, which in most cases is performed in a way that ensures approximately constant sample size ratios across the strata, sample size planning is rarely done taking stratification into account. This holds true although the statistical literature provides a reasonably rich repertoire of testing procedures for stratified comparisons between two treatments in a parallel-group design. For all of them, at least approximate methods of power calculation are available, from which algorithms or even closed-form formulae for required sample sizes can be derived. The objective of this tutorial is to give a systematic review of the most frequently applicable of these methods and to compare them in terms of their efficiency under standard settings. Based on the results, recommendations for the sample size planning of stratified two-arm trials are given.
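A minimal sketch of what stratified sample size planning can look like for a continuous outcome, assuming a common treatment effect across strata, 1:1 allocation within each stratum, and an estimator that weights stratum-specific mean differences by the stratum fractions. This is a generic normal-approximation illustration, not one of the specific procedures the tutorial reviews:

```python
from math import ceil
from statistics import NormalDist

def n_total_stratified(delta, sigmas, fracs, alpha=0.05, power=0.8):
    """Total sample size for a stratified two-arm comparison of means.
    `fracs[k]` is the population fraction of stratum k (summing to 1)
    and `sigmas[k]` the outcome SD in that stratum. With 1:1 allocation,
    stratum k contributes fracs[k]*N/2 subjects per arm, so the variance
    of the fraction-weighted difference is (4 / N) * sum(f_k * sigma_k^2)."""
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    return ceil(4 * z ** 2
                * sum(f * s ** 2 for f, s in zip(fracs, sigmas))
                / delta ** 2)
```

With a single stratum this collapses to the usual unstratified two-arm total; heterogeneous stratum variances shift the requirement through the weighted sum.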


2009 ◽  
Vol 26 (3) ◽  
pp. 931-951 ◽  
Author(s):  
Yanqin Fan ◽  
Sang Soo Park

In this paper, we propose nonparametric estimators of sharp bounds on the distribution of treatment effects of a binary treatment and establish their asymptotic distributions. We note the possible failure of the standard bootstrap with the same sample size and apply the fewer-than-n bootstrap to make inferences on these bounds. The finite-sample performances of the confidence intervals for the bounds based on normal critical values, the standard bootstrap, and the fewer-than-n bootstrap are investigated via a simulation study. Finally, we establish sharp bounds on the treatment effect distribution when covariates are available.
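Sharp bounds of this kind (Makarov-type bounds on P(Y1 − Y0 ≤ δ) from the two marginal distributions) can be computed by a simple plug-in of the empirical CDFs. The sketch below illustrates the shape of the bounds, evaluated on a finite grid; it is not the authors' estimator or inference procedure:

```python
import numpy as np

def makarov_bounds(y1, y0, delta, grid_size=200):
    """Plug-in Makarov-type bounds on P(Y1 - Y0 <= delta) from samples
    of the treated (y1) and control (y0) marginal outcome distributions.
    Lower bound: sup_y max(F1(y) - F0(y - delta), 0);
    upper bound: 1 + inf_y min(F1(y) - F0(y - delta), 0).
    Illustrative sketch evaluated on a finite grid."""
    y1, y0 = np.asarray(y1, float), np.asarray(y0, float)
    grid = np.linspace(min(y1.min(), y0.min() + delta),
                       max(y1.max(), y0.max() + delta), grid_size)
    F1 = np.searchsorted(np.sort(y1), grid, side="right") / y1.size
    F0 = np.searchsorted(np.sort(y0), grid - delta, side="right") / y0.size
    diff = F1 - F0
    lower = max(diff.max(), 0.0)
    upper = 1.0 + min(diff.min(), 0.0)
    return lower, upper
```

When the marginals pin the effect down (e.g., degenerate outcomes), the bounds collapse to a point; when the marginals are identical, they are the trivial [0, 1].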


Biometrika ◽  
2020 ◽  
Author(s):  
Oliver Dukes ◽  
Stijn Vansteelandt

Summary Eliminating the effect of confounding in observational studies typically involves fitting a model for an outcome adjusted for covariates. When, as is often the case, these covariates are high-dimensional, this necessitates the use of sparse estimators, such as the lasso, or other regularization approaches. Naïve use of such estimators yields confidence intervals for the conditional treatment effect parameter that are not uniformly valid. Moreover, as the number of covariates grows with the sample size, correctly specifying a model for the outcome is nontrivial. In this article we deal with both of these concerns simultaneously, obtaining confidence intervals for conditional treatment effects that are uniformly valid, regardless of whether the outcome model is correct. This is done by incorporating an additional model for the treatment selection mechanism. When both models are correctly specified, we can weaken the standard conditions on model sparsity. Our procedure extends to multivariate treatment effect parameters and complex longitudinal settings.


2017 ◽  
Vol 23 (5) ◽  
pp. 644-646 ◽  
Author(s):  
Maria Pia Sormani

The calculation of the sample size needed for a clinical study is the challenge most frequently put to statisticians, and it is one of the most relevant issues in study design. The correct sample size optimizes the number of patients needed to obtain the result, that is, to detect the minimum treatment effect that is clinically relevant. Minimizing the sample size of a study has the advantages of reducing costs and enhancing feasibility, and it also has ethical implications. In this brief report, I will explore the main concepts on which the sample size calculation is based.
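For a binary endpoint, the standard textbook calculation behind these concepts compares two response rates with a two-sided z-test. A minimal sketch, using the usual normal approximation (not the report's own worked example):

```python
from math import ceil
from statistics import NormalDist

def n_per_arm_proportions(p1, p2, alpha=0.05, power=0.8):
    """Approximate subjects per arm to detect a difference between
    response rates p1 and p2 with a two-sided z-test and equal
    allocation. Textbook normal-approximation sketch."""
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    var = p1 * (1 - p1) + p2 * (1 - p2)  # variance of the rate difference, per subject
    return ceil(z ** 2 * var / (p1 - p2) ** 2)
```

Note how the requirement scales with the inverse square of the clinically relevant difference: halving the detectable effect roughly quadruples the sample size.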


2005 ◽  
Vol 20 (2) ◽  
pp. 92-95 ◽  
Author(s):  
JR Calabrese ◽  
DJ Rapport ◽  
EA Youngstrom ◽  
K. Jackson ◽  
S. Bilali ◽  
...  

Abstract The rapid cycling variant of bipolar disorder is defined as the occurrence of four periods of either manic or depressive illness within 12 months. Patients suffering from this variant of bipolar disorder have an unmet need for effective treatment. This review examines two major studies in an attempt to update understanding of the current therapies available to treat rapid cycling patients. The first trial compares lamotrigine versus placebo in 182 patients studied for 6 months. The second is a recently completed, 20-month trial comparing divalproate and lithium in 60 patients. Both trials had a double-blind, randomized parallel-group design. The data from the latter study indicate that there are no large differences in efficacy between lithium and divalproate in the long-term treatment of rapid cycling bipolar disorder. In addition, lamotrigine has the potential to complement the spectrum of lithium and divalproate through its greater efficacy for depressive symptoms.


1981 ◽  
Vol 9 (4) ◽  
pp. 288-291 ◽  
Author(s):  
Hans K Uhthoff ◽  
Jacques A Brunet ◽  
Anand Aggerwal ◽  
Raymond Varin

The efficacy of quazepam (Sch-16134) 15 mg capsules as a hypnotic was compared with that of placebo in a 9-day study using a parallel-group design. The physician's global evaluation numerically favoured quazepam, 63% (nineteen of thirty), over placebo, 50% (fifteen of thirty). Furthermore, quazepam produced greater improvement from baseline scores in the Hypnotic Activity Index and Sleep Quality Index and caused no adverse reactions.


2021 ◽  
pp. 174077452110101
Author(s):  
Jennifer Proper ◽  
John Connett ◽  
Thomas Murray

Background: Bayesian response-adaptive designs, which adaptively alter the allocation ratio in favor of the better-performing treatment as data accumulate, are often criticized for engendering a non-trivial probability of a subject imbalance in favor of the inferior treatment, inflating the type I error rate, and increasing sample size requirements. Implementations of these designs using Thompson sampling have generally assumed a simple beta-binomial probability model in the literature; however, the effect of this choice on the resulting design operating characteristics relative to other reasonable alternatives has not been fully examined. Motivated by the Advanced REperfusion STrategies for Refractory Cardiac Arrest trial, we posit that a logistic probability model coupled with an urn or permuted block randomization method will alleviate some of the practical limitations engendered by the conventional implementation of a two-arm Bayesian response-adaptive design with binary outcomes. In this article, we discuss to what extent this solution works and when it does not. Methods: A computer simulation study was performed to evaluate the relative merits of a Bayesian response-adaptive design for the Advanced REperfusion STrategies for Refractory Cardiac Arrest trial using Thompson sampling based on a logistic regression probability model coupled with either an urn or permuted block randomization method that limits deviations from the evolving target allocation ratio. The different implementations of the response-adaptive design were evaluated for type I error rate control across various null response rates and for power, among other performance metrics.
Results: The logistic regression probability model engenders smaller average sample sizes with similar power, better control of the type I error rate, and more favorable treatment arm sample size distributions than the conventional beta-binomial probability model, and designs using the alternative randomization methods have a negligible chance of a sample size imbalance in the wrong direction. Conclusion: Pairing the logistic regression probability model with either of the alternative randomization methods results in a much-improved response-adaptive design with regard to important operating characteristics, including type I error rate control and the risk of a sample size imbalance in favor of the inferior treatment.
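The beta-binomial comparator the article starts from can be sketched in a few lines: under independent Beta posteriors for the two arms' response rates, Thompson sampling sets the randomization probability for an arm from the posterior probability that it is the better one. A Monte Carlo sketch of that posterior probability (the article's logistic-model and urn/block machinery is not reproduced here):

```python
import random

def thompson_allocation_prob(successes, failures, draws=10_000, rng=None):
    """Estimate P(arm 1 is better than arm 0) under independent
    Beta(1 + s, 1 + f) posteriors for binary outcomes -- the simple
    beta-binomial Thompson sampling model used as the baseline
    comparator. The allocation probability for arm 1 is typically a
    (possibly tempered) function of this posterior probability."""
    rng = rng or random.Random(0)  # seeded for reproducibility
    wins = sum(
        rng.betavariate(1 + successes[1], 1 + failures[1])
        > rng.betavariate(1 + successes[0], 1 + failures[0])
        for _ in range(draws)
    )
    return wins / draws
```

With symmetric data the estimate sits near 0.5; as the evidence separates, the allocation probability drifts toward the better arm, which is exactly the drift that the urn and permuted block methods in the article are designed to keep from overshooting.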


2015 ◽  
Vol 26 (4) ◽  
pp. 1912-1924 ◽  
Author(s):  
Jeong Youn Lim ◽  
Jong-Hyeon Jeong

We propose a cause-specific quantile residual life regression in which the cause-specific quantile residual life, defined as the inverse of the cumulative incidence function of the residual life distribution of a specific type of event of interest conditional on a fixed time point, is log-linear in observable covariates. The proposed test statistic for the effects of prognostic factors does not involve estimation of the improper probability density function of the cause-specific residual life distribution under competing risks. The asymptotic distribution of the test statistic is derived. Simulation studies are performed to assess the finite-sample properties of the proposed estimating equation and the test statistic. The proposed method is illustrated with a real dataset from a clinical trial on breast cancer.


2018 ◽  
Vol 53 (7) ◽  
pp. 716-719
Author(s):  
Monica R. Lininger ◽  
Bryan L. Riemann

Objective: To describe the concept of statistical power as related to comparative interventions and how various factors, including sample size, affect statistical power. Background: Having a sufficiently sized sample for a study is necessary for an investigation to demonstrate that an effective treatment is statistically superior. Many researchers fail to conduct and report a priori sample-size estimates, which makes it difficult to interpret nonsignificant results and causes the clinician to question the planning of the research design. Description: Statistical power is the probability of statistically detecting a treatment effect when one truly exists. The α level, the magnitude of the difference between groups, the variability of the data, and the sample size all affect statistical power. Recommendations: Authors should conduct and provide the results of a priori sample-size estimations in the literature. This will assist clinicians in determining whether the lack of a statistically significant treatment effect is due to an underpowered study or to a treatment's actually having no effect.
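The dependence of power on the α level, the effect size, the variability, and the sample size can be made explicit with the standard two-sample normal approximation. A minimal sketch of the kind of a priori calculation the authors recommend reporting:

```python
from statistics import NormalDist

def power_two_sample(n_per_group, delta, sigma, alpha=0.05):
    """Approximate power of a two-sided, two-sample z-test to detect a
    mean difference `delta` with `n_per_group` subjects per group and
    common SD `sigma`. Normal-approximation sketch (ignores the small
    probability of rejecting in the wrong tail)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    ncp = abs(delta) / (sigma * (2 / n_per_group) ** 0.5)  # noncentrality
    return 1 - NormalDist().cdf(z_a - ncp)
```

Doubling the detectable difference, halving the SD, or quadrupling the per-group n all move the noncentrality parameter the same way, which is why an underpowered study and a truly null treatment can produce indistinguishable nonsignificant results.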

