A Bayesian basket trial design using a calibrated Bayesian hierarchical model

2018, Vol 15 (2), pp. 149-158
Author(s): Yiyi Chu, Ying Yuan

Background: The basket trial evaluates the treatment effect of a targeted therapy in patients with the same genetic or molecular aberration, regardless of their cancer types. Bayesian hierarchical modeling has been proposed to adaptively borrow information across cancer types to improve the statistical power of basket trials. Although conceptually attractive, research has shown that Bayesian hierarchical models cannot appropriately determine the degree of information borrowing and may lead to substantially inflated type I error rates. Methods: We propose a novel calibrated Bayesian hierarchical model approach to evaluate the treatment effect in basket trials. In our approach, the shrinkage parameter that controls information borrowing is not regarded as an unknown parameter. Instead, it is defined as a function of a similarity measure of the treatment effect across tumor subgroups. The key is that the function is calibrated using simulation such that information is strongly borrowed across subgroups if their treatment effects are similar and barely borrowed if the treatment effects are heterogeneous. Results: The simulation study shows that our method has substantially better controlled type I error rates than the Bayesian hierarchical model. In some scenarios (for example, when the true response rate is between the null and alternative), the type I error rate of the proposed method can be inflated from 10% up to 20%, but is still better controlled than that of the Bayesian hierarchical model. Limitations: The proposed design assumes a binary endpoint. Extension of the proposed design to ordinal and time-to-event endpoints is worthy of further investigation. Conclusion: The calibrated Bayesian hierarchical model provides a practical approach to design basket trials with more flexibility and better controlled type I error rates than the Bayesian hierarchical model. The software for implementing the proposed design is available at http://odin.mdacc.tmc.edu/~yyuan/index_code.html
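
To make the calibration idea concrete, below is a minimal Python sketch of the core mechanism. It assumes a chi-square-type homogeneity statistic as the similarity measure and an exponential mapping from similarity to the shrinkage parameter; the function names and the constants a and b are placeholders standing in for the mapping that the paper calibrates by simulation, and a normal-approximation shrinkage on the logit scale replaces the paper's full Bayesian fit.

```python
import numpy as np

def similarity_measure(x, n):
    """Chi-square-type homogeneity statistic of the observed response
    rates across baskets (a hypothetical choice of similarity measure)."""
    p_hat = x / n
    p_bar = x.sum() / n.sum()
    return np.sum(n * (p_hat - p_bar) ** 2 / (p_bar * (1 - p_bar)))

def calibrated_sigma2(T, a=0.1, b=0.8):
    """Map similarity T to the shrinkage parameter sigma^2: homogeneous
    baskets (small T) give small sigma^2 and hence strong borrowing;
    heterogeneous baskets give large sigma^2 and little borrowing.
    a and b are placeholders to be tuned by simulation, as the paper describes."""
    return a * np.exp(b * T)

def shrunken_logit_rates(x, n):
    """Normal-approximation shrinkage of per-basket logit response rates
    toward the common mean (a sketch, not the paper's posterior computation)."""
    x_adj, n_adj = x + 0.5, n + 1.0               # continuity correction
    theta = np.log(x_adj / (n_adj - x_adj))       # logit response rates
    v = 1.0 / x_adj + 1.0 / (n_adj - x_adj)       # approximate sampling variances
    sigma2 = calibrated_sigma2(similarity_measure(x, n))
    w = 1.0 / (v + sigma2)
    mu = np.sum(w * theta) / np.sum(w)            # estimated common mean
    B = v / (v + sigma2)                          # shrinkage fraction per basket
    return B * mu + (1 - B) * theta

x = np.array([6, 5, 2, 1])      # responders per basket (illustrative data)
n = np.array([20, 19, 21, 18])  # patients per basket
print(shrunken_logit_rates(x, n))
```

In the actual design, the go/no-go decision for each basket would then be based on the posterior probability that its response rate exceeds the null value, with the calibration of the mapping chosen to meet the targeted type I error rate.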

2019, Vol 14 (2), pp. 399-425
Author(s): Haolun Shi, Guosheng Yin

2014, Vol 38 (2), pp. 109-112
Author(s): Daniel Furtado Ferreira

Sisvar is a statistical analysis system widely used by the scientific community to produce statistical analyses and scientific results and conclusions. Its wide adoption is due to its accuracy, precision, simplicity, and robustness. Among its many analysis options, one that is not so widely used is multiple comparison via bootstrap approaches. This paper reviews the subject and shows some advantages of using Sisvar to perform such analyses to compare treatment means. Tests like Dunnett, Tukey, Student-Newman-Keuls, and Scott-Knott can alternatively be performed by bootstrap methods, showing greater power and better control of experimentwise type I error rates under non-normal, asymmetric, platykurtic, or leptokurtic distributions.
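
As an illustration of the bootstrap idea behind such procedures, the sketch below implements an all-pairs comparison in the spirit of Tukey's test; it is not Sisvar's exact algorithm. The null distribution of the maximum studentized pairwise difference is estimated by resampling centered residuals, which controls the experimentwise error rate without assuming normality.

```python
import numpy as np

def bootstrap_all_pairs(groups, n_boot=5000, alpha=0.05, seed=1):
    """Bootstrap analogue of a Tukey-type all-pairs comparison of
    treatment means (illustrative sketch). Returns the index pairs of
    groups declared different at experimentwise level alpha."""
    rng = np.random.default_rng(seed)
    k = len(groups)

    def pairwise_stats(samples):
        m = np.array([s.mean() for s in samples])
        se2 = np.array([s.var(ddof=1) / len(s) for s in samples])
        return np.array([abs(m[i] - m[j]) / np.sqrt(se2[i] + se2[j])
                         for i in range(k) for j in range(i + 1, k)])

    observed = pairwise_stats(groups)
    centered = [g - g.mean() for g in groups]   # centering makes H0 true
    null_max = np.array([
        pairwise_stats([rng.choice(c, size=len(c)) for c in centered]).max()
        for _ in range(n_boot)])
    crit = np.quantile(null_max, 1 - alpha)     # critical value of the max statistic
    pairs = [(i, j) for i in range(k) for j in range(i + 1, k)]
    return [p for p, t in zip(pairs, observed) if t > crit]

rng = np.random.default_rng(0)
# three skewed (gamma) treatment groups; only the third mean is shifted
groups = [rng.gamma(2.0, 2.0, 12) + d for d in (0.0, 0.0, 4.0)]
print(bootstrap_all_pairs(groups))   # typically flags pairs (0, 2) and (1, 2)
```

Because the critical value comes from the resampled maximum over all pairs, every pairwise test is referred to the same threshold, which is what keeps the familywise error rate at alpha.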


2021
Author(s): Megha Joshi, James E. Pustejovsky, S. Natasha Beretvas

The most common and well-known meta-regression models work under the assumption that there is only one effect size estimate per study and that the estimates are independent. However, meta-analytic reviews of social science research often include multiple effect size estimates per primary study, leading to dependence in the estimates. Some meta-analyses also include multiple studies conducted by the same lab or investigator, creating another potential source of dependence. An increasingly popular method to handle dependence is robust variance estimation (RVE), but this method can result in inflated Type I error rates when the number of studies is small. Small-sample correction methods for RVE have been shown to control Type I error rates adequately but may be overly conservative, especially for tests of multiple-contrast hypotheses. We evaluated an alternative method for handling dependence, cluster wild bootstrapping, which has been examined in the econometrics literature but not in the context of meta-analysis. Results from two simulation studies indicate that cluster wild bootstrapping maintains adequate Type I error rates and provides more power than extant small-sample correction methods, particularly for multiple-contrast hypothesis tests. We recommend using cluster wild bootstrapping to conduct hypothesis tests for meta-analyses with a small number of studies. We have also created an R package that implements such tests.
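
To show mechanically how cluster wild bootstrapping works, here is a minimal Python sketch for testing one coefficient in a simple unweighted meta-regression with dependent effect sizes. It illustrates only the bare technique, not the authors' R implementation: residuals from the null model are sign-flipped study by study with Rademacher weights, which preserves within-study dependence, and the cluster-robust t statistic is recomputed for each pseudo-sample.

```python
import numpy as np

def cluster_wild_bootstrap_test(y, X, cluster, coef=1, n_boot=1999, seed=2):
    """Cluster wild bootstrap p-value for H0: beta[coef] = 0 in an
    unweighted meta-regression (a sketch of the generic technique)."""
    rng = np.random.default_rng(seed)
    ids = np.unique(cluster)

    def robust_t(y_vec):
        beta, *_ = np.linalg.lstsq(X, y_vec, rcond=None)
        e = y_vec - X @ beta
        bread = np.linalg.inv(X.T @ X)
        meat = np.zeros_like(bread)
        for g in ids:
            u = X[cluster == g].T @ e[cluster == g]   # score for study g
            meat += np.outer(u, u)
        V = bread @ meat @ bread                      # CR0 sandwich variance
        return beta[coef] / np.sqrt(V[coef, coef])

    t_obs = robust_t(y)
    X0 = np.delete(X, coef, axis=1)                   # null model drops tested column
    b0, *_ = np.linalg.lstsq(X0, y, rcond=None)
    fit0, e0 = X0 @ b0, y - X0 @ b0
    idx = np.searchsorted(ids, cluster)               # study index per effect size
    t_boot = np.empty(n_boot)
    for b in range(n_boot):
        w = rng.choice([-1.0, 1.0], size=len(ids))    # Rademacher sign per study
        t_boot[b] = robust_t(fit0 + e0 * w[idx])
    return t_obs, (np.abs(t_boot) >= abs(t_obs)).mean()

rng = np.random.default_rng(0)
cluster = np.repeat(np.arange(10), 3)                 # 10 studies, 3 effects each
X = np.column_stack([np.ones(30), rng.normal(size=30)])
y = 0.3 + rng.normal(size=10)[cluster] + rng.normal(size=30)  # H0 true for X[:, 1]
print(cluster_wild_bootstrap_test(y, X, cluster))     # p uniform under H0; rejects ~5%
```

Drawing one weight per study (rather than per effect size) is the essential design choice: it keeps the dependence structure of each study's effect sizes intact in every bootstrap replicate.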


2020
Author(s): Jeff Miller

Contrary to the warning of Miller (1988), Rousselet and Wilcox (2020) argued that it is better to summarize each participant’s single-trial reaction times (RTs) in a given condition with the median than with the mean when comparing the central tendencies of RT distributions across experimental conditions. They acknowledged that median RTs can produce inflated Type I error rates when conditions differ in the number of trials tested, consistent with Miller’s warning, but they showed that the bias responsible for this error rate inflation could be eliminated with a bootstrap bias correction technique. The present simulations extend their analysis by examining the power of bias-corrected medians to detect true experimental effects and by comparing this power with the power of analyses using means and regular medians. Unfortunately, although bias-corrected medians solve the problem of inflated Type I error rates, their power is lower than that of means or regular medians in many realistic situations. In addition, even when conditions do not differ in the number of trials tested, the power of tests (e.g., t-tests) is generally lower using medians rather than means as the summary measures. Thus, the present simulations demonstrate that summary means will often provide the most powerful test for differences between conditions, and they show what aspects of the RT distributions determine the size of the power advantage for means.
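
For concreteness, a small Python sketch of this kind of power comparison follows. It assumes lognormal single-trial RTs as a stand-in for real RT distributions and uses the bootstrap bias correction 2*median - mean(bootstrap medians) discussed by Rousselet and Wilcox (2020); the trial counts, shift, and distribution parameters are illustrative only, not those of the paper's simulations.

```python
import numpy as np
from scipy import stats

def bc_median(rt, n_boot=200, seed=None):
    """Bootstrap bias-corrected median of one participant's single-trial
    RTs: 2 * median - mean of bootstrap medians (sketch of the correction)."""
    rng = np.random.default_rng(seed)
    boots = [np.median(rng.choice(rt, size=len(rt))) for _ in range(n_boot)]
    return 2.0 * np.median(rt) - np.mean(boots)

def power(summary, n_subj=20, n_trials=(20, 60), shift=15.0,
          n_sims=300, alpha=0.05, seed=3):
    """Crude power estimate for a paired t-test on per-participant
    summaries, with unequal trial counts across the two conditions."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sims):
        a = [summary(300 + rng.lognormal(5.0, 0.5, n_trials[0]))
             for _ in range(n_subj)]
        b = [summary(300 + shift + rng.lognormal(5.0, 0.5, n_trials[1]))
             for _ in range(n_subj)]
        hits += stats.ttest_rel(a, b).pvalue < alpha
    return hits / n_sims

# compare the three summary measures under the same simulated design
for f in (np.mean, np.median, bc_median):
    print(f.__name__, power(f))
```

Under a pure shift between conditions, a run like this typically reproduces the paper's qualitative pattern: the mean-based test is at least as powerful as the median-based tests, and the bias correction does not recover the lost power.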

