Simulation and minimization: Technical advances for factorial experiments designed to optimize clinical interventions

2019 ◽  
Author(s):  
Jocelyn Lara Kuhn ◽  
Radley Christopher Sheldrick ◽  
Sarabeth Broder-Fingert ◽  
Andrea Chu ◽  
Lisa Fortuna ◽  
...  

Abstract Background The Multiphase Optimization Strategy (MOST) is designed to maximize the impact of clinical healthcare interventions, which are typically multicomponent and increasingly complex. The MOST framework often relies on factorial experiments to identify which components of an intervention are most effective, efficient, and scalable. When assigning participants to conditions in factorial experiments, researchers must select an assignment method that produces balanced sample sizes and equivalence of covariates across conditions without being predictable. Historically, the most common procedures have been simple randomization and stratification with blocking; minimization, an increasingly used alternative, assigns each participant to the condition that minimizes differences in covariates and sample size across study conditions. Methods In the context of a MOST optimization trial with a 2x2x2x2 factorial design (4 components, 16 cells), we used computer simulation to empirically test three subject assignment methods: simple randomization, stratification with blocking, and minimization. We compared these methods with respect to sample size balance across conditions, equivalence across conditions on key covariates, and unpredictability of assignments. Leveraging an existing dataset to compare the three allocation methods, we conducted 250 computerized simulations using bootstrap samples of 304 participants, the planned sample size for the proposed study. Results Simple randomization, the most unpredictable allocation method, produced the least balanced sample sizes and the poorest equivalence of covariates across the 16 study cells. Stratification with blocking performed well on the stratified variables and yielded sample balance and predictability similar to minimization. Minimization, despite its greater complexity and cost, was most successful in achieving balanced sample sizes and equivalence across a large number of covariates. Conclusions Unlike simple randomization, both minimization and stratification with blocking are methodologically sound options for factorial designs. Based on the computer simulation results and the priorities of this MOST optimization trial, minimization was selected as the optimal subject allocation method. Minimization is used infrequently in randomized experiments but represents an important technical advance that researchers implementing multi-arm and factorial studies should consider.
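As an illustration of the minimization procedure evaluated above, the sketch below implements a simple Taves/Pocock–Simon-style minimization rule for a 16-cell factorial design with a biased-coin random element. The covariate names, the scoring rule, and the probability of taking the best cell are illustrative assumptions, not the authors' actual algorithm.

```python
import random
from collections import defaultdict

# Hypothetical illustration of minimization for a 16-cell (2x2x2x2) factorial design.
# Covariates and the biased-coin probability are illustrative assumptions.

CELLS = [(a, b, c, d) for a in (0, 1) for b in (0, 1) for c in (0, 1) for d in (0, 1)]

def imbalance_score(counts, cov_counts, cell, participant):
    """Total imbalance if `participant` were assigned to `cell`."""
    # Sample-size imbalance: how far this cell would move above the smallest cell.
    size_term = (counts[cell] + 1) - min(counts.values())
    # Covariate imbalance: how many participants with the same covariate levels
    # are already in the candidate cell.
    cov_term = sum(cov_counts.get((cell, k, v), 0) + 1 for k, v in participant.items())
    return size_term + cov_term

def minimize_assign(participants, p_best=0.8, seed=0):
    """Assign each participant to the cell minimizing imbalance,
    with a random element (probability p_best of taking the best cell)."""
    rng = random.Random(seed)
    counts = {cell: 0 for cell in CELLS}
    cov_counts = defaultdict(int)
    assignments = []
    for person in participants:
        scored = sorted(CELLS, key=lambda c: imbalance_score(counts, cov_counts, c, person))
        cell = scored[0] if rng.random() < p_best else rng.choice(scored[1:])
        counts[cell] += 1
        for k, v in person.items():
            cov_counts[(cell, k, v)] += 1
        assignments.append(cell)
    return assignments

# Example: 304 simulated participants with two illustrative binary covariates.
rng = random.Random(1)
people = [{"language": rng.randint(0, 1), "site": rng.randint(0, 1)} for _ in range(304)]
cells = minimize_assign(people)
print(max(cells.count(c) for c in CELLS) - min(cells.count(c) for c in CELLS))
```

Lowering `p_best` trades some balance for greater unpredictability, which is the tradeoff discussed in the abstract.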


2019 ◽  
Vol 19 (1) ◽  
Author(s):  
Jocelyn Kuhn ◽  
Radley Christopher Sheldrick ◽  
Sarabeth Broder-Fingert ◽  
Andrea Chu ◽  
Lisa Fortuna ◽  
...  

Abstract Background The Multiphase Optimization Strategy (MOST) is designed to maximize the impact of clinical healthcare interventions, which are typically multicomponent and increasingly complex. MOST often relies on factorial experiments to identify which components of an intervention are most effective, efficient, and scalable. When assigning participants to conditions in factorial experiments, researchers must be careful to select the assignment procedure that will result in balanced sample sizes and equivalence of covariates across conditions while maintaining unpredictability. Methods In the context of a MOST optimization trial with a 2x2x2x2 factorial design, we used computer simulation to empirically test five subject allocation procedures: simple randomization, stratified randomization with permuted blocks, maximum tolerated imbalance (MTI), minimal sufficient balance (MSB), and minimization. We compared these methods across the 16 study cells with respect to sample size balance, equivalence on key covariates, and unpredictability. Leveraging an existing dataset to compare these procedures, we conducted 250 computerized simulations using bootstrap samples of 304 participants. Results Simple randomization, the most unpredictable procedure, generated poor sample balance and equivalence of covariates across the 16 study cells. Stratified randomization with permuted blocks performed well on stratified variables but resulted in poor equivalence on other covariates and poor balance. MTI, MSB, and minimization had higher complexity and cost. MTI resulted in balance close to pre-specified thresholds and a higher degree of unpredictability, but poor equivalence of covariates. MSB had 19.7% deterministic allocations, poor sample balance and improved equivalence on only a few covariates. Minimization was most successful in achieving balanced sample sizes and equivalence across a large number of covariates, but resulted in 34% deterministic allocations. Small differences in proportion of correct guesses were found across the procedures. Conclusions Based on the computer simulation results and priorities within the study context, minimization with a random element was selected for the planned research study. Minimization with a random element, as well as computer simulation to make an informed randomization procedure choice, are utilized infrequently in randomized experiments but represent important technical advances that researchers implementing multi-arm and factorial studies should consider.
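The bootstrap evaluation loop described in the Methods (250 simulations, samples of n = 304, 16 cells) can be sketched as follows. The stand-in dataset, the two allocation procedures shown, and the balance metrics are simplified assumptions for illustration; they are not the authors' code.

```python
import numpy as np

# Minimal sketch of a bootstrap comparison of allocation procedures across 16 cells.
# Source data, procedures, and metrics are illustrative stand-ins only.

N, N_SIMS, N_CELLS = 304, 250, 16
rng = np.random.default_rng(42)

# Stand-in "existing dataset": one binary covariate per participant.
source = rng.integers(0, 2, size=2000)

def simple_randomization(n, rng):
    return rng.integers(0, N_CELLS, size=n)

def permuted_blocks(n, rng, block_size=16):
    # One participant per cell within each block of 16 (stratification omitted for brevity).
    blocks = []
    while len(blocks) * block_size < n:
        blocks.append(rng.permutation(N_CELLS))
    return np.concatenate(blocks)[:n]

def evaluate(allocate):
    imbalances, cov_gaps = [], []
    for _ in range(N_SIMS):
        covariate = rng.choice(source, size=N, replace=True)   # bootstrap sample
        cells = allocate(N, rng)
        counts = np.bincount(cells, minlength=N_CELLS)
        imbalances.append(counts.max() - counts.min())
        # Covariate equivalence: spread of covariate means across non-empty cells.
        means = [covariate[cells == c].mean() for c in range(N_CELLS) if (cells == c).any()]
        cov_gaps.append(max(means) - min(means))
    return np.mean(imbalances), np.mean(cov_gaps)

for name, proc in [("simple", simple_randomization), ("blocks", permuted_blocks)]:
    size_gap, cov_gap = evaluate(proc)
    print(f"{name}: mean cell-size gap={size_gap:.1f}, mean covariate gap={cov_gap:.2f}")
```

Additional procedures (MTI, MSB, minimization) would slot into the same harness, which is the point of simulating before choosing a procedure.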


2019 ◽  
Author(s):  
Pengchao Ye ◽  
Wenbin Ye ◽  
Congting Ye ◽  
Shuchao Li ◽  
Lishan Ye ◽  
...  

Abstract Motivation Single-cell RNA-sequencing (scRNA-seq) is fast becoming a powerful technique for studying dynamic gene regulation at unprecedented resolution. However, scRNA-seq data suffer from an extremely high dropout rate and cell-to-cell variability, demanding new methods to recover lost gene expression. Despite the availability of various dropout imputation approaches for scRNA-seq, most studies focus on data with a medium or large number of cells, while few have explicitly investigated performance across different sample sizes or the applicability of these approaches to small or imbalanced data. It is imperative to develop new imputation approaches with higher generalizability for data with various sample sizes. Results We proposed a method called scHinter for imputing dropout events in scRNA-seq data, with special emphasis on data of limited sample size. scHinter incorporates a voting-based ensemble distance and leverages the synthetic minority oversampling technique (SMOTE) for random interpolation. A hierarchical framework is also embedded in scHinter to increase the reliability of the imputation for small samples. We demonstrated the ability of scHinter to recover gene expression measurements across a wide spectrum of scRNA-seq datasets with varied sample sizes, and comprehensively examined the impact of sample size and cluster number on imputation. Comprehensive evaluation of scHinter across diverse scRNA-seq datasets with imbalanced or limited sample sizes showed that it achieved higher and more robust performance than competing approaches, including MAGIC, scImpute, SAVER and netSmooth. Availability and implementation Freely available for download at https://github.com/BMILAB/scHinter. Supplementary information Supplementary data are available at Bioinformatics online.
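For readers unfamiliar with SMOTE-style interpolation, the sketch below shows the general idea of imputing a dropout by randomly interpolating toward values observed in neighboring cells. It is a simplified illustration under stated assumptions, not scHinter's actual algorithm (neighbor selection, the ensemble distance, and the hierarchical framework are omitted).

```python
import numpy as np

# Illustrative sketch of SMOTE-style random interpolation for dropout imputation.
# NOT scHinter's algorithm; neighbor choice and interpolation rule are assumptions.

rng = np.random.default_rng(0)

def impute_dropouts(expr, k=5):
    """expr: genes x cells matrix; zeros are treated as candidate dropouts."""
    n_genes, n_cells = expr.shape
    imputed = expr.copy().astype(float)
    # Euclidean distance between cells on log-transformed expression.
    log_expr = np.log1p(expr)
    dist = np.linalg.norm(log_expr[:, :, None] - log_expr[:, None, :], axis=0)
    for j in range(n_cells):
        neighbors = np.argsort(dist[j])[1:k + 1]          # k nearest cells, excluding self
        for g in range(n_genes):
            if expr[g, j] == 0:
                donor_vals = expr[g, neighbors]
                donor_vals = donor_vals[donor_vals > 0]
                if donor_vals.size:
                    # SMOTE-style step: random point between the zero and a donor value.
                    donor = rng.choice(donor_vals)
                    imputed[g, j] = rng.uniform(0, 1) * donor
    return imputed

# Toy example: 6 genes x 8 cells with ~30% artificial dropout.
true = rng.poisson(5, size=(6, 8)).astype(float)
observed = true * (rng.random((6, 8)) > 0.3)
print(np.round(impute_dropouts(observed), 1))
```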


2015 ◽  
Vol 27 (1) ◽  
pp. 114-125 ◽  
Author(s):  
BC Tai ◽  
ZJ Chen ◽  
D Machin

In designing randomised clinical trials involving competing risks endpoints, it is important to consider competing events to ensure appropriate determination of sample size. We conduct a simulation study to compare sample sizes obtained from the cause-specific hazard and cumulative incidence (CMI) approaches, first assuming exponential event times. As the proportional subdistribution hazard assumption does not hold for the CMI exponential (CMIExponential) model, we further investigate the impact of violating this assumption by comparing the results of the CMI exponential model with those of a CMI model assuming a Gompertz distribution (CMIGompertz), for which the proportionality assumption is tenable. The simulation suggests that the CMIExponential approach requires a considerably larger sample size when treatment reduces the hazards of both the main event, A, and the competing risk, B. When treatment has a beneficial effect on A but no effect on B, the sample sizes required by both methods are largely similar, especially for a large reduction in the main risk. If treatment has a protective effect on A but adversely affects B, then the sample size required by CMIExponential is notably smaller than that required by the cause-specific hazard approach for small to moderate reductions in the main risk. Further, a smaller sample size is required for CMIGompertz than for CMIExponential. The choice between a cause-specific hazard and a CMI model for competing risks outcomes has implications for the study design. It should be made on the basis of the clinical question of interest and the validity of the associated model assumption.
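A minimal sketch of the kind of competing-risks simulation described above, with exponential cause-specific hazards for the main event A and the competing event B, is shown below. The hazard values, follow-up time, and the two-proportion test used to estimate power are illustrative assumptions only, not the authors' design parameters.

```python
import numpy as np

# Sketch: simulate competing risks with exponential cause-specific hazards and
# estimate power for detecting a treatment effect on the cumulative incidence of A.
# Hazards, follow-up, and the test are illustrative assumptions.

rng = np.random.default_rng(2024)

def simulate_arm(n, hazard_a, hazard_b, follow_up=5.0):
    """Indicator of observing main event A within follow-up, accounting for competing B."""
    t_a = rng.exponential(1 / hazard_a, size=n)   # latent time to main event A
    t_b = rng.exponential(1 / hazard_b, size=n)   # latent time to competing event B
    t = np.minimum(t_a, t_b)
    return (t_a < t_b) & (t <= follow_up)

def empirical_power(n_per_arm, hr_a, hr_b, n_sims=2000, alpha_z=1.959964):
    """Power of a two-proportion z-test on the cumulative incidence of A."""
    hits = 0
    for _ in range(n_sims):
        ctrl = simulate_arm(n_per_arm, hazard_a=0.20, hazard_b=0.10)
        trt = simulate_arm(n_per_arm, hazard_a=0.20 * hr_a, hazard_b=0.10 * hr_b)
        p1, p2 = ctrl.mean(), trt.mean()
        p_pool = (ctrl.sum() + trt.sum()) / (2 * n_per_arm)
        se = np.sqrt(2 * p_pool * (1 - p_pool) / n_per_arm)
        if se > 0 and abs(p1 - p2) / se > alpha_z:
            hits += 1
    return hits / n_sims

# Treatment halves the hazard of A; vary its effect on the competing risk B.
for hr_b in (1.0, 0.5):
    print(f"HR_B={hr_b}: empirical power = {empirical_power(300, hr_a=0.5, hr_b=hr_b):.2f}")
```

Repeating such a loop over candidate sample sizes until the target power is reached is one way the required sample size can be compared across modelling assumptions.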


2017 ◽  
Author(s):  
Xiao Chen ◽  
Bin Lu ◽  
Chao-Gan Yan

Abstract Concerns regarding the reproducibility of resting-state functional magnetic resonance imaging (R-fMRI) findings have been raised. Little is known about how to operationally define R-fMRI reproducibility and to what extent it is affected by multiple comparison correction strategies and sample size. We comprehensively assessed two aspects of reproducibility, test-retest reliability and replicability, for widely used R-fMRI metrics in both between-subject contrasts of sex differences and within-subject comparisons of eyes-open and eyes-closed (EOEC) conditions. We noted that permutation testing with Threshold-Free Cluster Enhancement (TFCE), a strict multiple comparison correction strategy, reached the best balance between the family-wise error rate (under 5%) and test-retest reliability / replicability (e.g., 0.68 for test-retest reliability and 0.25 for replicability of the amplitude of low-frequency fluctuations (ALFF) for between-subject sex differences, and 0.49 for replicability of ALFF for within-subject EOEC differences). Although R-fMRI indices attained moderate reliability, they replicated poorly across distinct datasets (replicability < 0.3 for between-subject sex differences, < 0.5 for within-subject EOEC differences). By randomly drawing different sample sizes from a single site, we found that reliability, sensitivity and positive predictive value (PPV) rose as sample size increased. Small sample sizes (e.g., < 80, i.e., 40 per group) not only yielded minimal power (sensitivity < 2%) but also decreased the likelihood that significant results reflect "true" effects (PPV < 0.26) for sex differences. Our findings have implications for how to select multiple comparison correction strategies and highlight the importance of sufficiently large sample sizes in R-fMRI studies to enhance reproducibility.
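The dependence of the positive predictive value on power reported above follows from a standard relationship between power, the significance threshold, and the prior probability that an effect is real. A small illustrative calculation is shown below; the prior used here is an assumption for illustration, not a value from the study.

```python
# Illustrative calculation of positive predictive value (PPV) from power, the
# significance threshold, and an assumed prior probability that an effect is real.
# The prior below is an assumption, not a value from the study.

def ppv(power, alpha=0.05, prior=0.25):
    """P(effect is true | test is significant)."""
    true_pos = power * prior
    false_pos = alpha * (1 - prior)
    return true_pos / (true_pos + false_pos)

# Low power (small samples) vs. adequate power (large samples).
for power in (0.02, 0.80):
    print(f"power={power:.2f} -> PPV={ppv(power):.2f}")
```

With power of only 2%, most "significant" results are false positives even under a moderately optimistic prior, which is the pattern the study observes at small sample sizes.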


2017 ◽  
Vol 11 (22) ◽  
Author(s):  
Juan Rositas Martínez

Keywords: confidence intervals, Cronbach's alpha, effect size, factor analysis, hypothesis testing, sample size, structural equation modeling. Abstract. The purpose of this paper is to contribute to fulfilling the objectives of social sciences research, such as proper estimation, explanation, prediction and control of levels of social-reality variables and their interrelationships, especially when dealing with quantitative variables. It was shown that the sample size, or the number of observations to be collected and analyzed, is critical both for the adequacy of the selected method of statistical inference and for the degree of impact achieved in its results, especially for complying with the reporting guidelines issued by the American Psychological Association. Methods and formulations were investigated for determining sample sizes that yield good levels of estimation when establishing confidence intervals, with reasonable widths and effect magnitudes that are relevant and significant. Practical rules suggested by several researchers for determining sample sizes were tested, and as a result a guide was compiled for determining sample sizes for dichotomous, continuous, discrete and Likert variables, correlation and regression methods, factor analysis, Cronbach's alpha, and structural equation models. It is recommended that readers build scenarios with this guide and become aware of the implications and relevance of sample size, in scientific research and in decision making, when trying to meet the aforementioned objectives.
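By way of illustration, the snippet below shows two standard textbook sample-size formulas for confidence intervals, of the kind such a guide integrates. The margins of error, standard deviation, and z-value are illustrative choices and are not taken from the paper.

```python
import math

# Standard textbook sample-size formulas for confidence intervals, shown only to
# illustrate the kind of calculation the guide above integrates; the example
# margins of error and the z-value are illustrative, not taken from the paper.

def n_for_proportion(margin, p=0.5, z=1.96):
    """n to estimate a proportion p within +/- margin at ~95% confidence."""
    return math.ceil(z**2 * p * (1 - p) / margin**2)

def n_for_mean(margin, sigma, z=1.96):
    """n to estimate a mean within +/- margin when the SD is sigma."""
    return math.ceil((z * sigma / margin)**2)

print(n_for_proportion(0.05))        # 385 for +/- 5 points on a dichotomous item
print(n_for_mean(0.2, sigma=1.0))    # 97 for +/- 0.2 SD on a continuous scale
```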


2022 ◽  
Author(s):  
Megan MacPherson ◽  
Kohle Merry ◽  
Sean Locke ◽  
Mary Jung

Abstract With thousands of mHealth solutions on the market, patients and healthcare providers struggle to identify which solution to use or prescribe. The lack of evidence-based mHealth solutions may be due to limited research on intervention development and the continued use of traditional research methods for mHealth evaluation. The Multiphase Optimization Strategy (MOST) is a framework which aids in developing interventions that are economical, affordable, scalable, and effective (EASE). MOST Phase I highlights the importance of formative intervention development, a stage often overlooked and rarely published. The aim of MOST Phase I is to identify candidate intervention components, create a conceptual model, and define the optimization objective. While MOST sets these three targets, the framework itself does not provide robust guidance on how to conduct quality research within Phase I, or on what steps can be taken to identify potential intervention components, develop the conceptual model, and achieve intervention EASE with the implementation context in mind. To advance the applicability of MOST within the field of implementation science, this paper provides an account of the methods used to develop an mHealth intervention. Specifically, we provide a comprehensive example of how to achieve the goals of MOST Phase I by outlining the formative development of an mHealth prompting intervention within a diabetes prevention program. Additionally, recommendations are proposed for future researchers conducting formative research on mHealth interventions with implementation in mind. Given its considerable reach, mHealth has the potential to positively impact public health by decreasing implementation costs and improving accessibility. MOST is well-suited to the efficient development and optimization of mHealth interventions. By using an implementation-focused lens and outlining the steps in developing an mHealth intervention using MOST Phase I, this work may guide future intervention developers towards maximizing the impact of mHealth outside of the research laboratory.


Trials ◽  
2019 ◽  
Vol 20 (1) ◽  
Author(s):  
Sarabeth Broder-Fingert ◽  
Jocelyn Kuhn ◽  
Radley Christopher Sheldrick ◽  
Andrea Chu ◽  
Lisa Fortuna ◽  
...  

Abstract Background Delivery of behavioral interventions is complex, as the majority of interventions consist of multiple components used either simultaneously, sequentially, or both. The importance of clearly delineating delivery strategies within these complex interventions—and furthermore understanding the impact of each strategy on effectiveness—has recently emerged as an important facet of intervention research. Yet few methodologies exist to prospectively test the effectiveness of delivery strategies and how they impact implementation. In the current paper, we describe a study protocol for a large randomized controlled trial in which we will use the Multiphase Optimization Strategy (MOST), a novel framework developed to optimize interventions, to test the effectiveness of intervention delivery strategies using a factorial design. We apply this framework to the delivery of Family Navigation (FN), an evidence-based care management strategy designed to reduce disparities and improve access to behavioral health services, and test four components related to its implementation. Methods/design The MOST framework contains three distinct phases: Preparation, Optimization, and Evaluation. The Preparation phase for this study occurred previously. The current study consists of the Optimization and Evaluation phases. Children aged 3 to 12 years who are detected as "at-risk" for behavioral health disorders (n = 304) at a large, urban, federally qualified community health center will be referred to a Family Partner—a bicultural, bilingual member of the community with training in behavioral health and systems navigation—who will perform FN. Families will then be randomized to one of 16 possible combinations of FN delivery strategies (2 × 2 × 2 × 2 factorial design). The primary outcome measure will be achievement of a family-centered goal related to behavioral health services within 90 days of randomization. Implementation data on the fidelity, acceptability, feasibility, and cost of each strategy will also be collected. Results from the primary and secondary outcomes will be reviewed by our team of stakeholders to optimize FN delivery for implementation and dissemination based on effectiveness, efficiency, and cost. Discussion In this protocol paper, we describe how the MOST framework can be used to improve intervention delivery. These methods will be useful for future studies testing intervention delivery strategies and their impact on implementation. Trial registration ClinicalTrials.gov, NCT03569449. Registered on 26 June 2018.
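The 2 × 2 × 2 × 2 design yields 16 possible combinations of the four FN delivery strategies. A minimal sketch of enumerating these conditions and drawing an assignment is shown below; the component names are placeholders rather than the trial's actual strategies, and the simple random draw is only for illustration, since the companion methods paper above describes the allocation procedure actually selected for the trial.

```python
import itertools
import random

# Enumerate the 16 cells of a 2 x 2 x 2 x 2 factorial and draw one assignment.
# The four component names are placeholders, not the trial's actual FN strategies.

components = ["component_1", "component_2", "component_3", "component_4"]
conditions = list(itertools.product([0, 1], repeat=len(components)))   # 16 tuples
assert len(conditions) == 16

rng = random.Random(304)
assignment = dict(zip(components, rng.choice(conditions)))
print(assignment)   # e.g., {'component_1': 1, 'component_2': 0, ...}
```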

