Testing for a Sweet Spot in Randomized Trials

2021 ◽  
pp. 0272989X2110255
Author(s):  
Donald A. Redelmeier ◽  
Deva Thiruchelvam ◽  
Robert J. Tibshirani

Introduction Randomized trials recruit diverse patients, including some individuals who may be unresponsive to the treatment. Here we follow up on prior conceptual advances and introduce a specific method that does not rely on stratification analysis and that tests whether patients in the intermediate range of disease severity experience more relative benefit than patients at the extremes of disease severity (sweet spot). Methods We contrast linear models to sigmoidal models when describing associations between disease severity and accumulating treatment benefit. The Gompertz curve is highlighted as a specific sigmoidal curve along with the Akaike information criterion (AIC) as a measure of goodness of fit. This approach is then applied to a matched analysis of a published landmark randomized trial evaluating whether implantable defibrillators reduce overall mortality in cardiac patients (n = 2,521). Results The linear model suggested a significant survival advantage across the spectrum of increasing disease severity (β = 0.0847, P < 0.001, AIC = 2,491). Similarly, the sigmoidal model suggested a significant survival advantage across the spectrum of disease severity (α = 93, β = 4.939, γ = 0.00316, P < 0.001 for all, AIC = 1,660). The discrepancy between the 2 models indicated worse goodness of fit with a linear model compared to a sigmoidal model (AIC: 2,491 v. 1,660, P < 0.001), thereby suggesting a sweet spot in the midrange of disease severity. Model cross-validation using computational statistics also confirmed the superior goodness of fit of the sigmoidal curve with a concentration of survival benefits for patients in the midrange of disease severity. Conclusion Systematic methods are available beyond simple stratification for identifying a sweet spot according to disease severity. The approach can assess whether some patients experience more relative benefit than other patients in a randomized trial.
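The model comparison the abstract describes can be sketched in a few lines: fit a linear and a Gompertz curve to the same benefit-versus-severity data and compare AICs. The data, parameter names, and starting values below are invented for illustration and are not the trial's.

```python
import numpy as np
from scipy.optimize import curve_fit

# Simulated "accumulating benefit" with a sigmoidal shape plus noise
# (all numbers here are assumptions for illustration)
rng = np.random.default_rng(0)
severity = np.linspace(0, 10, 200)
true = 90 * np.exp(-5 * np.exp(-0.8 * severity))
benefit = true + rng.normal(0, 3, severity.size)

def gompertz(x, alpha, beta, gamma):
    # alpha: asymptote; beta, gamma: shape and rate
    return alpha * np.exp(-beta * np.exp(-gamma * x))

def aic(y, yhat, k):
    # Gaussian AIC up to an additive constant: n*log(RSS/n) + 2k
    n = y.size
    rss = np.sum((y - yhat) ** 2)
    return n * np.log(rss / n) + 2 * k

# Linear fit (2 parameters)
b = np.polyfit(severity, benefit, 1)
aic_lin = aic(benefit, np.polyval(b, severity), 2)

# Gompertz fit (3 parameters)
popt, _ = curve_fit(gompertz, severity, benefit, p0=[90, 5, 1], maxfev=10000)
aic_sig = aic(benefit, gompertz(severity, *popt), 3)

print(aic_lin, aic_sig)  # the sigmoidal model should score the lower AIC here
```

A lower AIC for the sigmoidal model, as in the trial reanalysis, indicates better goodness of fit after penalizing the extra parameter.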

1986 ◽  
Vol 25 (04) ◽  
pp. 237-241 ◽  
Author(s):  
D. Commenges

Summary In a randomized clinical trial, the design may or may not be stratified, and the analysis may or may not be adjusted. The cross-classification of these alternatives leads to four different strategies. These strategies, plus another one, are evaluated within the framework of a linear model. A discussion of the conditional and unconditional points of view throws some light on the problem of bias in a randomized trial.


2021 ◽  
Vol 12 (3) ◽  
pp. 102
Author(s):  
Jaouad Khalfi ◽  
Najib Boumaaz ◽  
Abdallah Soulmani ◽  
El Mehdi Laadissi

The Box–Jenkins model is a polynomial model that uses transfer functions to express relationships between input, output, and noise for a given system. In this article, we present a Box–Jenkins linear model for a lithium-ion battery cell for use in electric vehicles. The model parameter identifications are based on automotive drive-cycle measurements. The proposed model prediction performance is evaluated using the goodness-of-fit criteria and the mean squared error between the Box–Jenkins model and the measured battery cell output. A simulation confirmed that the proposed Box–Jenkins model could adequately capture the battery cell dynamics for different automotive drive cycles and reasonably predict the actual battery cell output. The goodness-of-fit value shows that the Box–Jenkins model matches the battery cell data by 86.85% in the identification phase, and 90.83% in the validation phase for the LA-92 driving cycle. This work demonstrates the potential of using a simple and linear model to predict the battery cell behavior based on a complex identification dataset that represents the actual use of the battery cell in an electric vehicle.
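The fit percentages quoted above are consistent with the normalized-RMSE goodness-of-fit criterion common in system identification; that this is the exact criterion used is an assumption, and the data below are toy values.

```python
import numpy as np

def fit_percent(y, yhat):
    # Normalized-RMSE goodness of fit:
    # fit% = 100 * (1 - ||y - yhat|| / ||y - mean(y)||)
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    return 100.0 * (1.0 - np.linalg.norm(y - yhat)
                    / np.linalg.norm(y - np.mean(y)))

# Toy battery-voltage trace and a close prediction (hypothetical values)
y = np.array([3.70, 3.68, 3.65, 3.60, 3.58])
yhat = np.array([3.69, 3.67, 3.66, 3.61, 3.57])
print(fit_percent(y, yhat))
```

A perfect prediction scores 100%; a predictor no better than the mean of the data scores 0%.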


Author(s):  
Jens Wermers ◽  
Benedikt Schliemann ◽  
Michael J. Raschke ◽  
Philipp A. Michel ◽  
Lukas F. Heilmann ◽  
...  

Abstract Purpose Surgical treatment of shoulder instability caused by anterior glenoid bone loss is based on a critical threshold of the defect size. Recent studies indicate that the glenoid concavity is essential for glenohumeral stability. However, biomechanical proof of this principle is lacking. The aim of this study was to evaluate whether glenoid concavity allows a more precise assessment of glenohumeral stability than the defect size alone. Methods The stability ratio (SR) is a biomechanical estimate of glenohumeral stability. It is defined as the maximum dislocating force the joint can resist relative to a medial compression force. This ratio was determined for 17 human cadaveric glenoids in a robotic test setup depending on osteochondral concavity and anterior defect size. Bony defects were created gradually, and a 3D measuring arm was used for morphometric measurements. The influence of defect size and concavity on the SR was examined using linear models. In addition, the morphometry-based bony shoulder stability ratio (BSSR) was evaluated to prove its suitability for estimating glenohumeral stability independent of defect size. Results Glenoid concavity is a significant predictor for the SR, while the defect size provides little additional informative value. The linear model featured a high goodness of fit with a determination coefficient of R2 = 0.98, indicating that 98% of the SR is predictable by concavity and defect size. The low mean squared error (MSE) of 4.2% proved a precise estimation of the SR. Defect size as an exclusive predictor in the linear model reduced R2 to 0.9 and increased the MSE to 25.7%. Furthermore, the loss of SR with increasing defect size was shown to be significantly dependent on the initial concavity. The BSSR as a single predictor for glenohumeral stability led to the highest precision with MSE = 3.4%. Conclusion Glenoid concavity is a crucial factor for the SR.
Independent of the defect size, the computable BSSR is a precise biomechanical estimate of the measured SR. The inclusion of glenoid concavity has the potential to influence clinical decision-making for an improved and personalised treatment of glenohumeral instability with anterior glenoid bone loss.
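The gain in explained variance from adding concavity to a defect-size-only model can be illustrated with simulated data. The generating model below is hypothetical, chosen only to mimic the reported pattern (concavity dominant, defect size weak).

```python
import numpy as np

# Simulated stability-ratio data driven mainly by concavity (toy model)
rng = np.random.default_rng(1)
n = 200
concavity = rng.uniform(1, 5, n)     # arbitrary units
defect = rng.uniform(0, 30, n)       # % of glenoid width
sr = 10 * concavity - 0.1 * defect + rng.normal(0, 1, n)

def r_squared(cols, y):
    # Ordinary least squares with intercept; coefficient of determination
    X = np.column_stack([np.ones(len(y))] + list(cols))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_full = r_squared([concavity, defect], sr)
r2_defect = r_squared([defect], sr)
print(r2_full, r2_defect)  # the full model explains far more variance
```

Dropping the dominant predictor collapses R², the same qualitative behavior as the reported drop from 0.98 to 0.9 when concavity was excluded.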


2016 ◽  
Vol 41 (4) ◽  
pp. 357-388 ◽  
Author(s):  
Elizabeth A. Stuart ◽  
Anna Rhodes

Background: Given increasing concerns about the relevance of research to policy and practice, there is growing interest in assessing and enhancing the external validity of randomized trials: determining how useful a given randomized trial is for informing a policy question for a specific target population. Objectives: This article highlights recent advances in assessing and enhancing external validity, with a focus on the data needed to make ex post statistical adjustments to enhance the applicability of experimental findings to populations potentially different from their study sample. Research design: We use a case study to illustrate how to generalize treatment effect estimates from a randomized trial sample to a target population, in particular comparing the sample of children in a randomized trial of a supplemental program for Head Start centers (the Research-Based, Developmentally Informed study) to the national population of children eligible for Head Start, as represented in the Head Start Impact Study. Results: For this case study, common data elements between the trial sample and population were limited, making reliable generalization from the trial sample to the population challenging. Conclusions: To answer important questions about external validity, more publicly available data are needed. In addition, future studies should make an effort to collect measures similar to those in other data sets. Measure comparability between population data sets and randomized trials that use samples of convenience will greatly enhance the range of research and policy relevant questions that can be answered.
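One simple ex post adjustment of the kind discussed here is post-stratification: reweight stratum-specific treatment effects from the trial so covariate proportions match the target population. The strata, shares, and effects below are made-up numbers for illustration only.

```python
# Hypothetical strata with their shares in the trial vs. the target
# population, and stratum-specific treatment effects estimated in the trial
strata = ["low_income", "high_income"]
trial_share = {"low_income": 0.7, "high_income": 0.3}
pop_share = {"low_income": 0.4, "high_income": 0.6}
effect = {"low_income": 5.0, "high_income": 1.0}

# Trial estimate weights strata as sampled; the generalized estimate
# reweights them to the population composition
naive = sum(trial_share[s] * effect[s] for s in strata)
generalized = sum(pop_share[s] * effect[s] for s in strata)
print(naive, generalized)  # 3.8 vs. 2.6: same trial, different target
```

The gap between the two estimates is exactly the external-validity concern: it can only be computed when the stratifying covariates are measured comparably in both the trial and the population data set.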


2001 ◽  
Vol 19 (2) ◽  
pp. 305-313 ◽  
Author(s):  
Susan G. Urba ◽  
Mark B. Orringer ◽  
Andrew Turrisi ◽  
Mark Iannettoni ◽  
Arlene Forastiere ◽  
...  

PURPOSE: A pilot study of 43 patients with potentially resectable esophageal carcinoma treated with an intensive regimen of preoperative chemoradiation with cisplatin, fluorouracil, and vinblastine before surgery showed a median survival of 29 months in comparison with the 12-month median survival of 100 historical controls treated with surgery alone at the same institution. We designed a randomized trial to compare survival for patients treated with this preoperative chemoradiation regimen versus surgery alone. MATERIALS AND METHODS: One hundred patients with esophageal carcinoma were randomized to receive either surgery alone (arm I) or preoperative chemoradiation (arm II) with cisplatin 20 mg/m²/d on days 1 through 5 and 17 through 21, fluorouracil 300 mg/m²/d on days 1 through 21, and vinblastine 1 mg/m²/d on days 1 through 4 and 17 through 20. Radiotherapy consisted of 1.5-Gy fractions twice daily, Monday through Friday over 21 days, to a total dose of 45 Gy. Transhiatal esophagectomy with a cervical esophagogastric anastomosis was performed on approximately day 42. RESULTS: At median follow-up of 8.2 years, there is no significant difference in survival between the treatment arms. Median survival is 17.6 months in arm I and 16.9 months in arm II. Survival at 3 years was 16% in arm I and 30% in arm II (P = .15). This study was statistically powered to detect a relatively large increase in median survival from 1 year to 2.2 years, with at least 80% power. CONCLUSION: This randomized trial of preoperative chemoradiation versus surgery alone for patients with potentially resectable esophageal carcinoma did not demonstrate a statistically significant survival difference.


2017 ◽  
Vol 26 (4) ◽  
pp. 1572-1589 ◽  
Author(s):  
Timothy NeCamp ◽  
Amy Kilbourne ◽  
Daniel Almirall

Cluster-level dynamic treatment regimens can be used to guide sequential treatment decision-making at the cluster level in order to improve outcomes at the individual or patient-level. In a cluster-level dynamic treatment regimen, the treatment is potentially adapted and re-adapted over time based on changes in the cluster that could be impacted by prior intervention, including aggregate measures of the individuals or patients that compose it. Cluster-randomized sequential multiple assignment randomized trials can be used to answer multiple open questions preventing scientists from developing high-quality cluster-level dynamic treatment regimens. In a cluster-randomized sequential multiple assignment randomized trial, sequential randomizations occur at the cluster level and outcomes are observed at the individual level. This manuscript makes two contributions to the design and analysis of cluster-randomized sequential multiple assignment randomized trials. First, a weighted least squares regression approach is proposed for comparing the mean of a patient-level outcome between the cluster-level dynamic treatment regimens embedded in a sequential multiple assignment randomized trial. The regression approach facilitates the use of baseline covariates which is often critical in the analysis of cluster-level trials. Second, sample size calculators are derived for two common cluster-randomized sequential multiple assignment randomized trial designs for use when the primary aim is a between-dynamic treatment regimen comparison of the mean of a continuous patient-level outcome. The methods are motivated by the Adaptive Implementation of Effective Programs Trial which is, to our knowledge, the first-ever cluster-randomized sequential multiple assignment randomized trial in psychiatry.
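The weighted-least-squares estimator at the heart of the proposed regression approach can be sketched generically. The data are toy values, and the weights here are arbitrary stand-ins for the design weights a clustered SMART analysis would construct.

```python
import numpy as np

# Toy data: outcome depends on a treatment indicator and a baseline covariate
rng = np.random.default_rng(7)
n = 100
x = rng.normal(size=n)             # baseline covariate
z = rng.integers(0, 2, n)          # treatment indicator (0/1)
y = 2.0 + 1.5 * z + 0.8 * x + rng.normal(0, 1, n)
w = rng.uniform(0.5, 2.0, n)       # known analysis weights (placeholder)

# Weighted least squares: beta = (X'WX)^{-1} X'Wy
X = np.column_stack([np.ones(n), z, x])
W = np.diag(w)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
print(beta)  # estimated intercept, treatment effect, covariate slope
```

Including the baseline covariate `x` in `X` is what the abstract emphasizes: covariate adjustment typically tightens comparisons in cluster-level trials, where the number of randomized units is small.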


Author(s):  
Yanhui Wang ◽  
Yuewen Jiang ◽  
Duoduo Yin ◽  
Chenxia Liang ◽  
Fuzhou Duan

Abstract The examination of poverty-causing factors and their mechanisms of action in poverty-stricken villages is an important topic associated with poverty reduction issues. Although the individual or background effects of multilevel influencing factors have been considered in some previous studies, the spatial effects of these factors have rarely been addressed. By considering nested geographic and administrative features and integrating the detection of individual, background, and spatial effects, a bilevel hierarchical spatial linear model (HSLM) is established in this study to identify the multilevel significant factors that cause poverty in poor villages, as well as the mechanisms through which these factors contribute to poverty at both the village and county levels. An experimental test in the region of the Wuling Mountains in central China revealed the following findings. (1) There were significant background and spatial effects in the study area. Moreover, 48.28% of the overall difference in poverty incidence in poor villages resulted from individual effects at the village level, and 51.72% resulted from background effects at the county level. (2) Poverty-causing factors were observed at different levels, and these factors featured different action mechanisms. Village-level factors accounted for 14.29% of the overall difference in poverty incidence, and there were five significant village-level factors. (3) The hierarchical spatial regression model was found to be superior to the hierarchical linear model in terms of goodness of fit. This study offers technical support and policy guidance for village-level regional development.
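The quoted split between village-level and county-level shares of the variation follows from the variance components of a two-level model (an intraclass correlation). The component values below are assumptions, chosen only to reproduce a split like the one reported.

```python
# Variance components of a hypothetical two-level hierarchical model
var_village = 0.28   # residual variance at the village (individual) level
var_county = 0.30    # random-intercept variance at the county level

# Share of total variation at each level (the county share is the ICC)
total = var_village + var_county
icc_county = var_county / total
print(round(icc_county, 4), round(1 - icc_county, 4))
```

With these components the county-level ("background") share is about 51.72% and the village-level ("individual") share about 48.28%, mirroring the structure of the reported decomposition.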


OENO One ◽  
2017 ◽  
Vol 51 (4) ◽  
pp. 401-407 ◽  
Author(s):  
Daniel Molitor ◽  
Lucien Hoffmann ◽  
Marco Beyer

Aims: The present analyses aimed to evaluate the performance of two models for estimating the overall effect of combining two or more measures (leaf removal, cluster division, late shoot topping, botryticide application, bioregulator application) for controlling grape bunch rot based on the efficacy of the individual measures. Methods and results: Field trials with the white Vitis vinifera cultivars Pinot gris and Riesling on the efficacy of three bunch rot control measures applied either alone or in combination were analyzed. Bunch rot disease severities prior to harvest were assessed and efficacies were calculated for each treatment. Observed efficacies of single measures were used to estimate the overall efficacies of all possible measure combinations. Calculated efficacies matched observed efficacies more accurately when assuming multiplicative interaction among the individual measures (R2 = 0.8574, p < 0.0001; average absolute deviation: 7.9%) than when assuming additive effects (R2 = 0.8280; average absolute deviation: 14.7%). Conclusions: The multiplicative approach assumes that each additional measure affects (for efficient measures: reduces) the disease severity level remaining after the other treatments, rather than the disease severity level in the untreated control. Significance and impact of the study: The high goodness of fit as well as the low deviations observed between the estimated and the observed efficacies suggest that the multiplicative approach is appropriate for estimating the efficacy of combined viticultural measures in a complex practical bunch rot control strategy assembled from different modules.
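The two combination rules being compared can be stated directly; the efficacy values below are hypothetical.

```python
def combined_multiplicative(efficacies):
    # Each measure reduces the disease severity remaining after the others:
    # E = 1 - prod(1 - e_i)
    remaining = 1.0
    for e in efficacies:
        remaining *= (1.0 - e)
    return 1.0 - remaining

def combined_additive(efficacies):
    # Effects simply add, capped at 100%
    return min(sum(efficacies), 1.0)

# Hypothetical efficacies for three measures (fractions, not percentages)
measures = [0.40, 0.30, 0.20]
print(combined_multiplicative(measures))  # 1 - 0.6*0.7*0.8 ≈ 0.664
print(combined_additive(measures))        # ≈ 0.90
```

The multiplicative rule always stays below 100% and predicts diminishing returns from each added measure, which is why it tracked the field observations more closely.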


2017 ◽  
Vol 6 (3) ◽  
pp. 75
Author(s):  
Tiago V. F. Santana ◽  
Edwin M. M. Ortega ◽  
Gauss M. Cordeiro ◽  
Adriano K. Suzuki

A new regression model based on the exponentiated Weibull distribution and the structure of the generalized linear model, called the generalized exponentiated Weibull linear model (GEWLM), is proposed. The GEWLM is composed of three important structural parts: the random component, characterized by the distribution of the response variable; the systematic component, which includes the explanatory variables in the model by means of a linear structure; and a link function, which connects the systematic and random parts of the model. Explicit expressions for the logarithm of the likelihood function, the score vector, and the observed and expected information matrices are presented. The method of maximum likelihood and a Bayesian procedure are adopted for estimating the model parameters. To detect influential observations in the new model, we use diagnostic measures based on local influence and Bayesian case influence diagnostics. We also show that the estimates of the GEWLM are robust to the presence of outliers in the data. Additionally, to check whether the model supports its assumptions, to detect atypical observations, and to verify the goodness of fit of the regression model, we define residuals based on the quantile function and perform a Monte Carlo simulation study to construct confidence bands from the generated envelopes. We apply the new model to a dataset from the insurance area.
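A minimal sketch of the distributional building block (not the GEWLM itself, which adds the linear predictor and link function on top): maximum-likelihood fitting of an exponentiated Weibull with SciPy. The parameter values are arbitrary illustrations.

```python
import numpy as np
from scipy import stats

# Draw a sample from an exponentiated Weibull distribution
# (shape parameters a and c, scale 3; values are assumptions)
rng = np.random.default_rng(42)
data = stats.exponweib.rvs(a=2.0, c=1.5, loc=0, scale=3.0,
                           size=500, random_state=rng)

# Recover the parameters by maximum likelihood, fixing location at 0
a_hat, c_hat, loc_hat, scale_hat = stats.exponweib.fit(data, floc=0)
print(a_hat, c_hat, scale_hat)
```

In the regression setting the abstract describes, one of these parameters is tied to the covariates through a linear predictor and link function rather than fitted as a single constant.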

