Center-Specific Modeling Predicts Cancer Trial Accrual More Accurately Than Investigators and Random Effects Modeling at 16 Cancer Centers

2019, pp. 1-12
Author(s): Wendy R. Tate, Ivo Abraham, Lee D. Cranmer

PURPOSE Clinical trials often exceed their anticipated enrollment periods, and study sites often do not meet accrual goals. We previously reported the development and validation of a single-site accrual prediction model. Here, we describe the expansion of this methodology to 16 cancer centers (CCs) and compare an overall model versus site-specific models.

METHODS This retrospective cohort study used data from treatment and supportive care intervention studies permanently closed to accrual between 2009 and 2015 at 16 United States–based CCs. Center and ClinicalTrials.gov data were used to generate both site-specific models and a random effects mixed model (random effect: institution). Accrual predictions were generated from each model and compared with the accrual prediction of the disease team (DT).

RESULTS Sixteen institutions submitted 5,787 eligible trials (range, 93 to 697 trials per institution). Local accrual ranged from 363 to 6,716 participants; 1,053 studies (18%) accrued no participants. Actual average accrual was 8.5 participants (median, four participants). Site-specific models predicted accrual at 99% of actual and correctly predicted whether a study would accrue four or more participants 73% of the time, versus 58% for the DT prediction. Correlation at the category level was 30%; model sensitivity and specificity were 83% and 62%, respectively. The overall model predicted accrual at 93% of actual and correctly predicted accrual of four or more participants 66% of the time, with a correlation at the category level of 28%.

CONCLUSION Both regression models predicted clinical trial accrual at least as accurately as the DT at all but one center. Site-specific models generally performed slightly better than the random effects model. This study confirms the previous finding that this method is an accurate and objective metric that can be easily implemented to improve clinical research resource allocation across multiple centers.
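
The contrast between site-specific models and a random effects (institution-level) model can be illustrated with a minimal partial-pooling sketch. The accrual counts and institution names below are hypothetical, and a simple method-of-moments shrinkage factor stands in for the mixed-model fit used in the study:

```python
# Minimal sketch: site-specific mean accrual vs. a random-effects
# (partial-pooling) estimate that shrinks each institution's mean toward
# the grand mean. Data are hypothetical; a method-of-moments shrinkage
# factor stands in for the study's fitted mixed model.
from statistics import mean, variance

# accrual counts per trial, keyed by (hypothetical) institution
accrual = {
    "CC-A": [0, 2, 5, 9, 4],
    "CC-B": [12, 7, 15, 10],
    "CC-C": [1, 0, 3, 2, 1, 0],
}

site_means = {cc: mean(xs) for cc, xs in accrual.items()}
grand_mean = mean(x for xs in accrual.values() for x in xs)

# pooled within-institution variance and between-institution variance
within = mean(variance(xs) for xs in accrual.values())
between = variance(site_means.values())

def shrunk_mean(cc):
    """Partial-pooling estimate: weight the site's own mean by the share of
    between-site variance in the total variance at that site's sample size."""
    n = len(accrual[cc])
    w = between / (between + within / n)  # weight on the site's own mean
    return w * site_means[cc] + (1 - w) * grand_mean

for cc in accrual:
    print(cc, round(site_means[cc], 2), round(shrunk_mean(cc), 2))
```

Each shrunk estimate lies between the raw site mean and the grand mean; sites with fewer trials are pulled more strongly toward the grand mean, which is the basic mechanism a random effects model uses to borrow strength across centers.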

2012, Vol 69 (11), pp. 1881-1893
Author(s): Verena M. Trenkel, Mark V. Bravington, Pascal Lorance

Catch curves are widely used to estimate total mortality for exploited marine populations. The usual population dynamics model assumes constant recruitment across years and constant total mortality. We extend this to include annual recruitment and annual total mortality. Recruitment is treated as an uncorrelated random effect, while total mortality is modelled by a random walk. Data requirements are minimal, as only proportions-at-age and total catches are needed. We obtain the effective sample size for aggregated proportion-at-age data by fitting Dirichlet-multinomial distributions to the raw sampling data. Parameter estimation is carried out by approximate likelihood. We use simulations to study parameter estimability and estimation bias for four model versions, including models treating mortality as fixed effects and misspecified models. All model versions were, in general, estimable, though they failed for certain parameter values or replicate runs. Relative estimation bias of final-year total mortalities and depletion rates was lower for the proposed random effects model than for the fixed effects version for total mortality. The model is demonstrated for blue ling (Molva dypterygia) to the west of the British Isles for the period 1988 to 2011.
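
The baseline model the paper generalizes is the classical constant-Z catch curve: under constant recruitment and constant total mortality Z, numbers-at-age decline as N_a = N_0·exp(−Z·a), so a regression of log proportion-at-age on age has slope −Z. A minimal sketch on simulated (noise-free) data, not the blue ling case study:

```python
# Classical constant-Z catch curve: regress log(proportion-at-age) on age;
# the negative of the slope estimates total mortality Z. Simulated data.
import math

true_z = 0.4
ages = list(range(3, 13))           # fully selected ages only
n = [math.exp(-true_z * a) for a in ages]
total = sum(n)
props = [x / total for x in n]      # observed proportions-at-age

# ordinary least-squares slope of log(p_a) on age
y = [math.log(p) for p in props]
abar, ybar = sum(ages) / len(ages), sum(y) / len(y)
slope = sum((a - abar) * (yi - ybar) for a, yi in zip(ages, y)) / \
        sum((a - abar) ** 2 for a in ages)
z_hat = -slope
print(f"estimated Z = {z_hat:.3f}")  # recovers 0.400 on noise-free data
```

The paper's contribution is to relax exactly the two assumptions this sketch relies on, letting recruitment vary as a random effect and Z follow a random walk across years.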


QJM, 2021
Author(s): Marco Zuin, Gianluca Rigatelli, Claudio Bilato, Carlo Cervellati, Giovanni Zuliani, ...

Abstract Objective The prevalence and prognostic implications of pre-existing dyslipidaemia in patients infected by SARS-CoV-2 remain unclear. We aimed to perform a systematic review and meta-analysis of the prevalence of pre-existing dyslipidaemia and the associated mortality risk in COVID-19 patients.

Methods Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines were followed in abstracting data and assessing validity. We searched MEDLINE and Scopus to locate all articles published up to January 31, 2021, reporting data on dyslipidaemia among COVID-19 survivors and non-survivors. The pooled prevalence of dyslipidaemia was calculated using a random effects model with its related 95% confidence interval (CI), while the mortality risk was estimated using Mantel-Haenszel random effects models with odds ratios (ORs) and related 95% CIs. Statistical heterogeneity was measured using the Higgins I2 statistic.

Results Eighteen studies, enrolling 74,132 COVID-19 patients (mean age 70.6 years), met the inclusion criteria and were included in the final analysis. The pooled prevalence of dyslipidaemia was 17.5% (95% CI: 12.3-24.3%, p < 0.0001), with high heterogeneity (I2 = 98.7%). Pre-existing dyslipidaemia was significantly associated with a higher risk of short-term death (OR: 1.69, 95% CI: 1.19-2.41, p = 0.003), with high heterogeneity (I2 = 88.7%). After correcting for publication bias with the trim-and-fill method, the corrected random effects OR was 1.61 (95% CI: 1.13-2.28, p < 0.0001; one study trimmed).

Conclusions Dyslipidaemia is a major comorbidity, present in about 18% of COVID-19 patients, and is associated with an approximately 60% increase in short-term mortality risk.
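
The random effects pooling used in this kind of meta-analysis can be sketched with the standard DerSimonian-Laird estimator on log odds ratios, together with the Higgins I2 statistic the abstract reports. The ORs and standard errors below are illustrative, not the paper's data:

```python
# DerSimonian-Laird random-effects meta-analysis on log odds ratios,
# with Cochran's Q and Higgins' I^2. Study-level ORs/SEs are illustrative.
import math

log_or = [math.log(x) for x in (1.4, 2.1, 1.1, 1.9, 1.5)]
se = [0.10, 0.15, 0.12, 0.18, 0.14]

w = [1 / s**2 for s in se]                       # fixed-effect (inverse-variance) weights
fixed = sum(wi * y for wi, y in zip(w, log_or)) / sum(w)
q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_or))  # Cochran's Q
df = len(log_or) - 1
c = sum(w) - sum(wi**2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)                    # between-study variance
i2 = max(0.0, (q - df) / q) * 100                # Higgins' I^2 (%)

w_re = [1 / (s**2 + tau2) for s in se]           # random-effects weights
pooled = sum(wi * y for wi, y in zip(w_re, log_or)) / sum(w_re)
se_re = math.sqrt(1 / sum(w_re))
lo, hi = math.exp(pooled - 1.96 * se_re), math.exp(pooled + 1.96 * se_re)
print(f"pooled OR = {math.exp(pooled):.2f} (95% CI {lo:.2f}-{hi:.2f}), I2 = {i2:.0f}%")
```

Note how a positive tau2 widens the confidence interval relative to the fixed-effect analysis, which is why the abstract reports random effects rather than fixed-effect ORs under high heterogeneity.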


2020, pp. 1-37
Author(s): Tal Yarkoni

Abstract Most theories and hypotheses in psychology are verbal in nature, yet their evaluation overwhelmingly relies on inferential statistical procedures. The validity of the move from qualitative to quantitative analysis depends on the verbal and statistical expressions of a hypothesis being closely aligned—that is, that the two must refer to roughly the same set of hypothetical observations. Here I argue that many applications of statistical inference in psychology fail to meet this basic condition. Focusing on the most widely used class of model in psychology—the linear mixed model—I explore the consequences of failing to statistically operationalize verbal hypotheses in a way that respects researchers' actual generalization intentions. I demonstrate that whereas the "random effect" formalism is used pervasively in psychology to model inter-subject variability, few researchers accord the same treatment to other variables they clearly intend to generalize over (e.g., stimuli, tasks, or research sites). The under-specification of random effects imposes far stronger constraints on the generalizability of results than most researchers appreciate. Ignoring these constraints can dramatically inflate false positive rates, and often leads researchers to draw sweeping verbal generalizations that lack a meaningful connection to the statistical quantities they are putatively based on. I argue that failure to take the alignment between verbal and statistical expressions seriously lies at the heart of many of psychology's ongoing problems (e.g., the replication crisis), and conclude with a discussion of several potential avenues for improvement.


2020, pp. 1471082X2096691
Author(s): Amani Almohaimeed, Jochen Einbeck

Random effects models have been a mainstream statistical technique for several decades, and the same can be said for response transformation models such as the Box–Cox transformation. The latter aims at ensuring that the assumptions of normality and of homoscedasticity of the response distribution are fulfilled, which are essential conditions for inference based on a linear model or a linear mixed model. However, methodology for response transformation with simultaneous inclusion of random effects has been developed and implemented only scarcely, and has so far been restricted to Gaussian random effects. We develop such methodology without requiring parametric assumptions on the distribution of the random effects. This is achieved by extending the ‘nonparametric maximum likelihood’ approach towards a ‘nonparametric profile maximum likelihood’ technique, allowing one to deal with overdispersion as well as two-level data scenarios.
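
The Box–Cox step the paper builds on can be sketched by choosing the transformation parameter lambda via its profile log-likelihood for a plain fixed-effects-only normal model; the paper's contribution is doing this jointly with nonparametrically distributed random effects. The response values below are illustrative:

```python
# Box-Cox transformation with lambda chosen by grid search over the
# profile log-likelihood of a simple normal model (no random effects here;
# the paper's method adds those). Response data are illustrative.
import math

y = [1.2, 0.8, 2.5, 3.1, 0.5, 1.9, 4.2, 2.2, 1.1, 3.6]  # positive responses

def boxcox(x, lam):
    """Box-Cox transform; the lam -> 0 limit is log(x)."""
    return (x**lam - 1) / lam if abs(lam) > 1e-12 else math.log(x)

def profile_loglik(lam):
    """Profile log-likelihood of lambda for z = boxcox(y, lam) ~ N(mu, s2),
    including the Jacobian term (lam - 1) * sum(log y)."""
    z = [boxcox(v, lam) for v in y]
    n = len(z)
    mu = sum(z) / n
    s2 = sum((v - mu) ** 2 for v in z) / n
    return -n / 2 * math.log(s2) + (lam - 1) * sum(math.log(v) for v in y)

# crude grid search over lambda in [-2, 2]
grid = [i / 100 for i in range(-200, 201)]
lam_hat = max(grid, key=profile_loglik)
print(f"lambda maximizing the profile likelihood: {lam_hat:.2f}")
```

In the paper's setting this maximization is carried out simultaneously with the nonparametric estimation of the random effects distribution, rather than for a fixed-effects model as sketched here.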


Author(s): Madeleine Moyle, John F. Boyle

Abstract An existing steady-state model of lake phosphorus (P) budgets has been adapted to allow reconstruction of long-term average historic lake water total phosphorus (TP) concentrations from lake sediment records of P burial. This model can be applied without site-specific parameterisation, thus potentially having universal application. In principle, it is applicable at any site where there is both a sediment P burial record and knowledge of the current water budget, although we advise caution in applying it to problematic sediment records. Tested at six published case study sites, modelled lake water TP concentrations agree well with water-quality monitoring data, and limited testing finds good agreement with wholly independent diatom-inferred lake water TP. Our findings, together with a review of the literature, suggest that well-preserved lake sediments can usefully record a long-term average P burial rate from which the long-term mean lake water TP can be reliably estimated. These lake water TP reconstructions can provide meaningful site-specific reference values to support decision making in lake eutrophication management, including establishing targets for lake restoration.
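
Steady-state lake P budgets of the family the paper adapts can be illustrated with a generic Vollenweider-type mass balance; the parameter values are illustrative and this is not the authors' exact formulation:

```python
# Generic Vollenweider-type steady-state lake phosphorus mass balance
# (illustrative only; not the authors' exact model). At steady state:
#   TP_lake = TP_inflow / (1 + sigma * tau)
# where sigma is the first-order P loss (sedimentation) rate and tau the
# water residence time; the retained fraction is what ends up buried in
# the sediment record.
tp_in = 60.0      # inflow TP concentration (ug/L), illustrative
sigma = 0.8       # sedimentation rate (1/yr), illustrative
tau = 2.5         # water residence time (yr), illustrative

tp_lake = tp_in / (1 + sigma * tau)
retention = 1 - tp_lake / tp_in      # fraction of the P load retained (buried)
print(f"steady-state lake TP = {tp_lake:.1f} ug/L, retention = {retention:.0%}")
```

Running the balance in reverse, from a sediment-derived burial (retention) record plus the current water budget back to lake water TP, is the inference the paper performs.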


2018, Vol 147
Author(s): A. Aswi, S. M. Cramb, P. Moraga, K. Mengersen

Abstract Dengue fever (DF) is one of the world's most disabling mosquito-borne diseases, with a variety of approaches available to model its spatial and temporal dynamics. This paper aims to identify and compare the different spatial and spatio-temporal Bayesian modelling methods that have been applied to DF and to examine influential covariates that have been reportedly associated with the risk of DF. A systematic search was performed in December 2017, using Web of Science, Scopus, ScienceDirect, PubMed, ProQuest and Medline (via Ebscohost) electronic databases. The search was restricted to refereed journal articles published in English from January 2000 to November 2017. Thirty-one articles met the inclusion criteria. Using a modified quality assessment tool, the median quality score across studies was 14/16. The most popular Bayesian statistical approach to dengue modelling was a generalised linear mixed model with spatial random effects described by a conditional autoregressive prior. A limited number of studies included spatio-temporal random effects. Temperature and precipitation were often shown to influence the risk of dengue. Developing spatio-temporal random effects models, considering other priors, using datasets that cover an extended time period, and investigating other covariates would help to better understand and control DF transmission.


2008, Vol 139 (2_suppl), pp. P33-P34
Author(s): Jeremy T. Reed, Shankar K. Sridhara, Scott E. Brietzke

Objective Review and assess the current published literature regarding clinical outcomes of suction electrocautery adenoidectomy (ECA) in pediatric patients.

Methods The MEDLINE database was systematically reviewed for articles reporting on the use of ECA. Inclusion criteria included English language, sample size greater than 5, and presentation of extractable data regarding pediatric outcomes with ECA. Random effects modeling was used to estimate summary outcomes.

Results Nine studies met the inclusion criteria: 2 level 1b studies, 2 level 3b studies, and 5 level 4 studies. The mean sample size was 276 patients, with a grand mean age of 6.0 years. Random effects modeling of summary estimates of intraoperative hemorrhage (4.1 cc vs. 24.0 cc; 95% CI of difference = 16.5–23.1; p < 0.001) and operative time (10.0 minutes vs. 11.9 minutes; 95% CI of difference = 0.82–2.90; p < 0.001) favored ECA over traditional curette adenoidectomy. Subjective success was reported in 95.0% (95% CI = 92.7–97.3%, p < 0.001) of ECA patients, with a grand mean of 5.8 months of postoperative follow-up and a grand mean lost-to-follow-up rate of 23.2%. Adenoid regrowth was evaluated objectively (endoscopy or X-ray) in only 116 of 2,132 (5.4%) total patients, with an observed regrowth rate of 2.8% (95% CI = 0–5.5%, p = 0.052) over 846 total person-years of follow-up.

Conclusions The preponderance of evidence favors ECA over curette adenoidectomy in terms of decreased intraoperative hemorrhage and decreased operative time. Long-term outcomes data for ECA are scarce, despite the fact that the procedure is likely performed hundreds of times each day, but they suggest a low regrowth rate.


Biometrics, 2016, Vol 72 (4), pp. 1369-1377
Author(s): Cornelis J. Potgieter, Rubin Wei, Victor Kipnis, Laurence S. Freedman, Raymond J. Carroll


2018, Vol 2 (1)
Author(s): Nur Indah Lestari

ABSTRACT: This study estimates the impact of simplifying the specific excise rate structure, relative to regular excise rate increases, on cigarette consumption through retail prices. Using 2015 data and applying a random effects model to unbalanced panel data on machine-rolled (Sigaret Kretek Mesin) and hand-rolled (Sigaret Kretek Tangan) kretek cigarettes, this study compares the effect on cigarette consumption of price increases due to specific excise rate structure simplification with that of regular excise rate increases. The results indicate that simplification of the specific excise rate structure raises cigarette prices less than regular excise rate increases do. However, the price increases due to structure simplification are more effective in reducing cigarette consumption than those due to regular excise rate increases. In addition, the average price of the Sigaret Kretek Mesin type is lower, and its average consumption much higher, than that of the Sigaret Kretek Tangan type. Overall, these results suggest that the policy of simplifying the specific excise rate structure should be continued in order to reduce cigarette consumption.
Keywords: specific excise rate structure simplification, cigarette consumption, random effects model


Stats, 2018, Vol 1 (1), pp. 48-76
Author(s): Freddy Hernández, Viviana Giampaoli

Mixed models are useful tools for analyzing clustered and longitudinal data. These models assume that random effects are normally distributed, but this assumption may be unrealistic or restrictive for the data at hand. Several papers have been published quantifying the impact of misspecifying the shape of the random effects in mixed models. Notably, these studies concentrated primarily on models with response variables that have normal, logistic and Poisson distributions, and the results were not conclusive. We therefore investigated the misspecification of the shape of the random effects in a Weibull regression mixed model with random intercepts in the two parameters of the Weibull distribution. Through an extensive simulation study considering six random effect distributions and assuming normality for the random effects in the estimation procedure, we found an impact of misspecification on the estimates of the fixed effects associated with the second parameter σ of the Weibull distribution. The variance components of the model were also affected by the misspecification.
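
The kind of simulation described can be sketched by generating Weibull responses whose scale depends on a cluster-level random intercept drawn from a non-normal (here, centred gamma) distribution; an analyst assuming normal random effects would then fit a misspecified model. All settings below are illustrative, not the paper's design:

```python
# Sketch of one simulation replicate: clustered Weibull responses with a
# skewed (centred gamma) random intercept on the log scale parameter.
# Settings are illustrative, not the paper's actual design.
import math
import random

random.seed(1)
n_clusters, n_per = 50, 20
shape = 1.5                        # Weibull shape parameter

data = []
for c in range(n_clusters):
    b = random.gammavariate(2.0, 0.5) - 1.0   # skewed random intercept, mean 0
    scale = math.exp(0.5 + b)                 # log-link for the Weibull scale
    for _ in range(n_per):
        # random.weibullvariate(alpha, beta): alpha = scale, beta = shape
        data.append((c, random.weibullvariate(scale, shape)))

overall_mean = sum(t for _, t in data) / len(data)
print(f"{len(data)} simulated times, mean = {overall_mean:.2f}")
```

Repeating such replicates under each of several random effect distributions, fitting the model under a normality assumption, and comparing estimates with the generating values is the basic design of the simulation study the abstract summarizes.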

