treatment effect estimate: Recently Published Documents

Total documents: 13 (five years: 2)
H-index: 4 (five years: 0)

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Peter May ◽  
Charles Normand ◽  
Danielle Noreika ◽  
Nevena Skoro ◽  
J. Brian Cassel

Abstract. Background: Economic research on hospital palliative care faces major challenges. Observational studies using routine data encounter difficulties because treatment timing is not under investigator control and unobserved patient complexity is endemic. An individual’s predicted length of stay (LOS) at admission offers potential advantages in this context.
Methods: We conducted a retrospective cohort study of adults admitted to a large cancer center in the United States between 2009 and 2015. We defined a derivation sample to estimate predicted LOS using baseline factors (N = 16,425) and an analytic sample for our primary analyses (N = 2674) based on diagnosis of a terminal illness and high risk of hospital mortality. We modelled our treatment variable according to the timing of the first palliative care interaction as a function of predicted LOS, and we employed predicted LOS as an additional covariate in regression as a proxy for complexity, alongside diagnosis and comorbidity index. We evaluated models on predictive accuracy in and out of sample, on the Akaike and Bayesian information criteria, and on the precision of the treatment effect estimate.
Results: Our approach using an additional covariate yielded a major improvement in model accuracy: R² increased from 0.14 to 0.23, and model performance also improved on predictive accuracy and information criteria. Treatment effect estimates and conclusions were unaffected. Our approach with respect to the treatment variable yielded no substantial improvements in model performance, but post hoc analyses show an association between the treatment effect estimate and estimated LOS at baseline.
Conclusion: Allocation of scarce palliative care capacity and value-based reimbursement models should take into consideration when and for whom the intervention has the largest impact on treatment choices. An individual’s predicted LOS at baseline is useful in this context for accurately predicting costs, and potentially has further benefits in modelling treatment effects.
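As a toy illustration of modelling treatment timing as a function of predicted LOS, one might categorize the first palliative-care interaction by the fraction of the predicted stay that has elapsed. The function name and the one-third/two-thirds thresholds below are hypothetical, not taken from the paper:

```python
def timing_category(pc_day: int, predicted_los: float) -> str:
    """Classify the first palliative-care interaction as early/mid/late
    relative to the patient's predicted LOS at admission (toy thresholds)."""
    fraction = pc_day / predicted_los
    if fraction <= 1 / 3:
        return "early"
    elif fraction <= 2 / 3:
        return "mid"
    return "late"
```

For example, a first consultation on day 2 of a predicted 12-day stay would count as "early", while day 10 would count as "late".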



2021 ◽  
pp. 002203452110252
Author(s):  
M.C. Menne ◽  
G. Seitidis ◽  
C.M. Faggion ◽  
D. Mavridis ◽  
N. Pandis

Differences in effect estimates between early primary trials included in a meta-analysis and the pooled estimate of the meta-analysis might indicate potential novelty bias. The objective of this study was to assess the presence of novelty bias in a sample of studies published in periodontology and implant dentistry. On August 7, 2020, we searched the PubMed database for meta-analyses of clinical studies published between August 2015 and August 2020. Meta-analyses with at least 4 primary studies were selected for assessment. We fitted logistic regression models using trial characteristics as predictors to assess the association between these characteristics and 1) the odds that the first trial’s estimate is included in the meta-analysis confidence interval (CI) and 2) the odds of overlap between the first trial’s CI and the meta-analysis prediction interval (PI). Ninety-two meta-analyses provided data for assessment. In absolute values, 70% of the meta-analyses had a pooled estimate smaller than the corresponding estimate of the first trial, although the CI of the estimate from the first trial overlapped with that of the meta-analysis in 87% of cases. This is probably due to the small number of trials in most meta-analyses and the consequently large uncertainty associated with the pooled effect estimate. As the number of trials in the meta-analysis increased, the odds that the treatment effect estimate of the first trial was included in the meta-analysis CI decreased by 15% with every additional trial (odds ratio, 0.85; 95% CI, 0.73 to 0.96). Meta-analytic effect estimates appear to be more conservative than those from the first trial in the meta-analysis. Our findings show evidence of novelty bias in periodontology and implant dentistry; clinicians should therefore be cautious about making decisions based on the information reported in new trials, whose estimates may be exaggerated.
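The reported association can be made concrete with a small arithmetic sketch: with an odds ratio of 0.85 per additional trial (the value from the abstract), the odds of inclusion after k extra trials scale by 0.85^k. The baseline odds below are hypothetical:

```python
def scaled_odds(baseline_odds: float, extra_trials: int,
                or_per_trial: float = 0.85) -> float:
    """Odds after applying the per-trial odds ratio k times
    (multiplicative on the odds scale)."""
    return baseline_odds * or_per_trial ** extra_trials
```

So, relative to a meta-analysis with 4 trials, one with 9 trials would carry odds of inclusion scaled by roughly 0.85^5 ≈ 0.44, less than half.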



2020 ◽  
Vol 20 (1) ◽  
Author(s):  
Stella Erdmann ◽  
Marietta Kirchner ◽  
Heiko Götte ◽  
Meinhard Kieser

Abstract. Background: Go/no-go decisions after phase II and the sample size chosen for phase III are usually based on phase II results (e.g., the treatment effect estimate of phase II). Due to the decision rule (only promising phase II results lead to phase III), treatment effect estimates from phase II that initiate a phase III trial commonly overestimate the true treatment effect. Underpowered phase III trials are the consequence. Optimistic findings may then not be reproduced, leading to the failure of potentially expensive drug development programs. For some disease areas, failure rates as high as 62.5% have been reported.
Methods: We integrate the ideas of multiplicative and additive adjustment of treatment effect estimates after go decisions into a utility-based framework for optimizing drug development programs. The design of a phase II/III program, i.e., the “right amount of adjustment”, the allocation of resources to phases II and III in terms of sample size, and the rule applied to decide whether to stop or to proceed with phase III, influences its success considerably. Given specific drug development program characteristics (e.g., fixed and variable per-patient costs for phases II and III, probable gain in case of market launch), optimal designs with respect to the maximal expected utility can be identified by the proposed Bayesian-frequentist approach. The method is illustrated by application to practical examples characteristic of oncological studies.
Results: In general, our results show that program set-ups with an adjusted treatment effect estimate used for phase III planning are superior to “naïve” program set-ups with respect to the maximal expected utility. We therefore recommend considering an adjusted phase II treatment effect estimate for the phase III sample size calculation. However, there is no one-size-fits-all design.
Conclusion: Individual drug development planning for a specific program is necessary to find the optimal design. The optimal choice of design parameters for a specific drug development program at hand can be found with our user-friendly R Shiny application and package (both accessible open-source via [1]).
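A minimal sketch of the two adjustment ideas and their downstream effect on phase III sample size, assuming a standard two-arm z-test formula; the retention factor and shift values are hypothetical placeholders, not the optimized amounts from the paper:

```python
from math import ceil
from statistics import NormalDist


def adjusted_estimate(phase2_effect: float, method: str = "multiplicative",
                      factor: float = 0.9, shift: float = 0.05) -> float:
    """Multiplicative adjustment shrinks the phase II estimate by a
    retention factor; additive adjustment subtracts a fixed amount."""
    if method == "multiplicative":
        return phase2_effect * factor
    return max(phase2_effect - shift, 0.0)


def phase3_n_per_arm(delta: float, sd: float = 1.0,
                     alpha: float = 0.05, power: float = 0.9) -> int:
    """Two-arm z-test sample size per arm:
    n = 2 * (z_{1-alpha/2} + z_{power})^2 * sd^2 / delta^2."""
    z = NormalDist().inv_cdf
    return ceil(2 * (z(1 - alpha / 2) + z(power)) ** 2 * sd ** 2 / delta ** 2)
```

Shrinking the phase II estimate before planning necessarily raises the planned phase III sample size, which is the mechanism guarding against the overestimation induced by the go/no-go selection.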



2020 ◽  
Vol 39 ◽  
pp. 101865
Author(s):  
Katherine Riester ◽  
Ludwig Kappos ◽  
Krzysztof Selmaj ◽  
Stacy Lindborg ◽  
Ilya Lipkovich ◽  
...  


2019 ◽  
pp. 004912411985237
Author(s):  
Roberto V. Penaloza ◽  
Mark Berends

To measure “treatment” effects, social science researchers typically rely on nonexperimental data. In education, school and teacher effects on students are often measured through value-added models (VAMs) that are not fully understood. We propose a framework that relates to the education production function in its most flexible form and connects with the basic VAMs without using untenable assumptions. We illustrate how, due to measurement error (ME), cross-group imbalances created by nonrandom group assignment cause correlations that drive the models’ treatment-effect estimate bias. We derive formulas to calculate the bias, rank the models, and show that no model is best in all situations. The workings of the framework and formulas are verified and illustrated via simulation. We also evaluate the performance of latent variable/errors-in-variables models that handle ME and study the role of extra covariates, including lags of the outcome.
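The core mechanism, ME in a covariate biasing the estimated coefficient, can be seen in a minimal simulation (illustrative sample size and variances; this is the textbook attenuation case, not the paper's full VAM setting). The true slope is 1 and the reliability is var(x)/(var(x)+var(e)) = 0.5, so the OLS slope on the noisy covariate should land near 0.5:

```python
import random

random.seed(0)
n = 20000
x = [random.gauss(0.0, 1.0) for _ in range(n)]
y = [xi + random.gauss(0.0, 0.5) for xi in x]      # true model: y = x + noise
x_obs = [xi + random.gauss(0.0, 1.0) for xi in x]  # observed covariate with ME

# OLS slope of y on the error-laden covariate: cov(x_obs, y) / var(x_obs)
mx = sum(x_obs) / n
my = sum(y) / n
cov = sum((a - mx) * (b - my) for a, b in zip(x_obs, y)) / (n - 1)
var = sum((a - mx) ** 2 for a in x_obs) / (n - 1)
slope = cov / var  # attenuated toward reliability * true slope = 0.5
```

With nonrandom group assignment, this same attenuation leaks into the treatment coefficient because the treatment indicator is correlated with the mismeasured covariate.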



F1000Research ◽  
2018 ◽  
Vol 7 ◽  
pp. 610 ◽  
Author(s):  
Theodoros Papakonstantinou ◽  
Adriani Nikolakopoulou ◽  
Gerta Rücker ◽  
Anna Chaimani ◽  
Guido Schwarzer ◽  
...  

In network meta-analysis, it is important to assess the influence of the limitations or other characteristics of individual studies on the estimates obtained from the network. The proportion contribution matrix, which shows how much each direct treatment effect contributes to each treatment effect estimate from network meta-analysis, is crucial in this context. We use ideas from graph theory to derive the proportion that is contributed by each direct treatment effect. We start with the ‘projection’ matrix in a two-step network meta-analysis model, called the H matrix, which is analogous to the hat matrix in a linear regression model. We develop a method to translate H entries to proportion contributions based on the observation that the rows of H can be interpreted as flow networks, where a stream is defined as the composition of a path and its associated flow. We present an algorithm that identifies the flow of evidence in each path and decomposes it into direct comparisons. To illustrate the methodology, we use two published networks of interventions. The first compares no treatment, quinolone antibiotics, non-quinolone antibiotics and antiseptics for underlying eardrum perforations and the second compares 14 antimanic drugs. We believe that this approach is a useful and novel addition to network meta-analysis methodology, which allows the consistent derivation of the proportion contributions of direct evidence from individual studies to network treatment effects.
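The stream-extraction step can be sketched on a toy network (a hypothetical helper, much simplified from the published algorithm, which operates on rows of H): repeatedly find a source-to-sink path with positive evidence flow, peel off the minimum flow along it as a stream, and credit that flow to the path's direct comparisons; here the stream's flow is split equally across the path's edges as one simple crediting rule:

```python
def stream_decompose(flows, source, sink):
    """Decompose edge flows (one row of H, as a dict (u, v) -> flow) into
    streams and return each direct comparison's contribution."""
    contrib = {e: 0.0 for e in flows}
    remaining = dict(flows)

    def find_path(u, visited):
        # Depth-first search for a path with strictly positive residual flow.
        if u == sink:
            return [u]
        for (a, b), f in remaining.items():
            if a == u and f > 1e-12 and b not in visited:
                rest = find_path(b, visited | {b})
                if rest:
                    return [u] + rest
        return None

    while True:
        path = find_path(source, {source})
        if not path:
            break
        edges = list(zip(path, path[1:]))
        stream = min(remaining[e] for e in edges)  # the stream's flow
        for e in edges:
            remaining[e] -= stream
            contrib[e] += stream / len(edges)
    return contrib
```

On a triangle where the A-vs-C estimate draws half its flow from the direct A-C comparison and half from the indirect A-B-C path, the direct comparison contributes 0.5 and each indirect edge 0.25, summing to 1.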



2016 ◽  
Vol 27 (6) ◽  
pp. 1830-1846 ◽  
Author(s):  
Martin Posch ◽  
Florian Klinglmueller ◽  
Franz König ◽  
Frank Miller

Blinded sample size reassessment is a popular means to control the power in clinical trials if no reliable information on nuisance parameters is available in the planning phase. We investigate how sample size reassessment based on blinded interim data affects the properties of point estimates and confidence intervals for parallel group superiority trials comparing the means of a normal endpoint. We evaluate the properties of two standard reassessment rules that are based on the sample size formula of the z-test, derive the worst case reassessment rule that maximizes the absolute mean bias and obtain an upper bound for the mean bias of the treatment effect estimate.
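One reason blinded reassessment affects estimates at all is that the blinded (pooled, label-free) variance estimate is inflated: pooling both arms adds δ²/4 to the within-group variance. A minimal simulation with illustrative values (σ = 1, δ = 1, so the blinded estimate should land near 1.25); the worst-case reassessment rule and the bias bound derived in the paper are beyond this sketch:

```python
import random

random.seed(1)
sigma, delta, n_per_arm = 1.0, 1.0, 50000
# Pool both arms as a blinded analyst would, without treatment labels.
pooled = ([random.gauss(0.0, sigma) for _ in range(n_per_arm)]
          + [random.gauss(delta, sigma) for _ in range(n_per_arm)])
m = sum(pooled) / len(pooled)
# One-sample variance of the pooled data: approx. sigma^2 + delta^2 / 4.
blinded_var = sum((v - m) ** 2 for v in pooled) / (len(pooled) - 1)
```

Plugging this inflated variance into the z-test sample size formula is what couples the reassessed sample size to the (unobserved) treatment effect, which in turn can bias the final effect estimate.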



2016 ◽  
Vol 5 (1) ◽  
Author(s):  
Nicole Bohme Carnegie ◽  
Rui Wang ◽  
Victor De Gruttola

Abstract. An issue that remains challenging in the field of causal inference is how to relax the assumption of no interference between units. Interference occurs when the treatment of one unit can affect the outcome of another, a situation likely to arise with outcomes that may depend on social interactions, such as occurrence of infectious disease. Existing methods to accommodate interference largely depend upon an assumption of “partial interference” – interference only within identifiable groups but not among them. There remains a considerable need for methods that allow further relaxation of the no-interference assumption. This paper focuses on an estimand that is the difference between the outcome one would observe if the treatment were provided to all clusters and the outcome if treatment were provided to none – referred to as the overall treatment effect. In trials of infectious disease prevention, the randomized treatment effect estimate will be attenuated relative to this overall treatment effect if a fraction of the exposures in the treatment clusters comes from individuals who are outside these clusters. This source of interference – contacts sufficient for transmission that occur with treated clusters – is potentially measurable. In this manuscript, we leverage epidemic models to infer the way in which a given level of interference affects the incidence of infection in clusters. This leads naturally to an estimator of the overall treatment effect that is easily implemented using existing software.
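The attenuation logic can be illustrated with a deliberately simple toy calculation. The linear relationship below is an illustrative assumption only; the paper infers the actual relationship from epidemic models rather than assuming proportionality:

```python
def attenuated_contrast(overall_effect: float, outside_fraction: float) -> float:
    """Toy linear attenuation: if a fraction of transmission-relevant
    exposures in treated clusters comes from untreated outsiders, the
    randomized contrast shrinks proportionally (illustrative assumption)."""
    return overall_effect * (1.0 - outside_fraction)


def overall_from_randomized(randomized_effect: float,
                            outside_fraction: float) -> float:
    """Invert the toy attenuation to recover the overall treatment effect
    from the randomized estimate and a measured mixing fraction."""
    return randomized_effect / (1.0 - outside_fraction)
```

Under this toy model, a randomized estimate of 0.3 with a quarter of exposures coming from outside the cluster would correspond to an overall treatment effect of 0.4.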


