Treatment Effects: Recently Published Documents

Total documents: 4665 (five years: 1131)
H-index: 115 (five years: 15)

2022 · Vol 54 (8) · pp. 1–36
Author(s): Weijia Zhang, Jiuyong Li, Lin Liu

A central question in many fields of scientific research is to determine how an outcome is affected by an action, i.e., to estimate the causal effect or treatment effect of an action. In recent years, in areas such as personalised healthcare, sociology, and online marketing, a need has emerged to estimate heterogeneous treatment effects with respect to individuals of different characteristics. To meet this need, two major approaches have been taken: treatment effect heterogeneity modelling and uplift modelling. Researchers and practitioners in different communities have developed algorithms based on these approaches to estimate heterogeneous treatment effects. In this article, we present a unified view of these two seemingly disconnected yet closely related approaches under the potential outcome framework. We provide a structured survey of existing methods following either of the two approaches, emphasising their inherent connections and using unified notation to facilitate comparisons. We also review the main applications of the surveyed methods in personalised marketing, personalised medicine, and sociology. Finally, we summarise and discuss the available software packages and source code in terms of their coverage of different methods and applicability to different datasets, and we provide general guidelines for method selection.
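To make the estimation task concrete, here is a minimal sketch of one of the simplest heterogeneous-treatment-effect estimators under the potential outcome framework, a T-learner: fit one outcome model per treatment arm and take the difference of their predictions. This is an illustration on synthetic data, not code from the surveyed article; the function name `t_learner_cate` and the linear outcome models are assumptions made for brevity.

```python
import numpy as np

def t_learner_cate(X, t, y):
    """T-learner: fit a separate linear outcome model for treated (t=1)
    and control (t=0) units, then estimate each unit's conditional average
    treatment effect (CATE) as the difference of the two predictions."""
    Xb = np.column_stack([np.ones(len(X)), X])          # add intercept column
    beta1, *_ = np.linalg.lstsq(Xb[t == 1], y[t == 1], rcond=None)
    beta0, *_ = np.linalg.lstsq(Xb[t == 0], y[t == 0], rcond=None)
    return Xb @ beta1 - Xb @ beta0                      # per-unit effect estimates

# Synthetic randomised experiment with a known heterogeneous effect tau(x) = 2x
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 1))
t = rng.integers(0, 2, size=2000)                       # random treatment assignment
y = X[:, 0] + t * (2 * X[:, 0]) + rng.normal(scale=0.1, size=2000)
cate = t_learner_cate(X, t, y)
```

Because the treatment here is randomised and the outcome models are correctly specified, the per-unit estimates track the true effect 2x closely; uplift modelling methods target the same quantity, typically with ranking-oriented loss functions instead of squared error.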


2022 · pp. 81–101
Author(s): Cheng Hsiao, Yan Shen, Qiankun Zhou

2022
Author(s): Michail Belias, Maroeska M. Rovers, Jeroen Hoogland, Johannes B. Reitsma, Thomas P. A. Debray, ...

2022 · pp. 001316442110688
Author(s): Yasuo Miyazaki, Akihito Kamata, Kazuaki Uekawa, Yizhi Sun

This paper investigated the consequences of measurement error in the pretest for the estimate of the treatment effect in a pretest–posttest design with the analysis of covariance (ANCOVA) model, focusing on both the direction and magnitude of the resulting bias. Some prior studies have examined the magnitude of the bias due to measurement error and suggested ways to correct it; however, none of them clarified how the direction of the bias is affected by measurement error. This study analytically derived a formula for the asymptotic bias of the treatment effect estimate. The derived formula is a function of the reliability of the pretest, the standardized population group mean difference on the pretest, and the correlation between pretest and posttest true scores. It revealed a concerning consequence of ignoring measurement error in pretest scores: treatment effects may be overestimated or underestimated, and a positive treatment effect can even be estimated as negative under certain conditions. A simulation study was also conducted to verify the derived bias formula.
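The sign-flip phenomenon the abstract describes is easy to reproduce. Below is a minimal simulation, not taken from the paper, assuming a simple linear data-generating process: when groups differ on the true pretest and the observed pretest is measured with error (here, within-group reliability 0.5), ANCOVA under-adjusts for the pretest difference and the bias is absorbed by the treatment coefficient.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
z = rng.integers(0, 2, size=n)                         # treatment indicator
# Non-equivalent groups: treated units start 2 SD lower on the true pretest
true_pre = rng.normal(loc=np.where(z == 1, -1.0, 1.0), scale=1.0)
# Observed pretest has classical measurement error (within-group reliability 0.5)
obs_pre = true_pre + rng.normal(scale=1.0, size=n)
# True treatment effect is +0.2; posttest depends on the TRUE pretest
post = 1.0 * true_pre + 0.2 * z + rng.normal(scale=0.5, size=n)

# ANCOVA: regress posttest on the error-prone observed pretest and treatment
X = np.column_stack([np.ones(n), obs_pre, z])
beta, *_ = np.linalg.lstsq(X, post, rcond=None)
tau_hat = beta[2]                                      # biased treatment effect estimate
```

With these parameters the pretest slope is attenuated from 1.0 to about 0.5, and the treatment coefficient converges to roughly 0.2 + (1 - 0.5) × (-2) = -0.8: a true positive effect of +0.2 is estimated as a sizeable negative effect, consistent with the direction-of-bias result the abstract reports.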


2022 · Vol 22 (1)
Author(s): Mei Liu, Wen Wang, Mingqi Wang, Qiao He, Ling Li, ...

Abstract
Background: In recent years, studies using routinely collected data (RCD), such as electronic medical records and administrative claims, to explore drug treatment effects, including effectiveness and safety, have been increasingly published. Abstracts of such studies are a highly attended source for busy clinicians and policy-makers, and are important for indexing by literature databases. If poorly presented, they may mislead decisions or indexing. We therefore conducted a cross-sectional survey to systematically examine how the abstracts of such studies were reported.
Methods: We searched PubMed to identify all observational studies published in 2018 that used RCD for assessing drug treatment effects. Teams of methods-trained reviewers collected data from eligible studies using pilot-tested, standardized forms that were developed and expanded from the RECORD-PE statement ("The REporting of studies Conducted using Observational Routinely collected health Data for PharmacoEpidemiology"). We used descriptive analyses to examine how authors reported the data source, study design, data analysis, and interpretation of findings.
Results: A total of 222 studies were included, of which 118 (53.2%) reported the type of database used, 17 (7.7%) clearly reported database linkage, and 140 (63.1%) reported the coverage of the data source. Only 44 (19.8%) studies stated a predefined hypothesis, 127 (57.2%) reported the study design, 140 (63.1%) reported the statistical models used, 142 (77.6%) reported adjusted estimates, 33 (14.9%) mentioned sensitivity analyses, and 39 (17.6%) made a strong claim about treatment effect. Studies published in the top 5 general medicine journals were more likely than those in other journals to report the name of the data source (94.7% vs. 67.0%) and the study design (100% vs. 53.2%).
Conclusions: Under-reporting of key methodological features in abstracts of RCD studies was common, which may substantially compromise the indexing of this type of literature and prevent the effective use of study findings. Substantial efforts to improve the reporting of abstracts in these studies are highly warranted.

