Methods in Comparative Effectiveness Research

2012 ◽  
Vol 30 (34) ◽  
pp. 4208-4214 ◽  
Author(s):  
Katrina Armstrong

Comparative effectiveness research (CER) seeks to assist consumers, clinicians, purchasers, and policy makers in making informed decisions to improve health care at both the individual and population levels. CER includes evidence generation and evidence synthesis. Randomized controlled trials are central to CER because of the lack of selection bias, with the recent development of adaptive and pragmatic trials increasing their relevance to real-world decision making. Observational studies comprise a growing proportion of CER because of their efficiency, generalizability to clinical practice, and ability to examine differences in effectiveness across patient subgroups. Concerns about selection bias in observational studies can be mitigated by measuring potential confounders and by analytic approaches, including multivariable regression, propensity score analysis, and instrumental variable analysis. Evidence synthesis methods include systematic reviews and decision models. Systematic reviews are a major component of evidence-based medicine and can be adapted to CER by broadening the types of studies included and examining the full range of benefits and harms of alternative interventions. Decision models are particularly suited to CER because they make quantitative estimates of expected outcomes based on data from a range of sources. These estimates can be tailored to patient characteristics and can include economic outcomes to assess cost effectiveness. The choice of method for CER is driven by the relative weight placed on concerns about selection bias and generalizability, as well as by pragmatic concerns related to data availability and timing. Value of information methods can identify priority areas for investigation and inform the choice of research methods.
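The analytic approaches named above can be made concrete with a small example. The following is a minimal sketch, in Python, of one of them: propensity score analysis via inverse-probability-of-treatment weighting. It assumes a pandas DataFrame with a binary treatment indicator, an outcome column, and measured confounders; the column names and model choices are illustrative assumptions, not taken from the article.

```python
# Minimal sketch: propensity-score (IPTW) adjustment of an observational
# treatment comparison. Column names ('treated', 'outcome') and the list of
# confounders are illustrative assumptions, not from the article.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def iptw_effect(df: pd.DataFrame, confounders: list) -> float:
    """Estimate a treatment effect with stabilized inverse-probability weights."""
    X = df[confounders].to_numpy()
    z = df["treated"].to_numpy()
    y = df["outcome"].to_numpy()

    # 1. Propensity score: probability of treatment given measured confounders.
    ps = LogisticRegression(max_iter=1000).fit(X, z).predict_proba(X)[:, 1]

    # 2. Stabilized inverse-probability-of-treatment weights.
    w = np.where(z == 1, z.mean() / ps, (1 - z.mean()) / (1 - ps))

    # 3. Weighted difference in mean outcomes between treated and untreated.
    treated_mean = np.average(y[z == 1], weights=w[z == 1])
    control_mean = np.average(y[z == 0], weights=w[z == 0])
    return treated_mean - control_mean
```

In practice the weighting step would be followed by a check of covariate balance in the weighted sample; multivariable regression and instrumental variable analysis address the same confounding concern through different modelling assumptions.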

2011 ◽  
Vol 25 (3) ◽  
pp. 191-209 ◽  
Author(s):  
Maria C. Katapodi ◽  
Laurel L. Northouse

The increased demand for evidence-based health care practices calls for comparative effectiveness research (CER), namely the generation and synthesis of research evidence to compare the benefits and harms of alternative methods of care. A significant contribution of CER is the systematic identification and synthesis of available research studies on a specific topic. The purpose of this article is to provide an overview of methodological issues pertaining to systematic reviews and meta-analyses for investigators conducting CER. A systematic review or meta-analysis is guided by a research protocol, which includes (a) the research question, (b) inclusion and exclusion criteria with respect to the target population and studies, (c) guidelines for obtaining relevant studies, (d) methods for data extraction and coding, (e) methods for data synthesis, and (f) guidelines for reporting results and assessing for bias. This article presents an algorithm for generating evidence-based knowledge by systematically identifying, retrieving, and synthesizing large bodies of research studies. Recommendations for evaluating the strength of evidence, interpreting findings, and discussing clinical applicability are offered.
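Step (e), methods for data synthesis, is where a meta-analysis pools study-level estimates. Below is a minimal sketch, assuming each study contributes an effect size and a standard error; the numbers in the usage example are illustrative placeholders, not data from the article.

```python
# Minimal sketch of inverse-variance pooling for a meta-analysis, with both a
# fixed-effect estimate and a DerSimonian-Laird random-effects estimate.
import numpy as np

def pool_effects(effects, std_errs):
    """Return (fixed-effect estimate, random-effects estimate, tau^2)."""
    effects = np.asarray(effects, dtype=float)
    std_errs = np.asarray(std_errs, dtype=float)

    w = 1.0 / std_errs**2                     # inverse-variance weights
    fixed = np.sum(w * effects) / np.sum(w)   # fixed-effect pooled estimate

    # Between-study heterogeneity (DerSimonian-Laird tau^2).
    q = np.sum(w * (effects - fixed) ** 2)
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)

    w_re = 1.0 / (std_errs**2 + tau2)         # random-effects weights
    random_eff = np.sum(w_re * effects) / np.sum(w_re)
    return fixed, random_eff, tau2

# Hypothetical log risk ratios and standard errors from three studies.
print(pool_effects([-0.25, -0.10, -0.40], [0.12, 0.15, 0.20]))
```

The random-effects estimate is usually reported alongside a heterogeneity statistic so that readers can judge how consistent the pooled studies are.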


2020 ◽  
Vol 55 (3) ◽  
pp. 217-228 ◽  
Author(s):  
Kenneth C. Lam ◽  
Cailee E. Welch Bacon ◽  
Eric L. Sauers ◽  
R. Curtis Bay

Context Calls to conduct comparative effectiveness research (CER) in athletic training to better support patient care decisions have recently circulated. Traditional research methods (eg, randomized controlled trials [RCTs], observational studies) may be ill suited for CER. Thus, innovative research methods are needed to support CER efforts. Objectives To discuss the limitations of traditional research designs in CER studies, describe a novel methodologic approach called the point-of-care clinical trial (POC-CT), and highlight components of the POC-CT (eg, incorporation of an electronic medical record [EMR], Bayesian adaptive feature) that allow investigators to conduct scientifically rigorous studies at the point of care. Description Practical concerns (eg, high costs and limited generalizability of RCTs, the inability to control for bias in observational studies) may stall CER efforts in athletic training. In short, the aim of the POC-CT is to embed a randomized pragmatic trial into routine care; thus, patients are randomized to minimize potential bias, but the study is conducted at the point of care to limit cost and improve the generalizability of the findings. Furthermore, the POC-CT uses an EMR to replace much of the infrastructure associated with a traditional RCT (eg, research team, patient and clinician reminders) and a Bayesian adaptive feature to help limit the number of patients needed for the study. Together, the EMR and Bayesian adaptive feature can improve the overall feasibility of the study and preserve the typical clinical experiences of the patient and clinician. Clinical Advantages The POC-CT includes the basic tenets of practice-based research because studies are conducted at the point of care, in real-life settings, and during routine clinical practice. If implemented effectively, the POC-CT can be seamlessly integrated into daily clinical practice, allowing investigators to establish patient-reported evidence that may be quickly applied to patient care decisions. This design appears to be a promising approach for CER investigations and may help establish a “learning health care system” in the sports medicine community.
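The Bayesian adaptive feature mentioned above can be illustrated with a short sketch. Under a simple Beta-Bernoulli model, the posterior probability that one intervention outperforms another is updated as outcomes accumulate in the EMR, and enrollment stops once that probability crosses a prespecified threshold, which is how the design limits the number of patients needed. The outcome counts and the 0.95 threshold below are illustrative assumptions, not values from the article.

```python
# Minimal sketch of a Bayesian adaptive interim analysis for a two-arm
# point-of-care trial. Counts and thresholds are illustrative only.
import numpy as np

rng = np.random.default_rng(seed=0)

def prob_a_beats_b(success_a, n_a, success_b, n_b, draws=20_000):
    """Posterior P(success rate of A > success rate of B), Beta(1, 1) priors."""
    a = rng.beta(1 + success_a, 1 + n_a - success_a, draws)
    b = rng.beta(1 + success_b, 1 + n_b - success_b, draws)
    return float(np.mean(a > b))

# Interim look: hypothetical EMR counts of 18/30 successes on A vs 11/30 on B.
p = prob_a_beats_b(18, 30, 11, 30)
if p > 0.95:  # prespecified superiority threshold (illustrative)
    print(f"Stop enrollment early: P(A > B) = {p:.2f}")
else:
    print(f"Continue enrolling: P(A > B) = {p:.2f}")
```

In a POC-CT, such interim checks would run automatically on EMR data at prespecified intervals rather than requiring separate research infrastructure.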


2012 ◽  
Vol 16 (4) ◽  
pp. 323-337 ◽  
Author(s):  
Julia Kreis ◽  
Milo A. Puhan ◽  
Holger J. Schünemann ◽  
Kay Dickersin

BMC Medicine ◽  
2021 ◽  
Vol 19 (1) ◽  
Author(s):  
Van Thu Nguyen ◽  
Mishelle Engleton ◽  
Mauricia Davison ◽  
Philippe Ravaud ◽  
Raphael Porcher ◽  
...  

Abstract Background To assess the completeness of reporting, research transparency practices, and risk of selection bias and immortal time bias in observational studies using routinely collected data for comparative effectiveness research. Methods We performed a meta-research study by searching PubMed for comparative effectiveness observational studies evaluating therapeutic interventions using routinely collected data, published in high impact factor journals from 01/06/2018 to 30/06/2020. We assessed the reporting of the study design (i.e., eligibility, treatment assignment, and the start of follow-up). The risk of selection bias and immortal time bias was determined by assessing whether the time of eligibility, the treatment assignment, and the start of follow-up were synchronized to mimic randomization, following the target trial emulation framework. Results Seventy-seven articles were identified. Most studies evaluated pharmacological treatments (69%), with a median sample size of 24,000 individuals. In total, 20% of articles inadequately reported essential information of the study design. One-third of the articles (n = 25, 33%) raised some concerns because of unclear reporting (n = 6, 8%) or were at high risk of selection bias and/or immortal time bias (n = 19, 25%). Only five articles (25%) described a solution to mitigate these biases. Six articles (31%) discussed these biases in the limitations section. Conclusion Reporting of essential information of study design in observational studies remained suboptimal. Selection bias and immortal time bias were common methodological issues that researchers and physicians should be aware of when interpreting the results of observational studies using routinely collected data.
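The selection bias and immortal time bias discussed above arise when eligibility, treatment assignment, and the start of follow-up are not anchored to the same time zero. The sketch below illustrates the alignment step of the target trial emulation framework on routinely collected data; the column names and the 30-day grace period are illustrative assumptions, not from the article.

```python
# Minimal sketch of time-zero alignment for a target trial emulation.
# Columns 'eligible_date', 'treatment_date', 'outcome_date' are hypothetical.
import pandas as pd

def build_cohort(df: pd.DataFrame, grace_days: int = 30) -> pd.DataFrame:
    """Assign exposure at time zero and start follow-up at that same point."""
    out = df.copy()
    out["time_zero"] = out["eligible_date"]

    # Classify as treated only if treatment starts within a short grace period
    # after time zero; later initiation is not credited retroactively, which is
    # what creates immortal time bias in naively defined cohorts.
    delay = (out["treatment_date"] - out["time_zero"]).dt.days
    out["treated"] = delay.between(0, grace_days)

    # Follow-up is counted from time zero for both groups, never from the
    # (later) date of treatment initiation.
    out["followup_days"] = (out["outcome_date"] - out["time_zero"]).dt.days
    return out[out["followup_days"] >= 0]
```

More elaborate emulations handle patients who initiate treatment after the grace period by cloning and censoring, but the core safeguard is the same: synchronize eligibility, assignment, and follow-up to mimic randomization.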

