pragmatic trials
Recently Published Documents

TOTAL DOCUMENTS: 234 (five years: 86)
H-INDEX: 24 (five years: 6)

Trials, 2022, Vol 23 (1)
Author(s): Miranda B. Olson, Ellen M. McCreedy, Rosa R. Baier, Renée R. Shield, Esme E. Zediker, et al.

Abstract

Background: In pragmatic trials, on-site partners, rather than researchers, lead intervention delivery, which may result in implementation variation. There is a need to quantitatively measure this variation. Applying the Framework for Implementation Fidelity (FIF), we develop an approach for measuring variability in site-level implementation fidelity. This approach is then applied to measure site-level fidelity in a cluster-randomized pragmatic trial of Music & Memory℠ (M&M), a personalized music intervention targeting agitated behaviors in residents living with dementia, in US nursing homes (NHs).

Methods: Intervention NHs (N = 27) implemented M&M using a standardized manual, utilizing provided staff trainings and iPods for participating residents. Quantitative implementation data, including iPod metadata (i.e., song title, duration, number of plays), were collected during baseline, 4-month, and 8-month site visits. Three researchers developed four FIF adherence dimension scores. For Details of Content, we independently reviewed the implementation manual and reached consensus on six core M&M components. Coverage was the total number of residents exposed to the music at each NH. Frequency was the percent of participating residents in each NH exposed to M&M at least weekly. Duration was the median minutes of music received per resident-day exposed. Data elements were scaled and summed to generate dimension-level NH scores, which were then summed to create a Composite adherence score. NHs were grouped by tercile (low-, medium-, high-fidelity).

Results: The 27 NHs differed in size, resident composition, and publicly reported quality rating. The Composite score demonstrated significant variation across NHs, ranging from 4.0 to 12.0 (mean 8.0, standard deviation (SD) 2.1). Scaled dimension scores were significantly correlated with the Composite score. However, dimension scores were not highly correlated with each other; for example, the correlation of the Details of Content score with Coverage was τb = 0.11 (p = 0.59) and with Duration was τb = −0.05 (p = 0.78). The Composite score correlated with CMS quality star rating and presence of an Alzheimer's unit, suggesting face validity.

Conclusions: Guided by the FIF, we developed and used an approach to quantitatively measure overall site-level fidelity in a multi-site pragmatic trial. Future pragmatic trials, particularly in the long-term care environment, may benefit from this approach.

Trial registration: ClinicalTrials.gov NCT03821844. Registered on 30 January 2019, https://clinicaltrials.gov/ct2/show/NCT03821844.
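The scoring pipeline described in the Methods (scale each data element, sum elements into dimension scores, sum dimensions into a Composite, then group sites by tercile) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the min-max scaling range and the example dimension values are assumptions.

```python
# Illustrative sketch of a site-level fidelity composite:
# scale per-site dimension values, sum into a composite, label terciles.

def min_max_scale(values, lo=1.0, hi=4.0):
    """Rescale raw values linearly onto [lo, hi] (assumed scaling range)."""
    vmin, vmax = min(values), max(values)
    if vmax == vmin:                       # all sites identical: map to lo
        return [lo] * len(values)
    return [lo + (v - vmin) * (hi - lo) / (vmax - vmin) for v in values]

def composite_scores(dimensions):
    """dimensions: dict of dimension name -> list of per-site raw values.
    Returns one summed composite score per site."""
    scaled = {name: min_max_scale(vals) for name, vals in dimensions.items()}
    n_sites = len(next(iter(dimensions.values())))
    return [sum(scaled[name][i] for name in scaled) for i in range(n_sites)]

def terciles(scores):
    """Label each site low/medium/high by tercile of its composite score."""
    ranked = sorted(range(len(scores)), key=lambda i: scores[i])
    labels = [None] * len(scores)
    cut = len(scores) // 3
    for rank, i in enumerate(ranked):
        labels[i] = "low" if rank < cut else "medium" if rank < 2 * cut else "high"
    return labels
```

Scaling each dimension onto a common range before summing keeps one dimension (e.g., raw minutes of music) from dominating the composite.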


2021, Vol 125 (1), pp. 89-92
Author(s): Brett L. Ecker, Brian C. Brajcich, Ryan J. Ellis, Clifford Y. Ko, Michael I. D'Angelica

Author(s): Cari Levy, Sheryl Zimmerman, Vincent Mor, David Gifford, Sherry A. Greenberg, et al.

Author(s): Pascale Nevins, Shelley Vanderhout, Kelly Carroll, Stuart G. Nicholls, Seana N. Semchishen, et al.

2021, pp. 174077452110466
Author(s): Monica Taljaard, Fan Li, Bo Qin, Caroline Cui, Leyi Zhang, et al.

Background and Aims: We need more pragmatic trials of interventions to improve care and outcomes for people living with Alzheimer's disease and related dementias. However, these trials present unique methodological challenges in their design, analysis, and reporting, often due to the presence of one or more sources of clustering. Failure to account for clustering in the design and analysis can lead to increased risks of Type I and Type II errors. We conducted a review to describe key methodological characteristics and obtain a "baseline assessment" of methodological quality of pragmatic trials in dementia research, with a view to developing new methods and practical guidance to support investigators and methodologists conducting pragmatic trials in this field.

Methods: We used a published search filter in MEDLINE to identify trials more likely to be pragmatic and identified a subset that focused on people living with Alzheimer's disease or other dementias or included them as a defined subgroup. Pairs of reviewers extracted descriptive information and key methodological quality indicators from each trial.

Results: We identified N = 62 eligible primary trial reports published across 36 different journals. There were 15 (24%) individually randomized, 38 (61%) cluster randomized, and 9 (15%) individually randomized group treatment designs; 54 (87%) trials used repeated measures on the same individual and/or cluster over time and 17 (27%) had a multivariate primary outcome (e.g., due to measuring an outcome on both the patient and their caregiver). Of the 38 cluster randomized trials, 16 (42%) did not report sample size calculations accounting for the intracluster correlation and 13 (34%) did not account for intracluster correlation in the analysis. Of the 9 individually randomized group treatment trials, 6 (67%) did not report sample size calculations accounting for intracluster correlation and 8 (89%) did not account for it in the analysis. Of the 54 trials with repeated measurements, 45 (83%) did not report sample size calculations accounting for repeated measurements and 19 (35%) did not utilize at least some of the repeated measures in the analysis. No trials accounted for the multivariate nature of their primary outcomes in the sample size calculation; only one did so in the analysis.

Conclusion: There is a need and opportunity to improve the design, analysis, and reporting of pragmatic trials in dementia research. Investigators should pay attention to the potential presence of one or more sources of clustering. While methods for longitudinal and cluster randomized trials are well developed, accessible resources and new methods for dealing with multiple sources of clustering are required. Involvement of a statistician with expertise in longitudinal and clustered designs is recommended.
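The sample size problem the review highlights has a standard quantitative form: with equal-sized clusters, an individually randomized sample size is inflated by the design effect 1 + (m − 1) × ICC, where m is the average cluster size and ICC is the intracluster correlation. A minimal sketch with illustrative numbers (not drawn from the reviewed trials):

```python
import math

def design_effect(cluster_size, icc):
    """Variance inflation factor for a cluster randomized trial
    with equal cluster sizes: 1 + (m - 1) * ICC."""
    return 1.0 + (cluster_size - 1) * icc

def clustered_sample_size(n_individual, cluster_size, icc):
    """Inflate an individually randomized sample size to account
    for clustering, rounding up to whole participants."""
    return math.ceil(n_individual * design_effect(cluster_size, icc))
```

Even a small ICC matters when clusters are large: with 21 participants per cluster and ICC = 0.05, the required sample size doubles, which is exactly the inflation the under-reporting trials described above would have missed.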


2021, pp. medethics-2021-107765
Author(s): Jennifer Zhe Zhang, Stuart G Nicholls, Kelly Carroll, Hayden Peter Nix, Cory E Goldstein, et al.

Objectives: To describe reporting of informed consent in pragmatic trials, justifications for waivers of consent, and reporting of alternative approaches to standard written consent; and to identify factors associated with (1) not reporting and (2) not obtaining consent.

Methods: Survey of primary trial reports, published 2014–2019, identified using an electronic search filter for pragmatic trials implemented in MEDLINE, and registered in ClinicalTrials.gov.

Results: Among 1988 trials, 132 (6.6%) did not include a statement about participant consent, 1691 (85.0%) reported consent had been obtained, 139 (7.0%) reported a waiver, and 26 (1.3%) reported consent for one aspect (eg, data collection) but a waiver for another (eg, intervention). Of the 165 trials reporting a waiver, 76 (46.1%) provided a justification. Few (53, 2.9%) explicitly reported use of alternative approaches to consent. In multivariable logistic regression analyses, lower journal impact factor (p=0.001) and cluster randomisation (p<0.0001) were significantly associated with not reporting on consent, while trial recency, cluster randomisation, higher-income country settings, health services research and explicit labelling as pragmatic were significantly associated with not obtaining consent (all p<0.0001).

Discussion: Not obtaining consent seems to be increasing and is associated with the use of cluster randomisation and pragmatic aims, but neither cluster randomisation nor pragmatism is currently an accepted justification for waivers of consent. Rather than considering either standard written informed consent or waivers of consent, researchers and research ethics committees could consider alternative consent approaches that may facilitate the conduct of pragmatic trials while preserving patient autonomy and the public's trust in research.
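A multivariable logistic regression like the one described models the log-odds of the outcome (e.g., not obtaining consent) as a linear function of the factors, so each coefficient b corresponds to an adjusted odds ratio exp(b). A generic sketch; the coefficient and covariate names below are placeholders, not the study's estimates:

```python
import math

def odds_ratio(coef):
    """Adjusted odds ratio implied by one logistic regression coefficient."""
    return math.exp(coef)

def predicted_probability(intercept, coefs, covariates):
    """P(outcome = 1) under the logistic model:
    1 / (1 + exp(-(b0 + sum_i b_i * x_i)))."""
    z = intercept + sum(coefs[k] * covariates[k] for k in coefs)
    return 1.0 / (1.0 + math.exp(-z))
```

A positive coefficient (odds ratio above 1) for a factor such as cluster randomisation is what "significantly associated with not obtaining consent" reports, after adjustment for the other covariates.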


Author(s): Ellen McCreedy, Andrea Gilmore‐Bykovskyi, David A. Dorr, Julie Lima, Ellen P. McCarthy, et al.

PLoS ONE, 2021, Vol 16 (11), pp. e0258945
Author(s): Jemima A. Frimpong, Stéphane Helleringer

Exposure notification apps have been developed to assist in notifying individuals of recent exposures to SARS-CoV-2. However, in several countries, such apps have had limited uptake. We assessed whether strategies to increase downloads of exposure notification apps should emphasize improving the accuracy of the apps in recording contacts and exposures, strengthening privacy protections, and/or offering financial incentives to potential users. In a discrete choice experiment with potential app users in the US, financial incentives were more than twice as important in decision-making about app downloads as privacy protections and app accuracy. The probability that a potential user would download an exposure notification app increased by 40% when a $100 reward to download was offered (relative to a reference scenario in which the app is free). Financial incentives might help exposure notification apps reach uptake levels that improve the effectiveness of contact tracing programs and ultimately enhance efforts to control SARS-CoV-2. Rapid, pragmatic trials of financial incentives for app downloads in real-life settings are warranted.
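Discrete choice experiments of this kind are commonly analyzed with a conditional (multinomial) logit, where the probability of choosing an app profile is a softmax of a linear utility in its attributes. A hedged sketch: the attribute names and weights below are invented for illustration and are not the study's estimates.

```python
import math

def choice_probabilities(profiles, weights):
    """Conditional-logit choice probabilities.
    profiles: list of dicts, attribute -> level (e.g. incentive in dollars).
    weights: dict, attribute -> utility coefficient (part-worth)."""
    utilities = [sum(weights[k] * v for k, v in p.items()) for p in profiles]
    m = max(utilities)                     # subtract max for numerical stability
    exps = [math.exp(u - m) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]
```

The relative importance of an attribute is then judged by how much its part-worth shifts choice probabilities over its observed range, which is how "incentives were more than twice as important as privacy or accuracy" is typically quantified.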


Author(s): Rémy Boussageon, Jeremy Howick, Raphael Baron, Florian Naudet, Bruno Falissard, et al.

Aim: The placebo effect and the specific effect are often thought to add up (the additive model). Whether this is true, or whether there is an interaction between the two, can modify the external validity of a trial. This assumption of additivity was tested by Kleijnen et al. in 1994, but the data produced since then have not been synthesized. In this review, we aimed to systematically review the literature to determine whether additivity held.

Methods: We searched MEDLINE and PsycINFO up to 10/01/2019. Studies using the balanced placebo design (BPD), testing two different strengths of placebos, were included. The presence of interaction was evaluated by comparing each group in the BPD with analysis of variance or covariance.

Results: 30 studies were included and the overall risk of bias was high: four found evidence of additivity and 16 studies found evidence of interaction (seven had evidence of positive additivity).

Conclusion: Evidence of additivity between placebo and specific features of treatments was rare in our sample. For ailments that are placebo-responsive, pragmatic trials should be preferred to increase their external validity.
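In a balanced placebo design the additivity question reduces to a 2 × 2 interaction contrast on the cell means (told drug/placebo × received drug/placebo): under the additive model the contrast is zero. A minimal sketch, not the review's code; the cell-mean values in the test are made up.

```python
def interaction_contrast(means):
    """2x2 interaction term from means[told][received] cell means.
    Zero under the additive model: the received-drug effect is the
    same whether participants were told 'drug' or 'placebo'."""
    return (means["drug"]["drug"] - means["drug"]["placebo"]
            - means["placebo"]["drug"] + means["placebo"]["placebo"])
```

In practice the review's ANOVA/ANCOVA comparisons additionally test whether this contrast differs from zero given sampling variability, rather than reading it off point estimates.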

