evidence rating
Recently Published Documents


TOTAL DOCUMENTS

22
(FIVE YEARS 9)

H-INDEX

5
(FIVE YEARS 1)

Author(s):  
Elpida Kontsioti ◽  
Simon Maskell ◽  
Amina Bensalem ◽  
Bhaskar Dutta ◽  
Munir Pirmohamed

AIM: To explore the level of agreement on drug-drug interaction (DDI) information listed in three major online drug information resources (DIRs) in terms of: (1) interacting drug pairs; (2) severity rating; (3) evidence rating and (4) clinical management recommendations. METHODS: We extracted DDI information from the British National Formulary (BNF), Thesaurus, and Micromedex. Following drug name normalisation, we estimated the overlap of the DIRs. We annotated clinical management recommendations either manually, where possible, or through application of a machine learning algorithm. RESULTS: The DIRs contained 51,481 (BNF), 38,037 (Thesaurus), and 65,446 (Micromedex) drug pairs involved in DDIs. The number of common DDIs across the three DIRs was 6,970 (13.54% of BNF, 18.32% of Thesaurus, and 10.65% of Micromedex). Micromedex and Thesaurus overall showed higher levels of similarity in their severity ratings, while the BNF agreed more with Micromedex on the critical severity ratings and with Thesaurus on the least significant ones. Evidence rating agreement between BNF and Micromedex was generally poor. Variation in clinical management recommendations was also identified, with some categories (i.e. Monitor and Adjust dose) showing higher levels of agreement compared to others (i.e. Use with caution, Wash-out, Modify administration). CONCLUSIONS: There is considerable variation in the DDIs included in the examined DIRs, together with variability in categorisation of severity and clinical advice given. DDIs labelled as critical are more likely to appear in multiple DIRs. Such variability in information could have deleterious consequences for patient safety, and there is a need for harmonisation and standardisation.
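The overlap figures reported above (6,970 common DDIs = 13.54% of BNF, 18.32% of Thesaurus, 10.65% of Micromedex) are straightforward set arithmetic over normalised drug pairs. A minimal sketch of that calculation, using tiny hypothetical drug-pair lists rather than the actual BNF/Thesaurus/Micromedex data:

```python
# Sketch of the overlap calculation described above, with hypothetical
# normalised drug pairs; the real study compared ~51k/38k/65k pairs.
def ddi_overlap(*sources):
    """Return the set of drug pairs common to all sources and, for each
    source, that common set's share of the source as a percentage."""
    # frozenset makes pair order irrelevant: (a, b) == (b, a)
    normalised = [{frozenset(pair) for pair in src} for src in sources]
    common = set.intersection(*normalised)
    shares = [100 * len(common) / len(src) for src in normalised]
    return common, shares

bnf = [("warfarin", "aspirin"), ("simvastatin", "clarithromycin")]
thesaurus = [("aspirin", "warfarin"), ("digoxin", "amiodarone")]
micromedex = [("warfarin", "aspirin"), ("digoxin", "amiodarone"),
              ("simvastatin", "clarithromycin")]

common, shares = ddi_overlap(bnf, thesaurus, micromedex)
# Only the warfarin/aspirin pair appears in all three toy sources.
```

Note that the frozenset normalisation is what makes the reversed pair in Thesaurus match its BNF counterpart, mirroring the drug name normalisation step the study describes.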


2021 ◽  
Author(s):  
Sara Hoy ◽  
Björg Helgadóttir ◽  
Åsa Norman

Abstract Background: To address the effectiveness and sustainability of school-based interventions, there is a need to consider factors affecting implementation success. This rapidly growing field of implementation-focused research is struggling with how to assess and measure implementation-relevant constructs. Earlier research has identified the need for measures that are both psychometrically and pragmatically strong. The aims of this review are therefore to (i) systematically review the literature to identify measurements of factors influencing implementation that have been developed or adapted in school settings, (ii) describe each measurement's psychometric and pragmatic properties, and (iii) describe the alignment between each measurement and the corresponding domain and/or construct of the Consolidated Framework for Implementation Research (CFIR). Methods: Six databases (Medline, ERIC, PsycInfo, Cinahl, Embase, and Web of Science) will be searched for peer-reviewed articles reporting on primary and secondary school settings, published from the year 2000 onwards. Our search string will be built on three core levels of terms: (a) implementation, (b) measurement, and (c) school settings. Two independent researchers will screen articles and extract data, with a third researcher cross-checking the process. The identified measurements will be mapped against CFIR domains and constructs, and analyzed for their psychometric and pragmatic properties using the Psychometric and Pragmatic Evidence Rating Scale (PAPERS), as well as for their measurement equivalence (invariance). The protocol follows the Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols (PRISMA-P). Discussion: By identifying measurements that are psychometrically and pragmatically strong, this review can contribute to the identification of feasible, effective, and sustainable implementation strategies.
By highlighting the gaps within the range of constructs covered by the tools retrieved, this review may also provide insights for future research and for where resources could be allocated. Combined, this review will provide a greater understanding of factors influencing the implementation of initiatives within school settings, and of how these can be further studied. Registration: This review has been submitted for prospective registration on PROSPERO (ID: 284741); submission occurred on 12 October 2021.


2021 ◽  
pp. ebmental-2020-300170
Author(s):  
Linan Zeng ◽  
Romina Brignardello-Petersen ◽  
Gordon Guyatt

The Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach offers a widely adopted, transparent and structured process for developing and presenting summaries of evidence, including the certainty of evidence, for systematic reviews and recommendations in healthcare. GRADE originally defined certainty of evidence as ‘the extent of our confidence that the estimates of the effect are correct (in the context of a systematic review), or are adequate to support a particular decision or recommendation (in the context of a guideline)’. Recognising the incoherence in this conceptualisation, the GRADE working group redefined certainty of evidence as ‘the certainty that a true effect lies on one side of a specified threshold, or within a chosen range’. Under the new conceptualisation, in the context of both systematic reviews and health technology assessments, GRADE users should specify the thresholds and clarify the effect of which they are certain. To help users apply GRADE in accordance with the new conceptualisation, GRADE defines three levels of contextualisation, minimally, partially and fully contextualised approaches, and provides possible thresholds for each level. In this article, we use a hypothetical systematic review to illustrate the application of the minimally and partially contextualised approaches, and discuss the application of a fully contextualised approach in deciding what we are rating our certainty about (i.e. the target of the rating of certainty of evidence).
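The threshold idea can be made concrete: rating certainty that a true effect lies on one side of a threshold amounts, for an interval estimate, to checking where the interval falls relative to that threshold. A toy illustration of this logic (a sketch of the concept, not GRADE methodology itself):

```python
# Toy illustration of the redefined certainty target: an interval lying
# entirely on one side of a decision threshold supports certainty about
# that side; an interval spanning it does not.
def side_of_threshold(ci_low, ci_high, threshold):
    """Classify a confidence interval relative to a decision threshold."""
    if ci_low > threshold:
        return "entirely above threshold"
    if ci_high < threshold:
        return "entirely below threshold"
    return "crosses threshold"

# e.g. a risk ratio interval of (1.10, 1.45) against a null threshold of 1.0
# lies entirely above it, supporting certainty that the effect is harmful
# (or beneficial, depending on the outcome's direction).
```

In a minimally contextualised approach the threshold would typically be the null effect; partially and fully contextualised approaches move it to clinically meaningful values, but the classification step stays the same.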


BJPsych Open ◽  
2021 ◽  
Vol 7 (S1) ◽  
pp. S302-S302
Author(s):  
Jennifer Wood ◽  
Sarah Verity

Aims: As the number of survivors of childhood brain tumour grows, fatigue is increasingly recorded as a long-term consequence of both the cancer itself and the treatment received. Survivors of childhood brain tumour report more significant fatigue than children with other cancers, often impacting all aspects of life, including academic attainment, self-concept and social relationships with peers, and leading to reduced health-related quality of life. This study aimed to systematically evaluate the evidence for fatigue in paediatric brain tumour survivors. Method: A systematic search using EMBASE, MEDLINE and PsycINFO identified 20 papers meeting the inclusion criteria. Scientific rigour was maintained throughout by following Scottish Intercollegiate Guidelines Network (2015) guidance for systematic reviews. A Quality Assessment of Evidence Rating tool – Fatigue (QAERT) was developed, with substantial inter-rater agreement found. Result: 19 of the 20 studies reviewed showed conclusive evidence of fatigue in survivors of paediatric brain tumour. One study offered adequate evidence that there was no difference in levels of fatigue in paediatric cancer survivors, including survivors of paediatric brain tumour, when compared to healthy controls. Three studies found that fatigue was worse in survivors of paediatric brain tumour than in survivors of other paediatric cancers. Conclusion: This review provides evidence for the presence of fatigue in survivors of paediatric brain tumour. However, the construct of fatigue was poorly defined throughout: fatigue associated with the physical effects of treatment was not distinguished from fatigue associated with long-term cognitive impairment. This poor construct validity, coupled with a lack of comparison groups in 12 of the 20 studies, reduces the generalisability of the data and its usefulness for developing effective psychological interventions.
Further research, built on a clear definition of the fatigue construct and including well-defined exclusion criteria, is needed to provide a sound basis for improving the quality of life of these children.


2021 ◽  
Vol 2 ◽  
pp. 263348952110373
Author(s):  
Cara C Lewis ◽  
Kayne D Mettert ◽  
Cameo F Stanick ◽  
Heather M Halko ◽  
Elspeth A Nolen ◽  
...  

To rigorously measure the implementation of evidence-based interventions, implementation science requires measures that have evidence of reliability and validity across different contexts and populations. Measures that can detect change over time and impact on outcomes of interest are most useful to implementers. Moreover, measures that fit the practical needs of implementers could be used to guide implementation outside of the research context. To address this need, our team developed a rating scale for implementation science measures that considers their psychometric and pragmatic properties and the evidence available. The Psychometric and Pragmatic Evidence Rating Scale (PAPERS) can be used in systematic reviews of measures, in measure development, and to select measures. PAPERS may move the field toward measures that inform robust research evaluations and practical implementation efforts.


2020 ◽  
Author(s):  
Alberto Frutos Pérez-Surio ◽  
José Manuel Vinuesa Hernando ◽  
Mercedes Arene Mendoza ◽  
María Ángeles Allende Bandrés ◽  
María Aránzazu Alcácera López ◽  
...  

Abstract Background: Evidence-rating systems (ERSs) provide a framework for the systematic evaluation of the quality of individual interventional or observational studies and of the overall body of evidence in meta-analyses. Authors and users of meta-analyses require familiarity with ERSs to determine the level of confidence with which results can be applied. Many ERSs have been published, but no consensus exists regarding best practice for their use. Objective: The aim was to describe patterns of ERS use in meta-analyses of drug therapy published in contemporary high-impact medical journals. Methods: We designed a review. Medline/PubMed was searched to identify meta-analyses evaluating drug therapy in the top five ranked general medical journals from 2012 to 2016. The methods sections of full texts were reviewed to confirm that the meta-analyses evaluated drug therapy and to identify the ERS used to rate individual studies and the overall body of evidence. Frequency of ERS use was analysed using descriptive statistics. Results: The top-ranked journals were Ann Intern Med, BMJ, JAMA, Lancet and PLoS Medicine. Of the 309 results, manual review excluded 111 meta-analyses. Of the 198 evaluated meta-analyses, 86.4% (171) used an ERS; the most commonly used was the Cochrane Risk of Bias Tool, applied in 80.7% (138) of these. An ERS was used to evaluate the body of literature in 19.1% (38) of meta-analyses; the most commonly used of the three such systems was the GRADE methodology. Overall, 14 unique ERSs, including author-defined systems, were used. Conclusions: Most meta-analyses of drug effects in high-impact medical journals evaluated individual studies with an ERS, most commonly the Cochrane Risk of Bias Tool, while use of ERSs to evaluate the body of literature was less frequent. Familiarity with commonly used ERSs may help authors and users of meta-analyses to evaluate and apply their findings.


Author(s):  
Cameo F Stanick ◽  
Heather M Halko ◽  
Elspeth A Nolen ◽  
Byron J Powell ◽  
Caitlin N Dorsey ◽  
...  

Abstract The use of reliable, valid measures in implementation practice will remain limited without pragmatic measures. Previous research identified the need for pragmatic measures, though the characteristics were identified using only expert opinion and literature review. Our team completed four studies to develop stakeholder-driven pragmatic rating criteria for implementation measures. We published Studies 1 (identifying dimensions of the pragmatic construct) and 2 (clarifying the internal structure), which engaged stakeholders (participants in mental health provider and implementation settings) to identify 17 terms/phrases across four categories: Useful, Compatible, Acceptable, and Easy. This paper presents Studies 3 and 4: a Delphi study to ascertain stakeholder-prioritized dimensions within a mental health context, and a pilot study applying the rating criteria. Stakeholders (N = 26) participated in the Delphi and rated the relevance of the 17 terms/phrases to the pragmatic construct. The investigator team further defined and shortened the list, which was then piloted with 60 implementation measures. The Delphi confirmed the importance of all pragmatic criteria but provided little guidance on their relative importance. The investigators removed or combined terms/phrases to obtain 11 criteria. The 6-point rating system assigned to each criterion demonstrated sufficient variability across items. The grey literature did not add critical information. This work produced the first stakeholder-driven rating criteria for assessing whether measures are pragmatic. The Psychometric and Pragmatic Evidence Rating Scale (PAPERS) combines these pragmatic criteria with psychometric rating criteria from previous work. PAPERS can be used to inform the development of implementation measures and to assess the quality of existing measures.
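Aggregating a 6-point rating per criterion into a per-measure score, as a scale like PAPERS does, is simple to sketch. The criterion names and the -1 to 4 scale below are assumptions for illustration, not the published PAPERS criteria or anchors:

```python
# Illustrative aggregation of per-criterion ratings for one measure.
# Criterion names and the six-point -1..4 scale are assumptions for this
# sketch, not the published PAPERS rating anchors.
def score_measure(ratings, scale=range(-1, 5)):
    """Validate that every rating lies on the 6-point scale, then return
    the measure's total and mean score."""
    for criterion, value in ratings.items():
        if value not in scale:
            raise ValueError(f"{criterion}: {value} is off the rating scale")
    total = sum(ratings.values())
    return total, total / len(ratings)

# Hypothetical ratings for one implementation measure:
example = {"brevity": 3, "cost": 4, "readability": 2, "training": 1}
total, mean = score_measure(example)
```

A summed profile like this makes measures directly comparable in a systematic review, which is the use case the abstract describes.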


AORN Journal ◽  
2019 ◽  
Vol 110 (1) ◽  
pp. 5-8
Author(s):  
Amber Wood

2019 ◽  
Vol 109 ◽  
pp. 80-89
Author(s):  
R. Brian Haynes ◽  
Dalton Budhram ◽  
John Cherian ◽  
Emma Iserman ◽  
Alfonso Iorio ◽  
...  

2018 ◽  
Vol 29 (2) ◽  
pp. 174-177
Author(s):  
Sushma Reddy ◽  
Angelo Polito ◽  
Sandra Staveski ◽  
Heidi Dalton

Abstract: There are substantial knowledge gaps, practice variation, and a paucity of controlled trials owing to the relatively small number of patients with critical heart disease. The Pediatric Cardiac Intensive Care Society has recognised this knowledge gap as an area needing a more comprehensive and evidence-based approach to the management of the critically ill child with heart disease. To address this, the Pediatric Cardiac Intensive Care Society created a scientific statements and white papers committee. Scientific statements and white papers will present the current state of the art in areas where controversy exists, providing clinicians with guidance on diagnostic and therapeutic strategies, particularly where evidence-based data are lacking. This paper provides a template for other societies and organisations faced with the task of developing scientific statements and white papers. We describe the methods used to perform a systematic literature search and evidence rating that will be used by all scientific statements and white papers emerging from the Pediatric Cardiac Intensive Care Society. The Society aims to revolutionise the care of children with heart disease by shifting efforts from individual institution-based practices to national standardised protocols, and to lay the groundwork for high-impact multicentre research.

