Validating and Updating GRASP: An Evidence-Based Framework for Grading and Assessment of Clinical Predictive Tools

2020 ◽  
Author(s):  
Mohamed Khalifa ◽  
Farah Magrabi ◽  
Blanca Gallego

Abstract Background: When selecting predictive tools, clinicians are challenged with an overwhelming and ever-growing number, most of which have never been implemented or evaluated for comparative effectiveness. The authors developed an evidence-based framework for grading and assessment of predictive tools (GRASP). The objective of this study was to update GRASP and evaluate its reliability. Methods: A web-based survey was developed to collect responses from a wide international group of experts who had published studies on clinical prediction tools. Experts were invited via email, and their responses were analysed quantitatively and qualitatively using NVivo software. The interrater reliability of the framework, in assigning grades to eight predictive tools by two independent users, was evaluated. Results: We received 81 valid responses. On a five-point Likert scale, experts overall strongly agreed with the GRASP evaluation criteria: 4.35/5, SD=1.01, 95%CI [4.349, 4.354]. Experts strongly agreed with six criteria: predictive performance (4.88/5, SD=0.43, 95%CI [4.87, 4.88]), evidence levels of predictive performance (4.44/5, SD=0.87, 95%CI [4.44, 4.45]), usability (4.68/5, SD=0.70, 95%CI [4.67, 4.68]), potential effect (4.62/5, SD=0.68, 95%CI [4.61, 4.62]), post-implementation impact (4.78/5, SD=0.57, 95%CI [4.78, 4.79]), and evidence direction (4.25/5, SD=0.78, 95%CI [4.25, 4.26]). Experts somewhat agreed with one criterion: post-implementation impact levels (4.18/5, SD=1.14, 95%CI [4.17, 4.19]). Experts were neutral about one criterion, that usability ranks higher than potential effect (2.96/5, SD=1.23, 95%CI [2.95, 2.97]). Sixty-four respondents provided recommendations to six open-ended questions regarding updating the evaluation criteria. Forty-three suggested that potential effect should rank higher than usability. Experts highlighted the importance of the quality of studies and the strength of evidence. Accordingly, the GRASP concept and its detailed report were updated.
The framework’s interrater reliability was tested, and two independent reviewers produced accurate and consistent results in grading eight predictive tools using the framework. Conclusion: Before implementation, internal and external validation of a tool’s predictive performance is essential in evaluating sensitivity and specificity. During planning for implementation, potential effect is more important than usability in evaluating the acceptability of tools to users. Post-implementation, it is crucial to evaluate a tool’s impact on healthcare processes and clinical outcomes. The GRASP framework aims to provide clinicians with a high-level, evidence-based, and comprehensive, yet simple and feasible, approach to evaluate, compare, and select predictive tools.
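The summary statistics reported above (a mean rating out of 5, a sample SD, and a 95% confidence interval) can be reproduced with a short script. The ratings below are synthetic examples, not the study's survey data, and the normal-approximation interval is one common choice among several.

```python
import math

def likert_summary(responses, z=1.96):
    """Mean, sample SD, and normal-approximation 95% CI for Likert ratings."""
    n = len(responses)
    mean = sum(responses) / n
    # Sample standard deviation (n - 1 in the denominator).
    sd = math.sqrt(sum((x - mean) ** 2 for x in responses) / (n - 1))
    half_width = z * sd / math.sqrt(n)
    return mean, sd, (mean - half_width, mean + half_width)

# Synthetic ratings for one criterion (1 = strongly disagree ... 5 = strongly agree).
ratings = [5, 5, 4, 5, 4, 3, 5, 4, 5, 5]
mean, sd, ci = likert_summary(ratings)
print(f"mean={mean:.2f}, SD={sd:.2f}, 95%CI [{ci[0]:.2f}, {ci[1]:.2f}]")
# → mean=4.50, SD=0.71, 95%CI [4.06, 4.94]
```

With the study's n=81, intervals around each criterion mean would be correspondingly narrower.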

Author(s):  
Mohamed Khalifa ◽  
Farah Magrabi ◽  
Blanca Gallego

Abstract Background: When selecting predictive tools, clinicians are challenged with an overwhelming and ever-growing number, most of which have never been implemented or evaluated for comparative effectiveness. To overcome this challenge, the authors developed an evidence-based framework for grading and assessment of predictive tools (GRASP). The objective of this study was to update GRASP and evaluate its reliability. Methods: An online survey was developed to collect responses from a wide international group of experts who had published studies on developing, implementing or evaluating clinical decision support tools. The interrater reliability of the framework, in assigning grades to eight predictive tools by two independent users, was evaluated. Results: Among 882 invited experts, 81 provided valid responses. On a five-point Likert scale, experts overall strongly agreed with the GRASP evaluation criteria (4.35). Experts strongly agreed with six criteria: predictive performance (4.87), predictive performance levels (4.44), usability (4.68), potential effect (4.61), post-implementation impact (4.78) and evidence direction (4.26). Experts somewhat agreed with one criterion: post-implementation impact levels (4.16). Experts were neutral about one criterion, that usability ranks higher than potential effect (2.97). Sixty-four respondents provided recommendations to open-ended questions regarding adding, removing or changing evaluation criteria. Forty-three respondents suggested that potential effect should rank higher than usability. Experts highlighted the importance of reporting the quality of studies and the strength of evidence supporting the grades assigned to predictive tools. Accordingly, the GRASP concept and its detailed report were updated. The updated framework’s interrater reliability was then tested: two independent users produced accurate and consistent results, and the framework was found to be initially reliable.
Conclusion: The GRASP framework grades predictive tools based on critical appraisal of published evidence across three dimensions: phase of evaluation, level of evidence, and direction of evidence. The final grade of a tool is based on the highest phase of evaluation, supported by the highest level of positive evidence, or mixed evidence that supports a positive conclusion. GRASP aims to provide clinicians with a high-level, evidence-based, and comprehensive, yet simple and feasible, approach to evaluating predictive tools, considering their predictive performance before implementation, their usability and potential effect during planning for implementation, and their post-implementation impact on healthcare outcomes.
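Interrater reliability of the kind tested above — two independent users assigning categorical grades to the same eight tools — is commonly quantified with Cohen's kappa, which corrects raw agreement for chance. A minimal sketch follows; the reviewer names and grade letters are hypothetical, not the study's actual data.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters assigning categorical grades."""
    n = len(rater_a)
    # Observed proportion of exact agreements.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal grade frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c]
                   for c in set(rater_a) | set(rater_b)) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical grades for eight tools from two independent reviewers.
reviewer_1 = ["A", "B", "A", "C", "B", "A", "C", "B"]
reviewer_2 = ["A", "B", "A", "C", "B", "B", "C", "B"]
print(cohens_kappa(reviewer_1, reviewer_2))  # ≈ 0.81, substantial agreement
```

Raw percent agreement (7/8 here) overstates reliability when some grades are much more common than others, which is why a chance-corrected statistic is preferred.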


Author(s):  
Mohamed Khalifa ◽  
Farah Magrabi ◽  
Blanca Gallego

Abstract Background Clinical predictive tools quantify the contributions of relevant patient characteristics to derive the likelihood of diseases or predict clinical outcomes. When selecting predictive tools for implementation in clinical practice or for recommendation in clinical guidelines, clinicians are challenged with an overwhelming and ever-growing number of tools, most of which have never been implemented or assessed for comparative effectiveness. To overcome this challenge, we have developed a conceptual framework to Grade and Assess Predictive tools (GRASP) that can provide clinicians with a standardised, evidence-based system to support their search for and selection of efficient tools. Methods A focused review of the literature was conducted to extract criteria along which tools should be evaluated. An initial framework was designed and applied to assess and grade five tools: LACE Index, Centor Score, Wells' Criteria, Modified Early Warning Score, and Ottawa Knee Rule. After peer review by six expert clinicians and healthcare researchers, the framework and the grading of the tools were updated. Results The GRASP framework grades predictive tools based on published evidence across three dimensions: 1) phase of evaluation; 2) level of evidence; and 3) direction of evidence. The final grade of a tool is based on the highest phase of evaluation, supported by the highest level of positive evidence, or mixed evidence that supports a positive conclusion. The Ottawa Knee Rule had the highest grade, having demonstrated positive post-implementation impact on healthcare. The LACE Index had the lowest grade, having demonstrated only pre-implementation positive predictive performance. Conclusion The GRASP framework builds on widely accepted concepts to provide standardised assessment and evidence-based grading of predictive tools.
Unlike other methods, GRASP is based on the critical appraisal of published evidence reporting the tools’ predictive performance before implementation, potential effect and usability during implementation, and their post-implementation impact. Implementing the GRASP framework as an online platform can enable clinicians and guideline developers to access standardised and structured reported evidence of existing predictive tools. However, keeping GRASP reports up-to-date would require updating tools’ assessments and grades when new evidence becomes available, which can only be done efficiently by employing semi-automated methods for searching and processing the incoming information.
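The grading rule described above — the final grade is the highest phase of evaluation supported by the highest level of positive (or mixed-positive) evidence — can be sketched in a few lines. The data structure and the numeric phase/level orderings below are illustrative assumptions, not the framework's exact specification.

```python
# Illustrative orderings: phase 3 (post-implementation impact) outranks
# phase 2 (usability / potential effect), which outranks phase 1
# (predictive performance). Within a phase, a lower level number means
# stronger evidence (e.g. level 1 = strongest study design).

def grasp_grade(evidence):
    """Pick a final grade from (phase, level, direction) evidence items.

    `evidence` is a list of dicts with keys:
      phase: 1, 2 or 3; level: 1 (strongest) .. 3;
      direction: 'positive', 'mixed_positive' or 'negative'.
    Returns (phase, level) of the grade, or None if nothing supports one.
    """
    supportive = [e for e in evidence
                  if e["direction"] in ("positive", "mixed_positive")]
    if not supportive:
        return None
    # Highest phase first, then strongest (lowest-numbered) level.
    best = max(supportive, key=lambda e: (e["phase"], -e["level"]))
    return best["phase"], best["level"]

tool_evidence = [
    {"phase": 1, "level": 1, "direction": "positive"},
    {"phase": 2, "level": 2, "direction": "mixed_positive"},
    {"phase": 3, "level": 2, "direction": "negative"},
]
print(grasp_grade(tool_evidence))  # → (2, 2)
```

In this sketch the negative phase-3 evidence is excluded, so the grade falls back to the highest positively supported phase — mirroring why a tool with only pre-implementation evidence, like the LACE Index above, grades lowest.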


Author(s):  
Sari Hakkarainen ◽  
Darijus Strasunskas ◽  
Lillian Hella ◽  
Stine Tuxen

An ontology is the core component of Semantic Web applications. The choice of ontology building method affects both the quality of the ontology and the applicability of the ontology language. A weighted classification approach for ontology building guidelines is presented in this chapter. The evaluation criteria are based on an existing classification scheme from a semiotic framework for evaluating the quality of conceptual models. A sample of Web-based ontology building method guidelines is evaluated in general, and tested in particular using data from a case study. Directions for further refinement of ontology building methods are discussed.
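A weighted classification over evaluation criteria, as described above, reduces to a weighted average of per-criterion scores. The criterion names and weights below are illustrative placeholders loosely echoing semiotic quality levels, not the chapter's actual scheme.

```python
def weighted_score(scores, weights):
    """Weighted average of per-criterion scores; weights need not sum to 1."""
    total_weight = sum(weights.values())
    return sum(scores[c] * w for c, w in weights.items()) / total_weight

# Illustrative quality criteria scored 0..1 for one set of guidelines.
weights = {"syntactic": 1.0, "semantic": 2.0, "pragmatic": 2.0, "social": 1.0}
scores = {"syntactic": 0.9, "semantic": 0.6, "pragmatic": 0.7, "social": 0.5}
print(weighted_score(scores, weights))  # ≈ 0.67
```

Varying the weights lets an evaluator emphasise, say, pragmatic quality over syntactic polish when ranking candidate methods.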


2012 ◽  
Vol 52 (2) ◽  
pp. 678
Author(s):  
Steven McIntyre

Strategic and operational management in the exploration and production business is characterised by prediction and decision making in a data-rich, high-uncertainty environment. Analyses of predictive performance since the 1970s by multiple researchers indicate that predictions are subject to over-confidence and optimism, negatively impacting performance. The situation is the same in other areas of human endeavour that also operate within data-rich, high-uncertainty environments. Research in the fields of psychology and neuroscience indicates that the way the human brain perceives, integrates and allocates significance to data is the cause. Significant effort has been dedicated to improving the quality of predictions. Many individual companies review their predictive performance over long periods, but few share their data or analysis with the industry at large. Data that is shared is generally presented at a high level, reducing transparency and making it difficult to link the analysis to the geology and data from which the predictions were derived. This extended abstract presents an analysis of predictive performance from the Eromanga Basin, where pre-drill predictions and detailed production data spanning decades are available in the public domain, providing an opportunity to test the veracity of past observations and conclusions. Analysis of the dataset indicates that predictions made using both deterministic and probabilistic methodologies have been characterised by over-confidence and optimism. The reasons for this performance are discussed, and suggestions for improving predictive capability are provided.
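One simple way to quantify the over-confidence described above is to check how often actual outcomes fall inside the predicted P10-P90 range: a well-calibrated forecaster should capture roughly 80% of outcomes. The ranges and outcomes below are illustrative numbers, not the Eromanga Basin data.

```python
def p10_p90_coverage(predictions, actuals):
    """Fraction of actual outcomes inside the predicted P10-P90 interval.

    `predictions` is a list of (p10, p90) pairs with p10 <= p90;
    a well-calibrated set should cover ~0.8 of outcomes.
    """
    hits = sum(p10 <= actual <= p90
               for (p10, p90), actual in zip(predictions, actuals))
    return hits / len(actuals)

# Illustrative pre-drill reserve ranges vs. actual recoveries (same units).
ranges = [(1.0, 5.0), (2.0, 8.0), (0.5, 3.0), (1.5, 6.0), (2.0, 9.0)]
actuals = [0.8, 3.0, 0.2, 2.0, 1.0]
print(p10_p90_coverage(ranges, actuals))  # → 0.4
```

Coverage far below 0.8, as in this illustration, is the signature of ranges drawn too narrowly (over-confidence); outcomes clustering below the range midpoints would additionally indicate optimism.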


2006 ◽  
Vol 24 (18_suppl) ◽  
pp. 8573-8573 ◽  
Author(s):  
M. N. Neuss ◽  
J. O. Jacobson ◽  
C. Earle ◽  
C. E. Desch ◽  
K. McNiff ◽  
...  

8573 Background: Little is known about the quality of end-of-life (EOL) care provided to cancer patients, with data largely available only from administrative databases. QOPI is a practice-based system of quality self-assessment now available to any ASCO physician wishing to participate. QOPI methodology allows comparison of EOL care among practices and provides a basis for self-improvement. Methods: In Summer 2005, during the pilot phase of QOPI, several EOL questions were included in the survey instrument. Practices were requested to review the records of at least 15 patients who had died. Practice members performed standardized chart abstractions, and data were entered directly onto a secure web-based application. A total of 455 charts were abstracted from 22 practices. Results: See table. Conclusion: QOPI provides an effective mechanism for collecting practice-specific EOL data. Aggregate data from the 22 QOPI pilot practices demonstrate a high level of performance compared with results reported from population-based studies. Significant variation among practices is present, representing an opportunity to improve the EOL care of cancer patients. [Table: see text] No significant financial relationships to disclose.


2021 ◽  
Vol 3 (3) ◽  
pp. 994-1056
Author(s):  
Rodolfo Paolucci ◽  
André Pereira Neto

The Internet is a major source of health information, but the poor quality of that information has been criticized for decades. We looked at methods for assessing the quality of health information, updating the findings of the first systematic review from 2002. We searched 9 Health Sciences, Information Sciences, and multidisciplinary databases for studies. We identified 7,718 studies and included 299. Annual publications increased from 9 (2001) to 53 (2013), with 89% from developed countries. We identified 20 areas of knowledge. Six tools have been used worldwide, but 43% of the studies did not use any of them. The methodological framework of criteria from the first review has remained the same. The authors were the evaluators in 80% of the studies. This field of evaluation is expanding. No single instrument covers all the evaluation criteria simultaneously. There is still a need for a methodology involving experts and users, and for evidence-based indicators of accuracy.


2020 ◽  
Vol 04 (03) ◽  
pp. 334-342
Author(s):  
Ahmed Elsakka ◽  
Hooman Yarmohammadi

Abstract Malignant ascites negatively impacts patients' quality of life and has a significant impact on health care resources. The majority of management guidelines are based on systematic reviews that have predominantly relied on retrospective data; therefore, there is a lack of high-level evidence-based studies. In this review, the etiologies, pathophysiology, and various treatment methods, including diuretic therapy, large-volume paracentesis, indwelling catheter placement, peritoneovenous shunt, transjugular intrahepatic portosystemic shunt, and other available novel and/or experimental options, are reviewed.


10.2196/13833 ◽  
2019 ◽  
Vol 21 (9) ◽  
pp. e13833 ◽  
Author(s):  
Nuša Faric ◽  
Henry W W Potts ◽  
Adrian Hon ◽  
Lee Smith ◽  
Katie Newby ◽  
...  

Background Physical activity (PA) is associated with a variety of physical and psychosocial health benefits, but levels of moderate-to-vigorous intensity PA remain low worldwide. Virtual reality (VR) gaming systems involving movement (VR exergames) could be used to engage people in more PA. Objective This study aimed to synthesize public reviews of popular VR exergames to identify common features that players liked or disliked, to inform future VR exergame design. Methods We conducted a thematic analysis of 498 reviews of the 29 most popular exergames sold in the top 3 VR marketplaces: Steam (Valve Corporation), Viveport (HTC), and Oculus (Oculus VR). We categorized reviews as positive or negative as they appeared in the marketplaces and identified the most common themes using an inductive thematic analysis. Results The reviews were often mixed, reporting a wide variety of expectations, preferences, and gaming experiences. Players preferred highly realistic games (eg, closely simulated real-world sport), games that were intuitive (in terms of body movement and controls), and games that provided gradual increases in skill acquisition. Players reported feeling that they reached a high level of exertion when playing and that the immersion distracted them from the intensity of the exercise. Some preferred features included music and the social aspects of the games, with multiplayer options to include friends or receive help from experienced players. There were 3 main themes in negative reviews. The first concerned bugs that rendered games frustrating. Second, the quality of graphics had a particularly strong impact on perceived enjoyment. Finally, reviewers disliked when games had overly complex controls and display functions that evoked motion sickness. Conclusions Exergames prove to be a stimulating avenue for players to engage in PA and distract themselves from the negative perceptions of performing exercise.
The common negative aspects of VR exergames should be addressed for increased uptake and continued engagement.


2008 ◽  
Vol 2008 ◽  
pp. 1-6
Author(s):  
K. Afshar ◽  
A. E. MacNeily

There are many ongoing controversies surrounding vesicoureteral reflux (VUR). These involve various aspects of this common congenital anomaly. A lack of evidence-based recommendations has prolonged the debate. Systematic reviews (SRs) and meta-analyses (MAs) are considered high-level evidence. The purpose of this review article is to summarize and critically appraise the available SRs/MAs pertaining to VUR. We also discuss the strengths and pitfalls of SRs/MAs in general. A thorough literature search identified 9 SRs/MAs relevant to VUR. Both authors critically reviewed these articles for content and methodological issues. There are many concerns about the quality of the studies included in these SRs. Clinical heterogeneity stemming from different patient selection criteria, interventions, and outcome definitions is a major issue. In spite of major advances in understanding different aspects of VUR in the last few decades, there is a paucity of randomized controlled trials in this field.


2019 ◽  
Vol 214 ◽  
pp. 01049
Author(s):  
Alexey Anisenkov ◽  
Daniil Zhadan ◽  
Ivan Logashenko

A comprehensive and efficient environment and data monitoring system is a vital part of any HEP experiment. In this paper we describe the web-based software framework which is currently used by the CMD-3 Collaboration at the VEPP-2000 Collider, and partially by the Muon g-2 experiment at Fermilab, to monitor the status of data acquisition and the quality of data taken by the experiments. The system is designed to meet typical requirements and cover various use cases of DAQ applications, ranging from central configuration and slow control data monitoring to data quality monitoring, user-oriented visualization, and control of the hardware and DAQ processes. Acting as intermediate middleware between the front-end electronics and the DAQ applications, the system focuses on providing a high-level, coherent view for shifters and experts to support robust operations. In particular, it is used to integrate various experiment-dependent monitoring modules and tools into a unified Web-oriented portal with an appropriate access control policy. The paper describes the design and overall architecture of the system, recent developments, and the most important aspects of the framework implementation.
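A portal that integrates many monitoring modules into one coherent view for shifters typically aggregates per-module statuses into a single worst-case indicator. The sketch below illustrates that pattern only; the module names and status scheme are assumptions, not the CMD-3 framework's actual components.

```python
# Severity ordering for a simple OK / WARNING / ERROR status scheme.
SEVERITY = {"OK": 0, "WARNING": 1, "ERROR": 2}

def overall_status(module_statuses):
    """Worst-case status across all registered monitoring modules."""
    return max(module_statuses.values(), key=SEVERITY.__getitem__)

# Hypothetical module reports as a shifter's dashboard might receive them.
statuses = {"slow_control": "OK", "data_quality": "WARNING", "daq": "OK"}
print(overall_status(statuses))  # → WARNING
```

Surfacing only the worst status keeps the top-level view simple while the per-module detail remains one click away, which matches the "high-level coherent view" goal described above.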

