Rheumatic? - A Digital Diagnostic Decision Support Tool for Individuals Suspecting Rheumatic Diseases: A Multicenter Validation Study

Author(s):  
Rachel Knevel ◽  
Johannes Knitza ◽  
Aase Hensvold ◽  
Alexandra Circiumaru ◽  
Tor Bruce ◽  
...  

Abstract
Background: Digital diagnostic decision support tools promise to accelerate diagnosis and increase health care efficiency in rheumatology. Rheumatic? is an online tool developed by specialists in rheumatology and general medicine together with patients and patient organizations. It calculates a risk score for several rheumatic diseases. In the current pilot study, we retrospectively tested Rheumatic? for its ability to differentiate symptoms of immune-mediated diseases from other rheumatic and musculoskeletal complaints and disorders in patients visiting rheumatology clinics.
Methods: The performance of Rheumatic? was tested using data from 175 patients from three university rheumatology centers covering two different settings:
A. Risk-RA phase setting. Here, we tested whether Rheumatic? could predict the development of arthritis in 50 individuals with musculoskeletal complaints and anti-citrullinated protein antibody positivity from the KI (Karolinska Institutet).
B. Early arthritis setting. Here, we tested whether Rheumatic? could predict the development of an immune-mediated rheumatic disease in i) EUMC (Erlangen), n=52 patients, and ii) LUMC (Leiden), n=73 patients.
In each setting, we examined the discriminative power of the total score with the Wilcoxon rank test and the area under the receiver-operating-characteristic curve (AUC-ROC). Next, we calculated the test characteristics for patients in a rheumatology setting who passed the first or second threshold for at least one of the rheumatic diseases.
Results: The total test score clearly differentiated between: A) individuals who did or did not develop arthritis, median 245 versus 163, P < 0.0001, AUC-ROC = 75.3; and B) patients with or without an immune-mediated arthritic disease, in EUMC median 191 versus 107, P < 0.0001, AUC-ROC = 79.0, and in LUMC median 262 versus 212, P < 0.0001, AUC-ROC = 53.6. Threshold-1 (advice to consult a primary care doctor) was specific in two of the three centers (specificity 0.72, 0.87 and 0.23) and considerably more sensitive than Threshold-2 (sensitivity 0.67, 0.61 and 0.67) in KI, EUMC and LUMC, respectively. Threshold-2 was very specific in all three centers but not very sensitive: specificity 1.0, 0.96 and 0.91; sensitivity 0.05, 0.07 and 0.14 in KI, EUMC and LUMC, respectively.
Conclusions: Rheumatic? is a web-based, patient-centered, multilingual diagnostic tool capable of differentiating immune-mediated rheumatic conditions from other musculoskeletal problems. The scoring system might be further optimized, for which we will perform a prospective study.
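
For readers unfamiliar with the threshold analysis reported above, the sketch below shows how sensitivity and specificity can be derived once each patient has a total score and an outcome label. The threshold values, scores, and outcomes are invented for illustration; this is not the Rheumatic? scoring algorithm.

```python
# Minimal sketch: sensitivity and specificity of a score threshold, in the
# spirit of the Threshold-1 / Threshold-2 analysis above. All numbers are
# invented; this is not the Rheumatic? scoring algorithm.
import numpy as np

def test_characteristics(scores, outcomes, threshold):
    """Sensitivity and specificity when 'score >= threshold' flags a case."""
    scores = np.asarray(scores, dtype=float)
    outcomes = np.asarray(outcomes, dtype=bool)   # True = immune-mediated disease
    flagged = scores >= threshold
    sensitivity = (flagged & outcomes).sum() / outcomes.sum()
    specificity = (~flagged & ~outcomes).sum() / (~outcomes).sum()
    return sensitivity, specificity

# Hypothetical total scores and outcomes for eight patients
scores = [262, 212, 191, 107, 245, 163, 230, 150]
outcomes = [1, 0, 1, 0, 1, 0, 1, 0]

for name, cutoff in [("Threshold-1", 180), ("Threshold-2", 250)]:
    sens, spec = test_characteristics(scores, outcomes, cutoff)
    print(f"{name} (score >= {cutoff}): sensitivity={sens:.2f}, specificity={spec:.2f}")
```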

2021 ◽  
Vol 80 (Suppl 1) ◽  
pp. 87.1-88
Author(s):  
R. Knevel ◽  
J. Knitza ◽  
A. Hensvold ◽  
A. Circiumaru ◽  
T. Bruce ◽  
...  

Background: Digital diagnostic decision support tools promise to accelerate diagnosis and increase health care efficiency in rheumatology. Rheumatic? is an online tool developed by specialists in rheumatology and general medicine together with patients and patient organizations for individuals suspecting a rheumatic disease.1,2 The tool can be used by people who suspect a rheumatic disease and provides individual advice on whether to seek further health care.
Objectives: We tested Rheumatic? for its ability to differentiate symptoms of immune-mediated diseases from other rheumatic and musculoskeletal complaints and disorders in patients visiting rheumatology clinics.
Methods: The performance of Rheumatic? was tested using data from 175 patients from three university rheumatology centers covering two different settings:
A. Risk-RA phase setting. Here, we tested whether Rheumatic? could predict the development of arthritis in 50 at-risk individuals with musculoskeletal complaints and anti-citrullinated protein antibody positivity from the KI (Karolinska Institutet).
B. Early arthritis setting. Here, we tested whether Rheumatic? could predict the development of an immune-mediated rheumatic disease in i) EUMC (Erlangen), n=52 patients, and ii) LUMC (Leiden), n=73 patients.
In each setting, we examined the discriminative power of the total score with the Wilcoxon rank test and the area under the receiver-operating-characteristic curve (AUC-ROC).
Results: In setting A, the total test score clearly differentiated between individuals who did or did not develop arthritis, median 245 versus 163, P < 0.0001, AUC-ROC = 75.3 (Figure 1). Within patients with arthritis, the Rheumatic? total score was also significantly higher in those who developed an immune-mediated arthritic disease than in those who did not: median score 191 versus 107 in EUMC (P < 0.0001, AUC-ROC = 79.0) and 262 versus 212 in LUMC (P < 0.0001, AUC-ROC = 53.6).
Figure 1. (Area under) the receiver operating characteristic curve for the total Rheumatic? score.
Conclusion: Rheumatic? is a web-based, patient-centered, multilingual diagnostic tool capable of differentiating immune-mediated rheumatic conditions from other musculoskeletal problems. A next subject of research is how the tool performs in a population-wide setting.
References:
[1] Knitza J, et al. Mobile Health in Rheumatology: A Patient Survey Study Exploring Usage, Preferences, Barriers and eHealth Literacy. JMIR mHealth and uHealth. 2020.
[2] https://rheumatic.elsa.science/en/
Acknowledgements: This project has received funding from EIT Health. EIT Health is supported by the European Institute of Innovation and Technology (EIT), a body of the European Union that receives support from the European Union’s Horizon 2020 Research and Innovation program. This project has also received funding from the Innovative Medicines Initiative 2 Joint Undertaking under grant agreement No 777357, RTCure.
Disclosure of Interests: Rachel Knevel: None declared, Johannes Knitza: None declared, Aase Hensvold: None declared, Alexandra Circiumaru: None declared, Tor Bruce: Employee of Ocean Observations, Sebastian Evans: Employee of Elsa Science, Tjardo Maarseveen: None declared, Marc Maurits: None declared, Liesbeth Beaart-van de Voorde: None declared, David Simon: None declared, Arnd Kleyer: None declared, Martina Johannesson: None declared, Georg Schett: None declared, Thomas Huizinga: None declared, Sofia Svanteson: Employee of Elsa Science, Alexandra Lindfors: Employee of Ocean Observations, Lars Klareskog: None declared, Anca Catrina: None declared.
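
As a rough illustration of the two statistics used above (the Wilcoxon rank test on the total score and the AUC-ROC), here is a minimal Python sketch. The scores and outcome labels are made up; this is not the study's analysis code.

```python
# Minimal sketch: comparing total Rheumatic? scores between outcome groups
# and computing the AUC-ROC, as reported in the abstract above.
# The score values below are invented for illustration only.
import numpy as np
from scipy.stats import mannwhitneyu          # Wilcoxon rank-sum (Mann-Whitney U) test
from sklearn.metrics import roc_auc_score

# Hypothetical total scores for patients who did / did not develop arthritis
scores_progressed = np.array([245, 260, 230, 250, 240])
scores_not_progressed = np.array([163, 150, 180, 170, 155])

# Two-sided rank-sum test on the total score
stat, p_value = mannwhitneyu(scores_progressed, scores_not_progressed,
                             alternative="two-sided")

# AUC-ROC: labels are 1 for patients who developed arthritis, 0 otherwise
y_true = np.r_[np.ones(len(scores_progressed)), np.zeros(len(scores_not_progressed))]
y_score = np.r_[scores_progressed, scores_not_progressed]
auc = roc_auc_score(y_true, y_score)

print(f"rank-sum p-value: {p_value:.4f}, AUC-ROC: {auc:.3f}")
```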


2012 ◽  
Vol 4 (2) ◽  
pp. 227-231 ◽  
Author(s):  
Mitchell J. Feldman ◽  
Edward P. Hoffer ◽  
G. Octo Barnett ◽  
Richard J. Kim ◽  
Kathleen T. Famiglietti ◽  
...  

Abstract
Background: Computer-based medical diagnostic decision support systems have been used for decades, initially as stand-alone applications. More recent versions have been tested for their effectiveness in enhancing the diagnostic ability of clinicians.
Objective: To determine if viewing a rank-ordered list of diagnostic possibilities from a medical diagnostic decision support system improves residents' differential diagnoses or management plans.
Method: Twenty first-year internal medicine residents at Massachusetts General Hospital viewed 3 deidentified case descriptions of real patients. All residents completed a web-based questionnaire, entering the differential diagnosis and management plan before and after seeing the diagnostic decision support system's suggested list of diseases. In all 3 exercises, the actual case diagnosis was first on the system's list. Each resident served as his or her own control (pretest/posttest).
Results: For all 3 cases, a substantial percentage of residents changed their primary considered diagnosis after reviewing the system's suggested diagnoses, and a number of residents who had not initially listed a “further action” (laboratory test, imaging study, or referral) added or changed their management options after using the system. Many residents (20% to 65%, depending on the case) improved their differential diagnosis from before to after viewing the system's suggestions. The average time to complete all 3 cases was 15.4 minutes. Most residents thought that viewing the medical diagnostic decision support system's list of suggestions was helpful.
Conclusion: Viewing a rank-ordered list of diagnostic possibilities from a diagnostic decision support tool had a significant beneficial effect on the quality of first-year medicine residents' differential diagnoses and management plans.
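
The pretest/posttest design above pairs each resident's answer before and after seeing the system's list. The sketch below summarizes such paired indicator data and applies an exact McNemar test on the discordant pairs, which is one common choice for this kind of comparison, not necessarily the statistic the authors used. All data are invented.

```python
# Minimal sketch of a pretest/posttest comparison in which each resident is
# scored before and after seeing the decision support system's list.
# The 0/1 indicators below (1 = correct diagnosis listed) are invented.
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

pre  = np.array([0, 0, 1, 0, 1, 1, 0, 0, 1, 0])   # before viewing the list
post = np.array([1, 0, 1, 1, 1, 1, 0, 1, 1, 0])   # after viewing the list

improved = int(np.sum((pre == 0) & (post == 1)))   # gained the correct diagnosis
worsened = int(np.sum((pre == 1) & (post == 0)))   # lost the correct diagnosis
print(f"improved: {improved}/{len(pre)}, worsened: {worsened}/{len(pre)}")

# Exact McNemar test on the 2x2 table of paired pre/post outcomes
table = [[int(np.sum((pre == 1) & (post == 1))), worsened],
         [improved, int(np.sum((pre == 0) & (post == 0)))]]
print(f"McNemar exact p-value: {mcnemar(table, exact=True).pvalue:.3f}")
```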


2007 ◽  
Vol 13 (1_suppl) ◽  
pp. 44-46 ◽  
Author(s):  
T O Bola Odufuwa ◽  
Lola Solebo ◽  
Sancy Low

Isabel is a Web-based, diagnostic decision support tool designed to provide a differential diagnosis of a patient's condition for interpretation by a qualified health-care professional. We investigated the accuracy of the Isabel system in ophthalmic primary care. A total of 100 case histories were prospectively collected from ophthalmic primary care clinic records. The patient demographics and clinical features of each case were then entered into the Isabel system, and the results generated by the decision support tool for each case were compared with the diagnosis reached by the ophthalmic team. Of the 100 cases in the dataset, there was no matching diagnosis in the first 2 pages of Isabel results in 40 cases. Of the 60 cases in which there was a matching diagnosis on the first 2 pages of results, 31 had a >50% match between the terms of the query and the Isabel diagnosis reminder system's database. It remains to be established whether this is high enough to be clinically useful in a practice setting. Inclusion of specific ophthalmic knowledge would probably improve the accuracy of the Isabel clinical diagnostic decision support system.
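
The “>50% match between the terms of the query and the Isabel diagnosis reminder system's database” reported above can be read as a simple term-overlap measure. The sketch below shows one way such an overlap could be computed; the terms are invented and this is not the Isabel system's actual matching logic.

```python
# Minimal sketch: fraction of the entered clinical terms that also appear in
# a candidate suggestion's term set. Terms are hypothetical examples.
def term_match_fraction(query_terms, suggestion_terms):
    """Fraction of the query's terms that also appear in the suggestion."""
    query = {t.lower() for t in query_terms}
    suggestion = {t.lower() for t in suggestion_terms}
    return len(query & suggestion) / len(query) if query else 0.0

query = ["red eye", "photophobia", "blurred vision", "pain"]
suggestion = ["red eye", "pain", "photophobia", "watering"]
print(term_match_fraction(query, suggestion))   # 0.75 -> would count as a >50% match
```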


2020 ◽  
Vol 10 (3) ◽  
pp. 103
Author(s):  
David Gallagher ◽  
Congwen Zhao ◽  
Amanda Brucker ◽  
Jennifer Massengill ◽  
Patricia Kramer ◽  
...  

Unplanned hospital readmissions represent a significant health care value problem with high costs and poor quality of care. A significant percentage of readmissions could be prevented if clinical inpatient teams were better able to predict which patients were at higher risk for readmission. Many of the current clinical decision support models that predict readmissions are not configured to integrate closely with the electronic health record or to alert providers in real time, prior to discharge, about a patient's risk for readmission. We report on the implementation and monitoring of the Epic electronic health record's "Unplanned readmission model version 1" over 2 years, from 1/1/2018 to 12/31/2019. For patients discharged during this time, the predictive capability to discern high-risk discharges was reflected in an AUC/C-statistic at our three hospitals of 0.716–0.760 for all patients and 0.676–0.695 for general medicine patients. The model had a positive predictive value ranging from 0.217–0.248 for all patients. We also present our methods for monitoring the model over time for trend changes, as well as common readmission-reduction strategies triggered by the score.
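
As a small illustration of the metrics reported above (C-statistic and positive predictive value at an alerting cutoff), the following sketch uses invented risk scores and outcomes; the cutoff of 0.40 is an assumption for the example, not the Epic model's configuration.

```python
# Minimal sketch: C-statistic (AUC) and positive predictive value for a
# readmission risk model at a fixed alerting cutoff. Scores, outcomes, and
# the cutoff are illustrative assumptions, not the Epic model's output.
import numpy as np
from sklearn.metrics import roc_auc_score, precision_score

readmitted = np.array([0, 1, 0, 0, 1, 0, 1, 0, 0, 0])   # 30-day unplanned readmission
risk_score = np.array([0.12, 0.61, 0.25, 0.08, 0.47, 0.33, 0.72, 0.19, 0.05, 0.41])

c_statistic = roc_auc_score(readmitted, risk_score)

# PPV of the alert: among discharges flagged as high risk, how many readmit?
high_risk_cutoff = 0.40
flagged = (risk_score >= high_risk_cutoff).astype(int)
ppv = precision_score(readmitted, flagged)

print(f"C-statistic={c_statistic:.3f}, PPV at cutoff {high_risk_cutoff}: {ppv:.3f}")
```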


2021 ◽  
Vol 23 (1) ◽  
Author(s):  
Johannes Knitza ◽  
Koray Tascilar ◽  
Eva Gruber ◽  
Hannah Kaletta ◽  
Melanie Hagen ◽  
...  

Abstract
Background: An increasing number of diagnostic decision support systems (DDSS) exist to support patients and physicians in establishing the correct diagnosis as early as possible. However, little evidence exists to support the effectiveness of these DDSS. The objectives were to compare the diagnostic accuracy of medical students, with and without the use of a DDSS, and the diagnostic accuracy of the DDSS itself, regarding typical rheumatic diseases, and to analyze the user experience.
Methods: A total of 102 medical students were openly recruited from a university hospital and randomized (unblinded) to a control group (CG) and an intervention group (IG) that used a DDSS (Ada – Your Health Guide) to create an ordered diagnostic hypothesis list for three rheumatic case vignettes. Diagnostic accuracy, measured as the presence of the correct diagnosis first or at all on the hypothesis list, was the main outcome measure and was evaluated for CG, IG, and DDSS.
Results: The correct diagnosis was ranked first (or was present at all) in CG, IG, and DDSS in 37% (40%), 47% (55%), and 29% (43%) for the first case; 87% (94%), 84% (100%), and 51% (98%) for the second case; and 35% (59%), 20% (51%), and 4% (51%) for the third case, respectively. No significant benefit of using the DDSS could be observed. In a substantial number of situations, the mean probabilities reported by the DDSS for incorrect diagnoses were actually higher than for correct diagnoses, and students accepted false DDSS diagnostic suggestions. DDSS symptom entry varied greatly and was often incomplete or false. No significant correlation between the number of symptoms extracted and diagnostic accuracy was seen. It took on average 7 min longer to solve a case using the DDSS. In IG, 61% of students, compared to 90% in CG, stated that they could imagine using the DDSS in their future clinical work life.
Conclusions: The diagnostic accuracy of medical students was superior to that of the DDSS, and its usage did not significantly improve students' diagnostic accuracy. DDSS usage was time-consuming and may be misleading due to prompting of wrong diagnoses and probabilities.
Trial registration: DRKS.de, DRKS00024433. Retrospectively registered on February 5, 2021.
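
The outcome measure above (correct diagnosis ranked first versus present anywhere on the list) corresponds to top-1 accuracy and list-presence on a ranked hypothesis list. The sketch below shows a straightforward way to score it; the cases and diagnosis names are invented for illustration.

```python
# Minimal sketch: scoring an ordered hypothesis list as "ranked first" vs.
# "present anywhere", as in the outcome measure described above.
def score_case(hypothesis_list, correct_diagnosis):
    """Return (top1, present) flags for one ranked differential."""
    ranked = [d.lower() for d in hypothesis_list]
    target = correct_diagnosis.lower()
    top1 = bool(ranked) and ranked[0] == target
    present = target in ranked
    return top1, present

# Invented example cases: (ranked hypothesis list, correct diagnosis)
cases = [
    (["rheumatoid arthritis", "psoriatic arthritis", "gout"], "rheumatoid arthritis"),
    (["osteoarthritis", "systemic lupus erythematosus"], "systemic lupus erythematosus"),
    (["fibromyalgia", "polymyalgia rheumatica"], "giant cell arteritis"),
]
results = [score_case(hyps, dx) for hyps, dx in cases]
top1_rate = sum(t for t, _ in results) / len(results)
present_rate = sum(p for _, p in results) / len(results)
print(f"ranked first: {top1_rate:.0%}, present at all: {present_rate:.0%}")
```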

