Incidence of diagnostic errors in unplanned hospitalized patients using an automated medical history-taking system with differential diagnosis generator: retrospective observational study (Preprint)

2021 ◽  
Author(s):  
Ren Kawamura ◽  
Yukinori Harada ◽  
Shu Sugimoto ◽  
Yuichiro Nagase ◽  
Shinichi Katsukura ◽  
...  

BACKGROUND Automated medical history-taking systems that generate differential diagnosis lists have been suggested to contribute to improved diagnostic accuracy. However, the effect of such systems on diagnostic errors in clinical practice remains unknown. OBJECTIVE This study aimed to assess the incidence of diagnostic errors in an outpatient department where an artificial intelligence (AI)-driven automated medical history-taking system that generates differential diagnosis lists was implemented in clinical practice. METHODS We conducted a retrospective observational study using data from a community hospital in Japan. We included patients aged 20 years and older who used the AI-driven automated medical history-taking system in the outpatient department of internal medicine, whose index visit was between July 1, 2019, and June 30, 2020, and who had an unplanned hospitalization within 14 days. The primary endpoint was the incidence of diagnostic errors, which were detected by at least two independent reviewers using the Revised Safer Dx Instrument. To evaluate the effect of the AI-generated differential diagnosis list on the incidence of diagnostic errors, we compared the incidence between cases in which the AI-generated differential diagnosis list included the final diagnosis and cases in which it did not; Fisher’s exact test was used for this comparison. For cases with confirmed diagnostic errors, the three reviewers further reviewed and discussed each case to identify contributing factors, using the Safer Dx Process Breakdown Supplement as a reference. RESULTS A total of 146 patients were analyzed. The final diagnosis was confirmed in 138 patients, and the AI-generated differential diagnosis list included the final diagnosis in 69 of them. Diagnostic errors occurred in 16 of 146 patients (11.0%; 95% confidence interval, 6.4%-17.2%). Although the difference was not statistically significant, the incidence of diagnostic errors was lower in cases in which the final diagnosis was included in the AI-generated differential diagnosis list than in cases in which it was not (7.2% vs. 15.9%, P=.18). Regarding the quality of the clinical history taken by the AI, the final diagnosis could be readily inferred from the AI-taken clinical history alone in 11 of the 16 error cases (68.8%). CONCLUSIONS The incidence of diagnostic errors among internal medicine outpatients who used an automated medical history-taking system that generates differential diagnosis lists appeared lower than previously reported incidences of diagnostic errors. This result suggests that implementing an automated medical history-taking system that generates differential diagnosis lists could benefit diagnostic safety in the outpatient department of internal medicine.
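As a rough illustration of the comparison described in this abstract, the following Python sketch runs Fisher’s exact test on a 2x2 table back-calculated from the reported percentages (approximately 5/69 errors when the AI list contained the final diagnosis vs. 11/69 when it did not) and an exact binomial confidence interval for the overall error rate. The counts are assumptions inferred from the abstract, not the study’s raw data.

```python
# Minimal sketch with assumed counts inferred from the reported percentages
# (not the study's raw data): Fisher's exact test for diagnostic-error
# incidence by whether the AI list contained the final diagnosis, plus an
# exact 95% CI for the overall error rate of 16/146.
from scipy.stats import fisher_exact, binomtest

# Rows: AI list contained final diagnosis (yes/no); columns: error / no error
table = [[5, 64],    # ~7.2% of 69 patients
         [11, 58]]   # ~15.9% of 69 patients
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"Fisher's exact test: OR={odds_ratio:.2f}, P={p_value:.2f}")

# Exact (Clopper-Pearson) 95% CI for the overall incidence of 16/146
ci = binomtest(16, 146).proportion_ci(confidence_level=0.95, method="exact")
print(f"Overall incidence: {16/146:.1%} (95% CI {ci.low:.1%}-{ci.high:.1%})")
```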

2019 ◽  
Vol 18 (01) ◽  
pp. 007-012
Author(s):  
Jatinder S. Goraya

Abstract Spells are a common clinical problem in children and can be broadly classified into epileptic and nonepileptic spells. Epileptic spells are clinical events that result from abnormal, excessive, and synchronous electrical activity of the cortical neurons. All other spells are included under the category of nonepileptic events. Precise differentiation between epileptic and nonepileptic spells, and their final characterization, depend chiefly on obtaining a detailed account of the episode from the patient and/or witness. Physical and neurological examinations are generally non-revealing. In clinical practice, however, misdiagnosis of nonepileptic spells as epilepsy is fairly common and often is a result of incomplete history-taking. Explicit guidelines to elicit a thorough history in children who present with spells are lacking. The purpose of this article is to describe an instinctive and easy-to-remember approach to clinical history-taking in children with spells so as to minimize diagnostic errors.


2019 ◽  
Author(s):  
Mehmet Guven Gunver ◽  
Eray Yurtseven

UNSTRUCTURED Medical history taking is one of the most difficult topics in medicine. The ways in which a patient’s medical history is taken and interpreted vary greatly and may be affected by clinician bias. For this reason, the process is often regarded as an art rather than a science. In this study, we sought to determine how clinicians categorize the outcome of medical history taking in relation to the patient’s maternal and paternal disease history, the patient’s own disease history, and their current occupation [1]. Clinicians from eighteen (18) university hospitals dispersed across fourteen (14) provinces in Turkey were invited to participate in the survey. This sample comprised 1270 clinicians representing the specializations of otology, general surgery, internal medicine, cardiology, pulmonology, and psychiatry. The researchers obtained responses from seventy-seven (77) clinicians, or approximately six percent (6%).



Author(s):  
Yukinori Harada ◽  
Shinichi Katsukura ◽  
Ren Kawamura ◽  
Taro Shimizu

Background: Previous research has shown that artificial intelligence (AI)-driven automated medical-history-taking systems combined with AI-driven differential-diagnosis lists improve physicians’ diagnostic accuracy. However, AI-driven differential-diagnosis lists can also introduce errors of omission (physicians reject a correct diagnosis suggested by the AI) and commission (physicians accept an incorrect diagnosis suggested by the AI); the efficacy of AI-driven automated medical-history-taking systems without AI-driven differential-diagnosis lists on physicians’ diagnostic accuracy therefore also needs to be evaluated. Objective: The present study evaluated the efficacy of AI-driven automated medical-history-taking systems with or without AI-driven differential-diagnosis lists on physicians’ diagnostic accuracy. Methods: This randomized controlled study was conducted in January 2021 and included 22 physicians working at a university hospital. Participants read 16 clinical vignettes consisting of AI-taken medical histories of real patients, for each of which the AI generated up to three differential diagnoses. Participants were divided into two groups: with and without access to the AI-driven differential-diagnosis list. Results: There was no significant difference in diagnostic accuracy between the two groups (57.4% vs. 56.3%, respectively; p = 0.91). Vignettes in which the AI-generated list included the correct diagnosis showed the greatest positive effect on physicians’ diagnostic accuracy (adjusted odds ratio 7.68; 95% CI 4.68-12.58; p < 0.001). In the group with AI-driven differential-diagnosis lists, 15.9% of diagnoses were omission errors and 14.8% were commission errors. Conclusions: Physicians’ diagnostic accuracy using the AI-driven automated medical history did not differ between the groups with and without AI-driven differential-diagnosis lists.
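The abstract reports an adjusted odds ratio for vignettes whose AI-generated list included the correct diagnosis, which implies a regression model on vignette-level responses clustered within physicians. The study’s exact model is not specified here; the sketch below shows one plausible approach (a logistic GEE clustered by physician) on synthetic data. The variable names, data, and model choice are illustrative assumptions, not the study’s actual analysis.

```python
# Illustrative sketch only: a logistic GEE clustered by physician on synthetic
# vignette-level responses, estimating an adjusted odds ratio for "correct
# diagnosis present in the AI list". Assumed setup: 22 physicians x 16 vignettes.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_phys, n_vign = 22, 16
phys_id = np.repeat(np.arange(n_phys), n_vign)          # physician identifier
group = (phys_id < 11).astype(int)                      # 1 = shown the AI list
ai_correct = np.tile(rng.integers(0, 2, n_vign), n_phys)  # AI list contains correct dx

# Synthetic outcome: accuracy is higher when the AI list contains the correct diagnosis
logit = -0.3 + 2.0 * ai_correct + 0.05 * group
correct = rng.binomial(1, 1 / (1 + np.exp(-logit)))

df = pd.DataFrame({"correct": correct, "group": group,
                   "ai_correct": ai_correct, "phys_id": phys_id})

# GEE logistic model with exchangeable correlation within each physician
model = smf.gee("correct ~ group + ai_correct", groups="phys_id", data=df,
                family=sm.families.Binomial(),
                cov_struct=sm.cov_struct.Exchangeable()).fit()
print(np.exp(model.params))   # adjusted odds ratios
print(model.summary())
```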


10.2196/21056 ◽  
2020 ◽  
Vol 8 (8) ◽  
pp. e21056
Author(s):  
Yukinori Harada ◽  
Taro Shimizu

Background Patient waiting time at outpatient departments is directly related to patient satisfaction and quality of care, particularly in patients visiting the general internal medicine outpatient departments for the first time. Moreover, reducing wait time from arrival in the clinic to the initiation of an examination is key to reducing patients’ anxiety. The use of automated medical history–taking systems in general internal medicine outpatient departments is a promising strategy to reduce waiting times. Recently, Ubie Inc in Japan developed AI Monshin, an artificial intelligence–based, automated medical history–taking system for general internal medicine outpatient departments. Objective We hypothesized that replacing the use of handwritten self-administered questionnaires with the use of AI Monshin would reduce waiting times in general internal medicine outpatient departments. Therefore, we conducted this study to examine whether the use of AI Monshin reduced patient waiting times. Methods We retrospectively analyzed the waiting times of patients visiting the general internal medicine outpatient department at a Japanese community hospital without an appointment from April 2017 to April 2020. AI Monshin was implemented in April 2019. We compared the median waiting time before and after implementation by conducting an interrupted time-series analysis of the median waiting time per month. We also conducted supplementary analyses to explain the main results. Results We analyzed 21,615 visits. The median waiting time after AI Monshin implementation (74.4 minutes, IQR 57.1) was not significantly different from that before AI Monshin implementation (74.3 minutes, IQR 63.7) (P=.12). In the interrupted time-series analysis, the underlying linear time trend (–0.4 minutes per month; P=.06; 95% CI –0.9 to 0.02), level change (40.6 minutes; P=.09; 95% CI –5.8 to 87.0), and slope change (–1.1 minutes per month; P=.16; 95% CI –2.7 to 0.4) were not statistically significant. In a supplemental analysis of data from 9054 of 21,615 visits (41.9%), the median examination time after AI Monshin implementation (6.0 minutes, IQR 5.2) was slightly but significantly longer than that before AI Monshin implementation (5.7 minutes, IQR 5.0) (P=.003). Conclusions The implementation of an artificial intelligence–based, automated medical history–taking system did not reduce waiting time for patients visiting the general internal medicine outpatient department without an appointment, and there was a slight increase in the examination time after implementation; however, the system may have enhanced the quality of care by supporting the optimization of staff assignments.
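The interrupted time-series analysis described in this abstract is a segmented regression on the monthly median waiting time. The following sketch, run on synthetic monthly data, illustrates the underlying trend, level-change, and slope-change terms; the month count, change point, simulated values, and variable names are assumptions for illustration, not the study’s data or code.

```python
# Segmented regression for an interrupted time series, on synthetic data.
# Assumptions for illustration only: 37 monthly medians (Apr 2017-Apr 2020)
# with the intervention (AI Monshin) starting at month index 24 (Apr 2019).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_months = 37
t = np.arange(n_months)                      # underlying linear time trend
post = (t >= 24).astype(int)                 # 1 after implementation
t_post = np.where(post == 1, t - 24, 0)      # months since implementation

# Synthetic median waiting times in minutes
wait = 75 - 0.4 * t + 5 * post - 1.0 * t_post + rng.normal(0, 5, n_months)
df = pd.DataFrame({"wait": wait, "t": t, "post": post, "t_post": t_post})

# Coefficients map to the abstract's estimands: t = underlying trend,
# post = level change at implementation, t_post = slope change afterward.
fit = smf.ols("wait ~ t + post + t_post", data=df).fit()
print(fit.summary())
```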


2021 ◽  
Vol 4 (1) ◽  
Author(s):  
Morgan Decker ◽  
Jacob Graham ◽  
Mark Stephens

Issue: Medical education traditionally focuses on basic science during the first two years of medical school. To “flip” this model, the Penn State College of Medicine has introduced an inquiry-based educational strategy that exposes students to the challenges of patient care immediately upon their arrival. To engage students in a process that promotes clinical reasoning, we have modified an Analytic Decision Game (ADG) called “EpiCentre” to address a notional public health crisis facing Centre County, Pennsylvania. Methods: In phase 1 of the activity, students are provided with materials describing the ethnography and infrastructure of Centre County. Students are divided into three communities (teams) to create a strengths, weaknesses, opportunities, and threats (SWOT) analysis of local healthcare capabilities. In phase 2, students meet with a standardized patient presenting with a targeted medical complaint. They are pushed to think about their approach to taking a medical history and asked to generate a differential diagnosis. In phases 3 and 4, students face the challenge of triaging a number of patients with similar medical complaints and creating a plan to deal with a likely outbreak scenario. Findings: Students have found the EpiCentre activity to be worthwhile in multiple contexts. They have been able to develop an initial approach to medical history taking and creating a differential diagnosis. They have formulated an approach to the recognition and control of a potential public health crisis. An additional benefit of the exercise has been the overarching theme of teamwork. Students begin the activity (occurring in the first few weeks after arrival at medical school) as relative strangers and quickly develop a sense of camaraderie and mission focus. Conclusions: The EpiCentre ADG has been a successful activity for introducing medical students to Centre County in the context of healthcare infrastructure, an approach to medical history taking, disaster planning, clinical reasoning, and team-building. Implications: EpiCentre derives from an interprofessional collaboration between the College of Medicine and the College of Information Sciences and Technology. It represents one of potentially limitless opportunities to engage students and faculty from multiple disciplines to address challenges of public health within the academic setting.


Diagnosis ◽  
2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Taro Shimizu

Abstract Diagnostic errors are an internationally recognized patient safety concern, and leading causes are faulty data gathering and faulty information processing. Obtaining a full and accurate history from the patient is the foundation for timely and accurate diagnosis. A key concept underlying ideal history acquisition is “history clarification,” meaning that the history is clarified to be depicted as clearly as a video, with the chronology being accurately reproduced. A novel approach is presented to improve history-taking, involving six dimensions: Courtesy, Control, Compassion, Curiosity, Clear mind, and Concentration, the ‘6 C’s’. We report a case that illustrates how the 6C approach can improve diagnosis, especially in relation to artificial intelligence tools that assist with differential diagnosis.


Author(s):  
Christine Arnold ◽  
Sarah Berger ◽  
Nadine Gronewold ◽  
Denise Schwabe ◽  
Burkhard Götsch ◽  
...  
