The use of computerized clinical decision support systems in emergency care: a substantive review of the literature

2016, Vol 24 (3), pp. 655-668
Author(s): Paula Bennett, Nicholas R Hardiker

Objectives: This paper provides a substantive review of the international literature evaluating the impact of computerized clinical decision support systems (CCDSSs) on the care of emergency department (ED) patients.

Materials and Methods: A literature search was conducted using Medline, the Cumulative Index of Nursing and Allied Health Literature (CINAHL), Embase, and gray literature. Studies were selected if they compared the use of a CCDSS with usual care in a face-to-face clinical interaction in an ED.

Results: Of the 23 studies included, approximately half demonstrated a statistically significant positive impact of CCDSS use on aspects of clinical care. The remaining studies showed small improvements, mainly around documentation. However, the methodological quality of the studies was poor, with few or no controls to mitigate confounding variables. The risk of bias was high in all but 6 studies.

Discussion: The ED environment is complex and does not lend itself to robust quantitative designs such as randomized controlled trials. The quality of the research in approximately 75% of the studies was poor, so no conclusions can be drawn from their results. However, the more robustly designed studies show evidence of a positive impact of CCDSSs on ED patient care.

Conclusion: This is the first review to consider the role of CCDSSs in emergency care and to expose the limitations of the research in this area. CCDSSs may offer some solutions to the current challenges in EDs, but further high-quality research is needed to better understand what technological solutions can offer clinicians and patients.

2020, Vol 20 (1)
Author(s): Julia Amann, Alessandro Blasimme, Effy Vayena, Dietmar Frey, et al.

Background: Explainability is one of the most heavily debated topics when it comes to the application of artificial intelligence (AI) in healthcare. Even though AI-driven systems have been shown to outperform humans in certain analytical tasks, the lack of explainability continues to spark criticism. Yet explainability is not a purely technological issue; instead, it invokes a host of medical, legal, ethical, and societal questions that require thorough exploration. This paper provides a comprehensive assessment of the role of explainability in medical AI and offers an ethical evaluation of what explainability means for the adoption of AI-driven tools into clinical practice.

Methods: Taking AI-based clinical decision support systems as a case in point, we adopted a multidisciplinary approach to analyze the relevance of explainability for medical AI from the technological, legal, medical, and patient perspectives. Drawing on the findings of this conceptual analysis, we then conducted an ethical assessment using the "Principles of Biomedical Ethics" by Beauchamp and Childress (autonomy, beneficence, nonmaleficence, and justice) as an analytical framework to determine the need for explainability in medical AI.

Results: Each of the domains highlights a different set of core considerations and values that are relevant for understanding the role of explainability in clinical practice. From the technological point of view, explainability has to be considered both in terms of how it can be achieved and what is beneficial from a development perspective. From the legal perspective, we identified informed consent, certification and approval as medical devices, and liability as core touchpoints for explainability. Both the medical and patient perspectives emphasize the importance of considering the interplay between human actors and medical AI. We conclude that omitting explainability in clinical decision support systems poses a threat to core ethical values in medicine and may have detrimental consequences for individual and public health.

Conclusions: To ensure that medical AI lives up to its promises, there is a need to sensitize developers, healthcare professionals, and legislators to the challenges and limitations of opaque algorithms in medical AI and to foster multidisciplinary collaboration moving forward.
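As a purely illustrative sketch (not drawn from the paper), the snippet below shows one common model-agnostic approach to the technological side of explainability that the abstract alludes to: permutation feature importance, which ranks a black-box model's inputs by how much held-out performance drops when each input is shuffled. The feature names and synthetic data are hypothetical stand-ins for clinical variables, not anything the authors used.

```python
# Illustrative sketch, assuming a generic tabular clinical classifier.
# Permutation importance is one post-hoc way explainability "can be achieved":
# shuffle one feature at a time and measure the resulting drop in accuracy.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
features = ["age", "systolic_bp", "heart_rate", "lactate"]  # hypothetical inputs
X = rng.normal(size=(500, len(features)))
# Synthetic outcome driven mostly by "lactate" and "heart_rate", so the
# explanation has a known ground truth to recover.
y = (2.0 * X[:, 3] + 1.0 * X[:, 2] + rng.normal(scale=0.5, size=500)) > 0

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Larger mean drops in score indicate more influential inputs.
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for name, mean, std in zip(features, result.importances_mean, result.importances_std):
    print(f"{name:12s} importance = {mean:.3f} +/- {std:.3f}")
```

Post-hoc attributions like this approximate, rather than open, an opaque model, which is part of why the paper treats explainability as more than a technical fix.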

