Author(s):  
David José Murteira Mendes ◽  
Irene Pimenta Rodrigues ◽  
César Fonseca

A question answering system to help clinical practitioners in a cardiovascular healthcare environment interface with clinical decision support systems can be built using an extended discourse representation structure, CIDERS, and an ontology framework, the Ontology for General Clinical Practice. CIDERS extends the well-known structures of DRT (discourse representation theory), going beyond the representation of a single text to embrace the general clinical history of a given patient as represented in an ontology. The Ontology for General Clinical Practice improves on the currently available state-of-the-art ontologies for medical science and for the cardiovascular specialty. The chapter presents the scientific and philosophical reasons for its present dual structure, with a highly expressive (SHOIN) terminological base (TBox) and an efficiently computable (EL++) assertional knowledge base (ABox). To be able to apply current reasoning techniques and methodologies, the authors made a thorough inventory of the biomedical ontologies currently available in OWL 2 format.
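The abstract names two technical ingredients: a DRT-style discourse structure (CIDERS) and an ontology split into a SHOIN TBox and an EL++ ABox. As a rough illustration of the first ingredient only, the sketch below models a classical DRS (discourse referents plus conditions) and a hypothetical CIDERS-like extension that anchors referents to individuals in the patient's ontology-encoded history. All class names and the IRI are invented for illustration; the abstract does not specify the actual structure.

```python
# Minimal, self-contained sketch of a DRT-style discourse representation
# structure (DRS) extended with links into a patient-history ontology.
# Illustrative only: Referent, Condition, Drs, and Ciders are hypothetical
# names, not the authors' implementation.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Referent:
    """A discourse referent introduced by the text (e.g. 'x' for a patient)."""
    name: str

@dataclass(frozen=True)
class Condition:
    """A predicate over referents, e.g. patient(x) or reports(x, e)."""
    predicate: str
    args: tuple

@dataclass
class Drs:
    """A classical DRS: a set of referents and conditions over them."""
    referents: set = field(default_factory=set)
    conditions: list = field(default_factory=list)

@dataclass
class Ciders(Drs):
    """Hypothetical CIDERS-like extension: the DRS additionally anchors
    discourse referents to individuals in the ontology's ABox."""
    ontology_links: dict = field(default_factory=dict)  # Referent -> ABox IRI

# Usage: represent "The patient reports chest pain."
x, e = Referent("x"), Referent("e")
drs = Ciders(
    referents={x, e},
    conditions=[Condition("patient", (x,)),
                Condition("chest_pain", (e,)),
                Condition("reports", (x, e))],
    ontology_links={x: "http://example.org/abox#patient_42"},  # invented IRI
)
print(drs.ontology_links[x])
```

The point of the extension is visible in the last field: where a plain DRS only relates referents within one text, the anchored version lets a question answering system resolve "the patient" against the clinical history stored in the ABox.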


2021 ◽  
pp. 019459982110045
Author(s):  
Taylor C. Standiford ◽  
Janice L. Farlow ◽  
Michael J. Brenner ◽  
Marisa L. Conte ◽  
Jeffrey E. Terrell

Objective To offer practical, evidence-informed knowledge on clinical decision support systems (CDSSs) and their utility in improving care and reducing costs in otolaryngology–head and neck surgery. This primer on CDSSs introduces clinicians to both the capabilities and the limitations of the technology, reviews the literature on the current state of the field, and seeks to spur further progress in this area. Data Sources PubMed/MEDLINE, Embase, and Web of Science. Review Methods Scoping review of the CDSS literature applicable to otolaryngology clinical practice. Investigators identified articles that incorporated knowledge-based computerized CDSSs to aid clinicians in decision making and workflow; a rule-based sketch of such a system appears after this abstract. Data extraction included level of evidence, Osheroff classification of CDSS intervention type, otolaryngology subspecialty or domain, and impact on provider performance or patient outcomes. Conclusions Of 3191 studies retrieved, 11 articles met formal inclusion criteria. CDSS interventions included guideline or protocol support (n = 8), forms and templates (n = 5), data presentation aids (n = 2), and reactive alerts, reference information, and order sets (n = 1 each); 4 studies combined multiple interventions. The CDSS studies demonstrated effectiveness across diverse domains, including antibiotic stewardship, cancer survivorship, guideline adherence, data capture, cost reduction, and workflow. Implementing CDSSs often involved collaboration with health information technologists. Implications for Practice Although the published literature on CDSSs in otolaryngology is limited, CDSS interventions are proliferating in clinical practice, with roles in preventing medical errors, streamlining workflows, and improving adherence to best practices for head and neck disorders. Clinicians may collaborate with information technologists and health systems scientists to develop, implement, and investigate the impact of CDSSs in otolaryngology.
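For readers new to knowledge-based CDSSs, the sketch below shows the general shape of such an intervention: declarative guideline rules evaluated over a structured patient record, with matching rules producing alerts. Everything here is invented for illustration, including the rule content and thresholds; none of it comes from the reviewed studies, and it is not a clinical recommendation.

```python
# Minimal sketch of a knowledge-based CDSS: guideline logic is kept as
# declarative rules over a structured patient record, and matching rules
# produce alerts for the clinician. Entirely illustrative.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class PatientRecord:
    diagnosis: str
    antibiotic_ordered: bool
    symptom_duration_days: int

@dataclass
class Rule:
    name: str
    applies: Callable[[PatientRecord], bool]
    alert: str

# Hypothetical stewardship rule loosely in the spirit of sinusitis
# guidelines: flag antibiotic orders for short-duration sinusitis.
RULES: List[Rule] = [
    Rule(
        name="acute-sinusitis-antibiotic-check",
        applies=lambda p: (p.diagnosis == "acute sinusitis"
                           and p.antibiotic_ordered
                           and p.symptom_duration_days < 10),
        alert="Antibiotic ordered for acute sinusitis of <10 days' duration; "
              "guidelines favor watchful waiting. Review order?",
    ),
]

def evaluate(record: PatientRecord) -> List[str]:
    """Return the alerts of all rules that fire for this record."""
    return [r.alert for r in RULES if r.applies(record)]

# Usage
print(evaluate(PatientRecord("acute sinusitis", True, 5)))
```

Real deployments embed this pattern in the electronic health record, which is why the review repeatedly notes collaboration with health information technologists during implementation.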


2019 ◽  
Author(s):  
Tim Hahn ◽  
Ulrich Ebner-Priemer ◽  
Andreas Meyer-Lindenberg

With artificial intelligence (AI) technology advancing at breathtaking speed and nearing the application stage in many fields of medicine, the need for regulation that ensures the quality, utility, and security of emerging AI-based clinical decision support systems is becoming increasingly pressing. Here, we suggest a conceptual framework from which to derive requirements for building, validating, deploying, and managing AI-based systems in daily clinical practice.


2020 ◽  
Vol 20 (1) ◽  
Author(s):  
Julia Amann ◽  
Alessandro Blasimme ◽  
Effy Vayena ◽  
Dietmar Frey ◽  
...  

Background Explainability is one of the most heavily debated topics in the application of artificial intelligence (AI) in healthcare. Even though AI-driven systems have been shown to outperform humans in certain analytical tasks, their lack of explainability continues to spark criticism. Yet explainability is not a purely technological issue; it also raises a host of medical, legal, ethical, and societal questions that require thorough exploration. This paper provides a comprehensive assessment of the role of explainability in medical AI and an ethical evaluation of what explainability means for the adoption of AI-driven tools into clinical practice. Methods Taking AI-based clinical decision support systems as a case in point, we adopted a multidisciplinary approach to analyze the relevance of explainability for medical AI from the technological, legal, medical, and patient perspectives. Drawing on the findings of this conceptual analysis, we then conducted an ethical assessment using the "Principles of Biomedical Ethics" by Beauchamp and Childress (autonomy, beneficence, nonmaleficence, and justice) as an analytical framework to determine the need for explainability in medical AI. Results Each of the domains highlights a different set of core considerations and values relevant to understanding the role of explainability in clinical practice. From the technological point of view, explainability must be considered both in terms of how it can be achieved and of what is beneficial from a development perspective. From the legal perspective, we identified informed consent, certification and approval as medical devices, and liability as the core touchpoints for explainability. Both the medical and patient perspectives emphasize the importance of considering the interplay between human actors and medical AI. We conclude that omitting explainability in clinical decision support systems poses a threat to core ethical values in medicine and may have detrimental consequences for individual and public health. Conclusions To ensure that medical AI lives up to its promises, developers, healthcare professionals, and legislators must be sensitized to the challenges and limitations of opaque algorithms in medical AI, and multidisciplinary collaboration must be fostered moving forward.
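As a concrete, if toy, illustration of the technological point above, the sketch below contrasts an interpretable-by-design model, whose learned weights can be read off directly, with an opaque ensemble whose predictions offer no such handle and would need post-hoc explanation (e.g. SHAP or LIME). The data and feature names are synthetic, and this captures only one narrow notion of explainability.

```python
# Toy contrast between an interpretable model (inspectable coefficients)
# and an opaque one. Synthetic data; illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
features = ["age", "systolic_bp", "cholesterol"]  # invented feature names
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

# Interpretable by design: each feature's learned weight is directly readable.
lr = LogisticRegression().fit(X, y)
for name, coef in zip(features, lr.coef_[0]):
    print(f"{name}: weight {coef:+.2f}")

# Opaque: often accurate, but exposes no per-feature weights to read off;
# any explanation must be reconstructed post hoc.
rf = RandomForestClassifier(random_state=0).fit(X, y)
print("opaque model prediction:", rf.predict(X[:1])[0])
```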

