Artificial Intelligence in Cardiac Imaging

2020, Vol 13 (2), pp. 110-116
Author(s): Karthik Seetharam, Sirish Shrestha, Partho P Sengupta

Machine learning (ML), a subset of artificial intelligence, is showing promising results in cardiology, especially in cardiac imaging. ML algorithms allow cardiologists to make discoveries not possible with conventional approaches, opening new opportunities to enhance patient care and new gateways in medical decision-making. This review highlights the role of ML in cardiac imaging for precision phenotyping and prognostication of cardiac disorders.

2020, Vol 16 (1)
Author(s): Claudio Lucchiari, Maria Elide Vanutelli, Raffaella Folgieri

Research suggests that doctors are failing to make use of technologies designed to optimize their decision-making skills in daily clinical activities, despite a proliferation of electronic tools with the potential for decreasing risks of medical and diagnostic errors. This paper addresses this issue by exploring the cognitive basis of medical decision making and its psychosocial context in relation to technology. We then discuss how cognitive-led technologies – in particular, decision support systems and artificial neural networks – may be applied in clinical contexts to improve medical decision making without becoming a substitute for the doctor’s judgment. We identify critical issues and make suggestions regarding future developments.


Author(s): Ekaterina Jussupow, Kai Spohrer, Armin Heinzl, Joshua Gawlitza

Systems based on artificial intelligence (AI) increasingly support physicians in diagnostic decisions, but they are not without errors and biases. Failure to detect these may result in wrong diagnoses and medical errors. Compared with rule-based systems, however, AI-based systems are less transparent and their errors less predictable. Thus, it is difficult, yet critical, for physicians to carefully evaluate AI advice. This study uncovers the cognitive challenges that medical decision makers face when they receive potentially incorrect advice from AI-based diagnosis systems and must decide whether to follow or reject it. In experiments with 68 novice and 12 experienced physicians, both novices with and without clinical experience and experienced radiologists made more inaccurate diagnostic decisions when provided with incorrect AI advice than with no advice at all. We elicit five decision-making patterns and show that wrong diagnostic decisions often result from shortcomings in utilizing metacognitions related to decision makers' own reasoning (self-monitoring) and metacognitions related to the AI-based system (system monitoring). As a result, physicians base decisions on beliefs rather than actual data, or engage in unduly superficial evaluation of the AI advice. Our study has implications for the training of physicians and spotlights the crucial role of human actors in compensating for AI errors.


2020, Vol 176, pp. 1703-1712
Author(s): Georgy Lebedev, Eduard Fartushnyi, Igor Fartushnyi, Igor Shaderkin, Herman Klimenko, ...

2019, pp. bmjebm-2019-111247
Author(s): David Slawson, Allen F Shaughnessy

Overdiagnosis and overtreatment (overuse) is gaining wide acceptance as a leading nosocomial intervention in medicine. Not only does overuse create anxiety and diminish patients' quality of life; in some cases it also harms both patients and others not directly involved in their clinical care. Reducing overuse begins with recognizing and accepting the potential for unintended harm from our best intentions. In this paper, we introduce five cases to illustrate where harm can occur as the result of well-intended healthcare interventions. With this insight, clinicians can learn to appreciate the critical role of probability-based, evidence-informed decision-making in medicine and the need to consider the outcomes for all who may be affected by their actions. Likewise, educators need to evolve medical education and medical decision-making so that they focus on the hierarchy of evidence and teach that what 'ought to work', based on traditional pathophysiological, disease-focused reasoning, is subordinate to what 'does work'.


2020, Vol 46 (7), pp. 478-481
Author(s): Joshua James Hatherley

Artificial intelligence (AI) is expected to revolutionise the practice of medicine. Recent advancements in the field of deep learning have demonstrated success in a variety of clinical tasks: detecting diabetic retinopathy from images, predicting hospital readmissions, aiding in the discovery of new drugs, etc. AI's progress in medicine, however, has led to concerns regarding the potential effects of this technology on relationships of trust in clinical practice. In this paper, I will argue that there is merit to these concerns, since AI systems can be relied on, and are capable of reliability, but cannot be trusted, and are not capable of trustworthiness. Insofar as patients are required to rely on AI systems for their medical decision-making, there is potential for this to produce a deficit of trust in relationships in clinical practice.

