Exploring stakeholder attitudes towards AI in clinical practice

2021 ◽  
Vol 28 (1) ◽  
pp. e100450
Author(s):  
Ian A Scott ◽  
Stacy M Carter ◽  
Enrico Coiera

Objectives: Different stakeholders may hold varying attitudes towards artificial intelligence (AI) applications in healthcare, which may constrain their acceptance if AI developers fail to take them into account. We set out to ascertain evidence of the attitudes of clinicians, consumers, managers, researchers, regulators and industry towards AI applications in healthcare.

Methods: We undertook an exploratory analysis of articles whose titles or abstracts contained the terms ‘artificial intelligence’ or ‘AI’, and ‘medical’ or ‘healthcare’, and ‘attitudes’, ‘perceptions’, ‘opinions’, ‘views’ or ‘expectations’. Using a snowballing strategy, we searched PubMed and Google Scholar for articles published 1 January 2010 through 31 May 2021. We selected articles relating to non-robotic, clinician-facing AI applications used to support healthcare-related tasks or decision-making.

Results: Across 27 studies, attitudes towards AI applications in healthcare were, in general, positive, more so among those with direct experience of AI, but provided certain safeguards were met. AI applications which automated data interpretation and synthesis were regarded more favourably by clinicians and consumers than those that directly influenced clinical decisions or potentially impacted clinician–patient relationships. Privacy breaches and personal liability for AI-related error worried clinicians, while loss of clinician oversight and inability to fully share in decision-making worried consumers. Both clinicians and consumers wanted AI-generated advice to be trustworthy, while industry groups emphasised AI benefits and wanted more data, funding and regulatory certainty.

Discussion: Certain expectations of AI applications were common to many stakeholder groups, from which a set of dependencies can be defined.

Conclusion: Stakeholders differ in some, but not all, of their attitudes towards AI. Those developing and implementing applications should consider policies and processes that bridge attitudinal disconnects between different stakeholders.

2020 ◽  
Vol 46 (7) ◽  
pp. 478-481 ◽  
Author(s):  
Joshua James Hatherley

Artificial intelligence (AI) is expected to revolutionise the practice of medicine. Recent advancements in the field of deep learning have demonstrated success in a variety of clinical tasks: detecting diabetic retinopathy from images, predicting hospital readmissions, aiding in the discovery of new drugs, etc. AI’s progress in medicine, however, has led to concerns regarding the potential effects of this technology on relationships of trust in clinical practice. In this paper, I will argue that there is merit to these concerns, since AI systems can be relied on, and are capable of reliability, but cannot be trusted, and are not capable of trustworthiness. Insofar as patients are required to rely on AI systems for their medical decision-making, there is potential for this to produce a deficit of trust in relationships in clinical practice.


1996 ◽  
Vol 1 (3) ◽  
pp. 175-178 ◽  
Author(s):  
Colin Gordon

Expert systems to support medical decision-making have so far achieved few successes. Current technical developments, however, may overcome some of the limitations. Although there are several theoretical currents in medical artificial intelligence, there are signs of them converging. Meanwhile, decision support systems, which set themselves more modest goals than replicating or improving on clinicians' expertise, have come into routine use in places where an adequate electronic patient record exists. They may also be finding a wider role, assisting in the implementation of clinical practice guidelines. There is, however, still much uncertainty about the kinds of decision support that doctors and other health care professionals are likely to want or accept.


2021 ◽  
Author(s):  
David Reifs ◽  
Ramon Reig Bolaño ◽  
Francesc Garcia Cuyas ◽  
Marta Casals Zorita ◽  
Sergi Grau Carrion

BACKGROUND Chronic ulcers, especially those affecting the lower extremities, follow a protracted course and constitute a health problem with significant socio-economic repercussions. Patients' quality of life often deteriorates, leading to serious personal problems for the patient and, in turn, major care challenges for healthcare professionals. Our study proposes a new approach to assisting wound assessment and criticality evaluation with an integrated framework based on a mobile app and a cloud platform, supporting the practitioner and optimising organisational processes. This framework, called Clinicgram, combines decision-making support methods, such as morphological analysis of wounds, artificial intelligence algorithms for feature classification and a system for matching similar cases, in an easily accessible and user-friendly mobile app, and assists the clinician in choosing the best treatment. OBJECTIVE The main objective of this work is to evaluate the impact of incorporating Clinicgram, a mobile app and cloud platform with artificial intelligence algorithms, as a decision support system to help the clinician assess wounds and choose correct treatments. The second objective is to evaluate how professionals can benefit from this technology in real clinical practice, how it impacts patient care and how the organisation's resources can be optimised. METHODS The Clinicgram application and framework is a non-radiological clinical imaging management tool incorporated into clinical practice. The tool also enables the execution of the different algorithms assessed in this study. Using computer vision and supervised learning techniques, different algorithms are implemented to simplify the practitioner's tasks of assessment and anomaly spotting in clinical cases: the region of interest of each case is determined automatically and used to assess wound characteristics, such as area calculation and tissue classification, and to detect different signs of infection. Observational and objective studies were carried out to obtain clear indicators of the level of usability in clinical practice. RESULTS A total of 2,750 wound pictures were taken by 10 nurses for analysis during the study, from January 2018 to November 2021. Objective results were obtained from the use and management of the application, with professionals giving feedback scores of 5.55 out of 7 on the mHealth App Usability Questionnaire. The most common wounds scored between 6 and 16 points of severity on Resvech 2.0; 88% of the images captured wounds of between 0 and 16 cm² in area, 53.21% showed involvement of subcutaneous tissue, 59.16% the presence of granulated tissue, 30.29% necrotic tissue and 61.54% a wet wound bed. Use of the app to upload samples increased from 31 to 110 samples per month between 2018 and 2021. CONCLUSIONS Our real-world assessment demonstrates the effectiveness and reliability of the wound assessment system, increasing professional efficiency, reducing data collection time during the visit and optimising cost-effectiveness in the healthcare organisation by reducing treatment variability, while also improving the comfort of the professional and the patient. Incorporating a tool such as Clinicgram into the chronic wound assessment and monitoring process adds value, reduces errors and shortens clinical process time, while also improving decision-making by the professional and consequently having a positive impact on the patient's wound healing process.
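The area calculation described in the abstract above can be sketched, in broad strokes, as counting segmented wound pixels and converting the count to cm² using a known image scale. This is an illustrative assumption, not Clinicgram's actual pipeline: in practice the binary mask would come from a trained segmentation model, and the `pixels_per_cm` value from a calibration marker in the photograph. The function name and toy mask here are hypothetical.

```python
# Illustrative sketch of pixel-based wound area estimation.
# A binary mask (1 = wound pixel) would normally come from a
# segmentation model; here it is a hand-made toy example.

def wound_area_cm2(mask, pixels_per_cm):
    """Convert a count of wound pixels into an area in cm^2,
    given the image scale in pixels per centimetre."""
    wound_pixels = sum(cell for row in mask for cell in row)
    # Each pixel covers (1 / pixels_per_cm)^2 square centimetres.
    return wound_pixels / (pixels_per_cm ** 2)

# Toy 4x4 mask with 6 wound pixels; assume a scale of 2 pixels per cm.
mask = [
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]
print(wound_area_cm2(mask, pixels_per_cm=2))  # 6 pixels / 4 = 1.5 cm^2
```

Tissue classification would then operate on the same masked region, labelling pixels (e.g. granulated vs necrotic) rather than only counting them.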


2020 ◽  
Author(s):  
Avishek Choudhury

Objective: The potential benefits of artificial intelligence-based decision support systems (AI-DSS) are well documented from a theoretical perspective and perceived by researchers, but there is a lack of evidence showing their influence on routine clinical practice and how they are perceived by care providers, since the effectiveness of AI systems depends on data quality, implementation and interpretation. The purpose of this literature review is to analyze the effectiveness of AI-DSS in clinical settings and understand their influence on clinicians' decision-making outcomes. Materials and Methods: This review protocol follows the Preferred Reporting Items for Systematic Reviews and Meta-Analyses reporting guidelines. Literature was identified using a multi-database search strategy developed in consultation with a librarian. The screening process consisted of a title and abstract scan, followed by a full-text review by two reviewers to determine the eligibility of articles. Studies outlining the application of AI-based decision support systems in a clinical setting and their impact on clinicians' decision-making were included. A tabular synthesis of the general study details is provided, as well as a narrative synthesis of the extracted data, organised into themes. Studies solely reporting AI accuracy, without implementation in a clinical setting to measure influence on clinical decision-making, were excluded from further review. Results: We identified 8 eligible studies that implemented AI-DSS in a clinical setting to facilitate decisions concerning prostate cancer, post-traumatic stress disorder, cardiac ailments, back pain and others. Five (62.50%) of the 8 studies reported positive outcomes of AI-DSS. Conclusion: The systematic review indicated that AI-enabled decision support systems, when implemented in a clinical setting and used by clinicians, might not ensure enhanced decision-making. Moreover, there are very few studies confirming the claim that AI-based decision support systems can improve clinicians' decision-making abilities.


Author(s):  
Anjali Mullick ◽  
Jonathan Martin

Advance care planning (ACP) is a process of formal decision-making that aims to help patients establish decisions about future care that take effect when they lose capacity. In our experience, guidance for clinicians rarely provides detailed practical advice on how it can be successfully carried out in a clinical setting. This may create a barrier to ACP discussions which might otherwise benefit patients, families and professionals. The focus of this paper is on sharing our experience of ACP as clinicians and offering practical tips on elements of ACP, such as triggers for conversations, communication skills, and highlighting the formal aspects that are potentially involved. We use case vignettes to better illustrate the application of ACP in clinical practice.

