Factors influencing trust in medical artificial intelligence for healthcare professionals: a narrative review

2021
Author(s): Victoria Tucci ◽ Joan Saary ◽ Thomas E. Doyle

Author(s): Alexandra D. Kaplan ◽ Theresa T. Kessler ◽ J. Christopher Brill ◽ P. A. Hancock

Objective: The present meta-analysis sought to determine the significant factors that predict trust in artificial intelligence (AI). These factors were divided into those relating to (a) the human trustor, (b) the AI trustee, and (c) the shared context of their interaction.
Background: Many factors influence trust in robots, automation, and technology in general, and several meta-analytic attempts have been made to understand the antecedents of trust in these areas. However, no targeted meta-analysis has examined the antecedents of trust in AI.
Method: Data from 65 articles were analyzed across the three predicted categories, as well as the subcategories of human characteristics and abilities, AI performance and attributes, and contextual tasking. Four common uses of AI (i.e., chatbots, robots, automated vehicles, and nonembodied, plain algorithms) were also examined as potential moderating factors.
Results: All of the examined categories were significant predictors of trust in AI, as were many individual antecedents, such as AI reliability and anthropomorphism, among others.
Conclusion: Overall, this meta-analysis identified several factors that influence trust, including some that have no bearing on AI performance. We also highlight areas where empirical research is currently lacking.
Application: Findings from this analysis will allow designers to build systems that elicit higher or lower levels of trust, as required.
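As an illustration of the kind of aggregation such a meta-analysis performs, the following is a minimal sketch of DerSimonian-Laird random-effects pooling of per-study correlations via Fisher's z transform. The correlations, sample sizes, and the pool_correlations helper are hypothetical and are not the article's data or code.

```python
# Minimal sketch of DerSimonian-Laird random-effects pooling, the kind of
# aggregation a trust-in-AI meta-analysis relies on. All inputs below are
# hypothetical and illustrative only.
import math

def pool_correlations(rs, ns):
    """Pool Pearson correlations across studies via Fisher's z."""
    zs = [0.5 * math.log((1 + r) / (1 - r)) for r in rs]  # Fisher z transform
    vs = [1.0 / (n - 3) for n in ns]                      # within-study variance
    ws = [1.0 / v for v in vs]                            # fixed-effect weights

    # Fixed-effect estimate and Q statistic for heterogeneity
    z_fixed = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    q = sum(w * (z - z_fixed) ** 2 for w, z in zip(ws, zs))
    df = len(rs) - 1
    c = sum(ws) - sum(w ** 2 for w in ws) / sum(ws)
    tau2 = max(0.0, (q - df) / c)                         # between-study variance

    # Random-effects weights, pooled estimate, and 95% CI (back-transformed)
    ws_re = [1.0 / (v + tau2) for v in vs]
    z_re = sum(w * z for w, z in zip(ws_re, zs)) / sum(ws_re)
    se = math.sqrt(1.0 / sum(ws_re))
    r_pooled = math.tanh(z_re)
    ci = (math.tanh(z_re - 1.96 * se), math.tanh(z_re + 1.96 * se))
    return r_pooled, ci, tau2

# Hypothetical per-study correlations between an antecedent (e.g., AI
# reliability) and trust, with their sample sizes
r_pooled, ci, tau2 = pool_correlations([0.42, 0.31, 0.55, 0.27], [120, 85, 200, 60])
print(f"pooled r = {r_pooled:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f}), tau^2 = {tau2:.3f}")
```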


2021 ◽ pp. 084653712110495
Author(s): Tong Wu ◽ Wyanne Law ◽ Nayaar Islam ◽ Charlotte J. Yong-Hing ◽ Supriya Kulkarni ◽ ...

Purpose: To gauge the level of interest in breast imaging (BI) and determine the factors affecting trainees' decisions to pursue this subspecialty.
Methods: Canadian radiology residents and medical students were surveyed from November 2020 to February 2021. Training level, actual vs preferred timing of breast rotations, fellowship choices, perceptions of BI, and views on how artificial intelligence (AI) will impact BI were collected. Chi-square tests, Fisher's exact tests, and univariate logistic regression were performed to determine the impact of trainees' perceptions on interest in pursuing BI/women's imaging (WI) fellowships.
Results: In total, 157 responses were collected from 80 radiology residents and 77 medical students. The top three fellowship subspecialties desired by residents were BI/WI (36%), abdominal imaging (35%), and interventional radiology (25%). Twenty-five percent of the medical students were unsure because of a lack of exposure. The most common reason trainees found BI unappealing was repetitiveness (20%), which was associated with a lack of interest in BI/WI fellowships (OR = 3.9, 95% CI: 1.6-9.5, P = .002). The most common reason residents found BI appealing was procedures (59%), which was associated with interest in BI/WI fellowships (OR = 3.2, 95% CI: 1.2-8.6, P = .02). Forty percent of residents reported that an earlier start of their first breast rotation (PGY1-2) would affect their fellowship choice.
Conclusion: This study assessed the current level of Canadian trainees' interest in BI and identified factors that influenced their decisions to pursue it. Solutions for increasing interest include earlier exposure to breast radiology and addressing inadequacies in residency training.
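For readers unfamiliar with the statistics reported above, the following is a minimal sketch of how a univariate odds ratio and its 95% Wald confidence interval are computed from a 2x2 table. The counts and the odds_ratio helper are hypothetical, chosen only to roughly reproduce the reported OR = 3.9; they are not the survey data.

```python
# Minimal sketch of the univariate odds-ratio calculation behind results
# like "OR = 3.9, 95% CI: 1.6-9.5". All counts below are invented.
import math

def odds_ratio(a, b, c, d):
    """OR and 95% Wald CI for a 2x2 table:
    a = exposed with outcome,    b = exposed without outcome
    c = unexposed with outcome,  d = unexposed without outcome
    """
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se_log)
    hi = math.exp(math.log(or_) + 1.96 * se_log)
    return or_, lo, hi

# Hypothetical counts: finds BI repetitive (yes/no) vs lack of interest in BI/WI
or_, lo, hi = odds_ratio(24, 8, 45, 58)
print(f"OR = {or_:.1f}, 95% CI: {lo:.1f}-{hi:.1f}")
```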


2019 ◽ Vol 33 (1) ◽ pp. 19-24
Author(s): Gurprit K. Randhawa ◽ Mary Jackson

This article discusses the emerging role of artificial intelligence (AI) in the learning and professional development of healthcare professionals. It provides a brief history of AI and its current and past applications in healthcare education and training, and it examines why and how health leaders can use AI to transform education practices in healthcare. It also considers the potential implications of AI for human educators, such as clinical educators, and offers recommendations for health leaders to support the application of AI in the learning and professional development of healthcare professionals.


2019 ◽ Vol 115 ◽ pp. 103488
Author(s): M. Schinkel ◽ K. Paranjape ◽ R.S. Nannan Panday ◽ N. Skyttberg ◽ P.W.B. Nanayakkara

2018 ◽ Vol 9 (6) ◽ pp. 92-98
Author(s): Haleh Ayatollahi ◽ Nader Mirani ◽ Fatemeh Nazari ◽ Narjes Razavi

2020 ◽ Vol 27 (3) ◽ pp. e100175
Author(s): Daniel D’Hotman ◽ Erwin Loh

Background: Suicide poses a significant health burden worldwide. In many cases, people at risk of suicide do not engage with their doctor or community because of concerns about stigmatisation and forced medical treatment; worse still, people with mental illness (who account for the majority of people who die by suicide) may have poor insight into their mental state and may not self-identify as being at risk. These issues are exacerbated by the fact that doctors have difficulty identifying those at risk of suicide when they do present to medical services. Advances in artificial intelligence (AI) present opportunities for the development of novel tools for predicting suicide.
Method: We searched Google Scholar and PubMed for articles relating to suicide prediction using artificial intelligence from 2017 onwards.
Conclusions: This paper presents a qualitative narrative review of research on two categories of suicide prediction tools: medical suicide prediction and social suicide prediction. Initial evidence is promising: AI-driven suicide prediction could improve our capacity to identify those at risk of suicide and, potentially, save lives. Medical suicide prediction may be relatively uncontroversial when it respects ethical and legal principles; however, further research is required to determine the validity of these tools in different contexts. Social suicide prediction offers an exciting opportunity to help identify suicide risk among those who do not engage with traditional health services. Yet efforts by private companies such as Facebook to use online data for suicide prediction should be subject to independent review and oversight to confirm safety, effectiveness, and ethical permissibility.

