Clinical, Legal, and Ethical Aspects of Artificial Intelligence–Assisted Conversational Agents in Health Care

JAMA ◽  
2020 ◽  
Vol 324 (6) ◽  
pp. 552 ◽  
Author(s):  
John D. McGreevey ◽  
C. William Hanson ◽  
Ross Koppel




10.2196/20346 ◽  
2020 ◽  
Vol 22 (10) ◽  
pp. e20346
Author(s):  
Madison Milne-Ives ◽  
Caroline de Cock ◽  
Ernest Lim ◽  
Melissa Harper Shehadeh ◽  
Nick de Pennington ◽  
...  

Background The high demand for health care services and the growing capability of artificial intelligence have led to the development of conversational agents designed to support a variety of health-related activities, including behavior change, treatment support, health monitoring, training, triage, and screening support. Automation of these tasks could free clinicians to focus on more complex work and increase the accessibility of health care services for the public. An overarching assessment of the acceptability, usability, and effectiveness of these agents in health care is needed to collate the evidence so that future development can target areas for improvement and potential for sustainable adoption. Objective This systematic review aims to assess the effectiveness and usability of conversational agents in health care and to identify the elements that users like and dislike, in order to inform future research and development of these agents. Methods PubMed, Medline (Ovid), EMBASE (Excerpta Medica dataBASE), CINAHL (Cumulative Index to Nursing and Allied Health Literature), Web of Science, and the Association for Computing Machinery Digital Library were systematically searched for articles published since 2008 that evaluated unconstrained natural language processing conversational agents used in health care. EndNote (version X9, Clarivate Analytics) reference management software was used for initial screening, and full-text screening was conducted by 1 reviewer. Data were extracted, and the risk of bias was assessed by one reviewer and validated by another. Results A total of 31 studies were selected and included a variety of conversational agents: 14 chatbots (2 of which were voice chatbots); 6 embodied conversational agents; 3 interactive voice response call, virtual patient, and speech recognition screening systems; 1 contextual question-answering agent; and 1 voice recognition triage system. Overall, the evidence reported was mostly positive or mixed. Usability and satisfaction were rated well (27/30 and 26/31 studies, respectively), and positive or mixed effectiveness was found in about three-quarters of the studies (23/30). However, several limitations of the agents were highlighted in specific qualitative feedback. Conclusions The studies generally reported positive or mixed evidence for the effectiveness, usability, and satisfaction of the conversational agents investigated, but qualitative user perceptions were more mixed. The quality of many of the studies was limited, and improved study design and reporting are necessary to more accurately evaluate the usefulness of the agents in health care and to identify key areas for improvement. Further research should also analyze the cost-effectiveness, privacy, and security of the agents. International Registered Report Identifier (IRRID) RR2-10.2196/16934


10.2196/16222 ◽  
2019 ◽  
Vol 21 (10) ◽  
pp. e16222 ◽  
Author(s):  
John Powell

Over the next decade, one issue which will dominate sociotechnical studies in health informatics is the extent to which the promise of artificial intelligence in health care will be realized, along with the social and ethical issues which accompany it. A useful thought experiment is the application of the Turing test to user-facing artificial intelligence systems in health care (such as chatbots or conversational agents). In this paper I argue that many medical decisions require value judgements and the doctor-patient relationship requires empathy and understanding to arrive at a shared decision, often handling large areas of uncertainty and balancing competing risks. Arguably, medicine requires wisdom more than intelligence, artificial or otherwise. Artificial intelligence therefore needs to supplement rather than replace medical professionals, and identifying the complementary positioning of artificial intelligence in medical consultation is a key challenge for the future. In health care, artificial intelligence needs to pass the implementation game, not the imitation game.


2020 ◽  
Vol 6 ◽  
pp. 205520762096617
Author(s):  
Robert Meadows ◽  
Christine Hine ◽  
Eleanor Suddaby

Background Artificial intelligence (AI) is said to be “transforming mental health”. AI-based technologies and techniques are now considered to have uses in almost every domain of mental health care, including decision making, assessment, and health care management. What remains underexplored is whether and how mental health recovery is situated within these discussions and practices. Method Taking conversational agents as our point of departure, we explore the ways official online materials explain and make sense of chatbots, their imagined functionality, and their value for (potential) users. We focus on three chatbots for mental health: Woebot, Wysa, and Tess. Findings “Recovery” is largely missing as an overt focus across the materials. However, the analysis does reveal themes that speak to the struggles over practice, expertise, and evidence that the concept of recovery articulates. We discuss these under the headings “troubled clinical responsibility”, “extended virtue of (technological) self-care”, and “altered ontologies and psychopathologies of time”. Conclusions Ultimately, we argue that alongside more traditional forms of recovery, chatbots may be shaped by, and shaping, an increasingly individualised form of a “personal recovery imperative”.


2020 ◽  
Vol 2 ◽  
pp. 58-61 ◽  
Author(s):  
Syed Junaid ◽  
Asad Saeed ◽  
Zeili Yang ◽  
Thomas Micic ◽  
Rajesh Botchu

The advances in deep learning algorithms, exponential growth in computing power, and unprecedented availability of digital patient data have led to a wave of interest and investment in artificial intelligence in health care. No radiology conference is complete without substantial attention dedicated to AI. Many radiology departments are keen to get involved but are unsure of where and how to begin. This short article provides a simple road map to help departments engage with the technology, demystify key concepts, and pique interest in the field. We have broken the journey down into seven steps: problem, team, data, kit, neural network, validation, and governance.
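For departments that want to keep the order of those seven steps explicit in their own project tracking, one way is a small checklist that refuses to skip ahead. This is a purely illustrative sketch: the step names follow the article, but the class and its behavior are assumptions of ours, not part of the original road map.

```python
# A minimal checklist for the seven-step road map: problem, team, data,
# kit, neural network, validation, governance. Each step must be
# completed in order, since every step builds on the previous one.
from dataclasses import dataclass, field

ROAD_MAP = ["problem", "team", "data", "kit",
            "neural network", "validation", "governance"]

@dataclass
class AIProject:
    completed: list = field(default_factory=list)

    def complete(self, step: str) -> None:
        # Enforce the road-map order: reject any out-of-sequence step.
        expected = ROAD_MAP[len(self.completed)]
        if step != expected:
            raise ValueError(f"expected {expected!r}, got {step!r}")
        self.completed.append(step)

project = AIProject()
for step in ROAD_MAP:
    project.complete(step)
print(project.completed == ROAD_MAP)  # → True
```

Trying to jump straight to "data" on a fresh project, for example, raises a ValueError, which mirrors the article's point that the problem and the team come before the technology.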


2019 ◽  
Author(s):  
Jose Hamilton Vargas ◽  
Thiago Antonio Marafon ◽  
Diego Fernando Couto ◽  
Ricardo Giglio ◽  
Marvin Yan ◽  
...  

BACKGROUND Mental health conditions, including depression and anxiety disorders, are significant global concerns. Many people with these conditions do not get the help they need because of the high cost of treatment and the stigma attached to seeking it. Digital technologies represent a viable solution to these challenges, but they are often characterized by relatively low adherence, and their effectiveness remains largely empirically unverified. Conversational agents with artificial intelligence capabilities have the potential to offer a cost-effective, low-stigma, and engaging way of receiving mental health care. OBJECTIVE The objective of this study was to evaluate the feasibility, acceptability, and effectiveness of Youper, a mobile application that uses a conversational interface and artificial intelligence capabilities to deliver cognitive behavioral therapy-based interventions intended to reduce symptoms of depression and anxiety in adults. METHODS A total of 1,012 adults with symptoms of depression and anxiety participated in a real-world study over an 8-week period, entirely remotely, unguided, and with no financial incentives. Participants completed digital versions of the 9-item Patient Health Questionnaire (PHQ-9) and the 7-item Generalized Anxiety Disorder scale (GAD-7) at baseline and at 2, 4, and 8 weeks. RESULTS Participants were on average 24.79 years old (SD 7.61), and 77% were female. On average, participants interacted with Youper 0.9 (SD 1.56) times per week. After the 8-week study period, participants' depression (PHQ-9) scores decreased by 48% and anxiety (GAD-7) scores by 43%. The reliable change index (RCI) was outside 2 standard deviations for 93.0% of individuals on the PHQ-9 and for 90.7% on the GAD-7.
CONCLUSIONS Results suggest that Youper is a feasible, acceptable, and effective intervention for adults with depression and anxiety. CLINICALTRIAL Because this study involved a nonclinical population, it was not registered in a public trials registry.
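The reliable change statistic reported above is commonly computed with the Jacobson-Truax formula: RCI = (post - pre) / S_diff, where S_diff = sqrt(2) * SE and SE = SD * sqrt(1 - r), with SD the baseline standard deviation and r the scale's test-retest reliability. A minimal sketch, using purely illustrative values (the study's own standard deviations and reliability coefficients are not reported in the abstract):

```python
# Jacobson-Truax reliable change index (RCI) for a pre/post score pair.
import math

def reliable_change_index(pre, post, sd_baseline, reliability):
    """RCI = (post - pre) / S_diff, where S_diff = sqrt(2) * SE
    and SE = SD * sqrt(1 - r). |RCI| > 1.96 is conventionally
    interpreted as change beyond measurement error."""
    se = sd_baseline * math.sqrt(1 - reliability)
    s_diff = math.sqrt(2) * se
    return (post - pre) / s_diff

# Illustrative (not study) values: PHQ-9 falls from 15 to 7,
# assumed baseline SD 5.0 and test-retest reliability 0.84.
rci = reliable_change_index(15, 7, 5.0, 0.84)
print(round(rci, 2), abs(rci) > 1.96)  # → -2.83 True
```

A negative RCI here reflects symptom reduction; the magnitude exceeding 1.96 is what the abstract's "outside 2 standard deviations" criterion refers to.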


2021 ◽  
Vol 11 (1) ◽  
pp. 32
Author(s):  
Oliwia Koteluk ◽  
Adrian Wartecki ◽  
Sylwia Mazurek ◽  
Iga Kołodziejczak ◽  
Andrzej Mackiewicz

With the increasing amount of medical data generated every day, there is a strong need for reliable, automated evaluation tools. With high hopes and expectations, machine learning has the potential to revolutionize many fields of medicine, helping to make faster and more accurate decisions and improving current standards of treatment. Today, machines can analyze, learn from, communicate, and understand processed data, and they are used increasingly in health care. This review explains different models and the general process of machine learning and of training the algorithms. Furthermore, it summarizes the most useful machine learning applications and tools in different branches of medicine and health care (radiology, pathology, pharmacology, infectious diseases, personalized decision making, and many others). The review also addresses the future prospects and threats of applying artificial intelligence as an advanced, automated tool in medicine.

