Artificial Intelligence in Behavioral and Mental Health Care

2016 ◽  
2020 ◽  
Vol 26 (1) ◽  
pp. 90-104
Author(s):  
Lawrence Quill ◽  

Recent advances in Artificial Intelligence (A.I.) and their application within the field of mental health provision raise issues that cross social, economic, and philosophical boundaries. While Therapeutic A.I. promises to disrupt the current provision of mental health services and reach populations without access to adequate mental health care, there are risks. This paper addresses the philosophical problems posed by Therapeutic A.I. I suggest that in the absence of legal guidelines there is a need for philosophical guidance that prioritizes the dignity of clients/consumers. To that end, I advance Rosen’s (2012) concept of dignity-as-respectfulness as the most appropriate philosophical principle to guide the application of Therapeutic A.I.


2018 ◽  
Author(s):  
Amelia Fiske ◽  
Peter Henningsen ◽  
Alena Buyx

BACKGROUND Research in embodied artificial intelligence (AI) has increasing clinical relevance for therapeutic applications in mental health services. With innovations ranging from ‘virtual psychotherapists’ to social robots in dementia care and autism spectrum disorder, to robots for sexual disorders, artificially intelligent virtual and robotic agents are increasingly taking on high-level therapeutic interventions that used to be offered exclusively by highly trained, skilled health professionals. To enable responsible clinical implementation, the ethical and social implications of the increasing use of embodied AI in mental health need to be identified and addressed. OBJECTIVE This paper assesses the ethical and social implications of translating embodied AI applications into mental health care across the fields of Psychiatry, Psychology, and Psychotherapy. Building on this analysis, it develops a set of preliminary recommendations on how to address ethical and social challenges in current and future applications of embodied AI. METHODS Based on a thematic literature search and established principles of medical ethics, an analysis of the ethical and social aspects of current embodied AI applications was conducted across the fields of Psychiatry, Psychology, and Psychotherapy. To enable a comprehensive evaluation, the analysis was structured around three steps: assessment of potential benefits; analysis of overarching ethical issues and concerns; and discussion of the specific ethical and social issues of the interventions. RESULTS From an ethical perspective, important benefits of embodied AI applications in mental health include new modes of treatment, opportunities to engage hard-to-reach populations, better patient response, and freeing up time for physicians. Overarching ethical issues and concerns include: harm prevention and various questions of data ethics; a lack of guidance on the development of AI applications, their clinical integration, and the training of health professionals; ‘gaps’ in ethical and regulatory frameworks; and the potential for misuse, including using the technologies to replace established services and thereby potentially exacerbating existing health inequalities. Specific challenges identified and discussed in the application of embodied AI include: matters of risk assessment, referrals, and supervision; the need to respect and protect patient autonomy; the role of non-human therapy; transparency in the use of algorithms; and specific concerns regarding the long-term effects of these applications on understandings of illness and the human condition. CONCLUSIONS We argue that embodied AI is a promising approach across the field of mental health; however, further research is needed to address the broader ethical and societal concerns of these technologies and to negotiate best research and medical practices in innovative mental health care. We conclude by indicating areas of future research and developing recommendations for high-priority areas in need of concrete ethical guidance.


2020 ◽  
Vol 6 ◽  
pp. 205520762096617
Author(s):  
Robert Meadows ◽  
Christine Hine ◽  
Eleanor Suddaby

Background Artificial intelligence (AI) is said to be “transforming mental health”. AI-based technologies and techniques are now considered to have uses in almost every domain of mental health care, including decision-making, assessment, and healthcare management. What remains underexplored is whether and how mental health recovery is situated within these discussions and practices. Method Taking conversational agents as our point of departure, we explore the ways official online materials explain and make sense of chatbots, their imagined functionality, and their value for (potential) users. We focus on three chatbots for mental health: Woebot, Wysa, and Tess. Findings “Recovery” is largely missing as an overt focus across the materials. However, the analysis does reveal themes that speak to the struggles over practice, expertise, and evidence that the concept of recovery articulates. We discuss these under the headings “troubled clinical responsibility”, “extended virtue of (technological) self-care”, and “altered ontologies and psychopathologies of time”. Conclusions Ultimately, we argue that alongside more traditional forms of recovery, chatbots may be shaped by, and shaping, an increasingly individualised form of a “personal recovery imperative”.


1996 ◽  
Vol 24 (3) ◽  
pp. 274-275
Author(s):  
Lawrence O. Gostin

In the summer of 1979, a group of experts on law, medicine, and ethics assembled in Siracusa, Sicily, under the auspices of the International Commission of Jurists and the International Institute of Higher Studies in Criminal Science, to draft guidelines on the rights of persons with mental illness. Sitting across the table from me was a quiet, proud man of distinctive intelligence, William J. Curran, Frances Glessner Lee Professor of Legal Medicine at Harvard University. Professor Curran was one of the principal drafters of those guidelines. Many years later in 1991, after several subsequent re-drafts by United Nations (U.N.) Rapporteur Erica-Irene Daes, the text was adopted by the U.N. General Assembly as the Principles for the Protection of Persons with Mental Illness and for the Improvement of Mental Health Care. This was the kind of remarkable achievement in the field of law and medicine that Professor Curran repeated throughout his distinguished career.


2020 ◽  
Author(s):  
Nosheen Akhtar ◽  
Cheryl Forchuk ◽  
Katherine McKay ◽  
Sandra Fisman ◽  
Abraham Rudnick
