A Scoping Review of Algorithmic and Data-Driven Technology in Online Mental Healthcare: What is Underway and What Place for Ethics and Law? (Preprint)

2020 ◽  
Author(s):  
Piers Gooding ◽  
Timothy Kariotis

BACKGROUND Uncertainty surrounds the ethical and legal implications of algorithmic and data-driven technologies in the mental health context, including technologies variously characterised as artificial intelligence, machine learning and deep learning. OBJECTIVE We aimed to survey scholarly literature on algorithmic and data-driven technologies used in online mental health interventions with a view to identifying the legal and ethical issues raised. METHODS We searched for peer-reviewed literature about algorithmic decision systems used in online mental healthcare platforms. Scopus, Embase and ACM were searched. 1078 relevant peer-reviewed research studies were identified, which were narrowed to 132 empirical research papers for review based on selection criteria. We thematically analysed the papers to address our aims. RESULTS We grouped the findings into five categories of technology: social media (n=53), smartphones (n=37), sensing technology (n=20), chatbots (n=5), and other/miscellaneous (n=17). Most initiatives were directed toward “detection and diagnosis”. Most papers discussed privacy, principally in terms of respecting research participants’ privacy, with relatively little discussion of privacy in context. A small number of studies discussed ethics as an explicit category of concern (n=19). Legal issues were not substantively discussed in any studies, though seven studies noted some legal issues in passing, such as the rights of user-subjects and compliance with relevant privacy and data protection law. CONCLUSIONS Ethical issues tend not to be explicitly addressed in the broad scholarship on algorithmic and data-driven technologies in online mental health initiatives, and legal issues even less so. Scholars may have considered ethical or legal matters at the ethics committee/institutional review board stage of their empirical research, but this consideration seldom appears in published material in any detail. We identify several concerns, including the near-complete lack of involvement of service users, the scant consideration of ‘algorithmic accountability’, and the potential for over-medicalisation and techno-solutionism. Most papers were published in the computer science field at a pilot or exploratory stage. Thus, these technologies could be appropriated into practice in rarely acknowledged ways, with serious legal and ethical implications.

10.2196/24668 ◽  
2021 ◽  
Vol 8 (6) ◽  
pp. e24668
Author(s):  
Piers Gooding ◽  
Timothy Kariotis

Background Uncertainty surrounds the ethical and legal implications of algorithmic and data-driven technologies in the mental health context, including technologies characterized as artificial intelligence, machine learning, deep learning, and other forms of automation. Objective This study aims to survey empirical scholarly literature on the application of algorithmic and data-driven technologies in mental health initiatives to identify the legal and ethical issues that have been raised. Methods We searched for peer-reviewed empirical studies on the application of algorithmic technologies in mental health care in the Scopus, Embase, and Association for Computing Machinery databases. A total of 1078 relevant peer-reviewed applied studies were identified, which were narrowed to 132 empirical research papers for review based on selection criteria. Conventional content analysis was undertaken to address our aims, and this was supplemented by a keyword-in-context analysis. Results We grouped the findings into the following five categories of technology: social media (53/132, 40.1%), smartphones (37/132, 28%), sensing technology (20/132, 15.1%), chatbots (5/132, 3.8%), and miscellaneous (17/132, 12.9%). Most initiatives were directed toward detection and diagnosis. Most papers discussed privacy, mainly in terms of respecting the privacy of research participants, with relatively little discussion of privacy in any broader sense. A small number of studies discussed ethics directly (10/132, 7.6%) and indirectly (10/132, 7.6%). Legal issues were not substantively discussed in any studies, although some legal issues were discussed in passing (7/132, 5.3%), such as the rights of user subjects and privacy law compliance. Conclusions Ethical and legal issues tend not to be explicitly addressed in empirical studies on algorithmic and data-driven technologies in mental health initiatives. Scholars may have considered ethical or legal matters at the ethics committee or institutional review board stage. If so, this consideration seldom appears in published materials in applied research in any detail. The very form of peer-reviewed papers reporting applied research in this field may well preclude a substantial focus on ethics and law. Regardless, we identified several concerns, including the near-complete lack of involvement of mental health service users, the scant consideration of algorithmic accountability, and the potential for overmedicalization and techno-solutionism. Most papers were published in the computer science field at the pilot or exploratory stages. Thus, these technologies could be appropriated into practice in rarely acknowledged ways, with serious legal and ethical implications.
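The keyword-in-context (KWIC) analysis mentioned in the Methods is, in essence, a concordance technique: each occurrence of a term of interest is listed alongside a fixed window of surrounding words so its usage can be read in context. As a minimal illustrative sketch only, assuming simple regex tokenization and illustrative keywords rather than the authors' actual coding scheme, it might look like this in Python:

```python
import re

def keyword_in_context(text, keyword, window=5):
    """Return each occurrence of `keyword` with `window` words of
    context on either side (a simple concordance/KWIC listing)."""
    tokens = re.findall(r"\w+", text.lower())
    hits = []
    for i, token in enumerate(tokens):
        if token == keyword:
            left = " ".join(tokens[max(0, i - window):i])
            right = " ".join(tokens[i + 1:i + 1 + window])
            hits.append(f"{left} [{token}] {right}")
    return hits

# Illustrative usage on abstract-like text; 'privacy' and 'consent'
# are assumed keywords, not the review's actual coding scheme.
sample = ("Participants gave informed consent and the study protocol "
          "addressed privacy by de-identifying all social media posts.")
for keyword in ("privacy", "consent"):
    for line in keyword_in_context(sample, keyword, window=4):
        print(line)
```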


2020 ◽  
Vol 51 (9) ◽  
pp. 683-701
Author(s):  
Diana Cagliero

This article explores ethical issues raised by primary care physicians (PCPs) when diagnosing depression and caring for cross-cultural patients. The study was conducted in three primary care clinics within a major metropolitan area in the Southeastern United States. The PCPs were from a variety of ethnocultural backgrounds, including South Asian, Hispanic, East Asian, and Caucasian. While medical training and guidelines aim to teach physicians about the nuances of cross-cultural patient interaction, PCPs report that past experience guides them in navigating cross-cultural conversations and patient care. Semi-structured interviews were conducted with seven PCPs; the interviews were transcribed and underwent thematic analysis to explore how patients’ cultural backgrounds and understandings of depression affected PCPs’ reasoning when diagnosing depression in patients from different cultural backgrounds. Ethical issues that arose included limiting treatment options, framing a patient’s mental health diagnosis in biomedical terms to reduce stigma, and the somatization of mental health symptoms. Ethical implications, such as lack of autonomy, unnecessary testing, and the possible misuse of healthcare resources, are discussed.


2017 ◽  
Vol 33 (6) ◽  
Author(s):  
Rosana Teresa Onocko-Campos ◽  
Alberto Rodolfo Giovanello Díaz ◽  
Catarina Magalhães Dahl ◽  
Erotildes Maria Leal ◽  
Octavio Domont de Serpa Junior

Abstract: This study addresses the practical, methodological, and ethical challenges encountered in three studies that used focus groups with people with severe mental illness, in the context of community mental health services in Brazil. Focus groups are a powerful tool in health research, but their use with people with severe mental illness in community mental health facilities needs to be better discussed. This study is based on the authors’ experience of conducting and analyzing focus groups in three different cities - Campinas, Rio de Janeiro, and Salvador - between 2006 and 2010. The implementation of focus groups with people with severe mental illness is discussed under the following categories: planning, group design, sampling, recruitment, group interview guides, and conduct of the groups. The importance of engaging mental healthcare providers as part of the research context is emphasized. Ethical issues and challenges are highlighted, as well as the establishment of a sensitive and empathic group atmosphere wherein mutual respect can facilitate interpersonal relations and enable people diagnosed with severe mental illness to make sense of the experience. We emphasize the relevance of interaction between the clinical and research teams in order to create collaborative work, achieve inquiry aims, and elicit the narratives of mental health service users and professionals.


2021 ◽  
pp. 025371762110310
Author(s):  
Bhavika Vajawat ◽  
Prakyath R. Hegde ◽  
Barikar C. Malathesh ◽  
Channaveerachari Naveen Kumar ◽  
Palanimuthu T. Sivakumar ◽  
...  

With aging comes an increased risk of debilitating illnesses that often have no curative treatment. The mainstay of treatment in many such conditions is palliative care: a holistic approach focused on preventing and relieving physical, psychosocial, legal, ethical, and spiritual problems. It involves the facilitation of end-of-life care decisions aimed at relieving distress and improving quality of life. In this article, the authors discuss the role of mental health professionals in legal issues related to palliative care in the elderly, concerning decision-making, the right to autonomy, euthanasia, and advance directives. Cognitive decline associated with aging, mental health issues that arise in the palliative care setting, such as dementia, depression, and hopelessness, and the impact on family members, such as burnout, may all influence an individual’s capacity to make decisions about their treatment. While an individual has a right to self-determination and autonomy, withholding or withdrawing treatment has many legal and ethical implications, more so for those who lack capacity, and especially in India owing to the absence of uniform legislation. The decision to withhold or withdraw treatment may also be a constrained choice in settings with limited palliative care services, poor psychosocial support, unaddressed mental health issues, and a lack of awareness. As the right to health is a constitutional right, and the right to mental health is legally binding under Section 18 of the Mental Healthcare Act, 2017, systematic efforts should be made to scale up services and reach out to those in need.


2021 ◽  
Vol 66 (Special Issue) ◽  
pp. 121-121
Author(s):  
David M. Lyreskog ◽  
Gabriela Pavarini ◽  
Edward Jacobs ◽  
Vanessa Bennett ◽  
...  

"Across the globe the phenomenon of digital phenotyping – the collection and analysis of digital data for mental health – is growing increasingly popular within the education sector. Schools enter collaborations with health care providers, often with the aim to support young people and to reduce the risk for severe mental health challenges, self-harm, and suicide. In developing technologies for these purposes, algorithms and artificial intelligence (broadly construed) could be utilized to provide as rich and accurate data as possible. The data can then be used to flag up at-risk individuals within the system. Despite the increasing interest in digital mental health tools in many educational systems, there has been remarkably little written about the ethical issues that accompany the emergence of digital phenotyping. Arguably more alarming is that almost no research has been conducted on the acceptability and ethics of these technologies in stakeholder populations: we have not asked young people about their values in this context. In this paper, we present results from a large quantitative study from the UK, showing what young people value and choose in scenarios involving digital phenotyping in schools. We highlight clear discrepancies between what young people value – and how they conceptualize those values – and how the literature describes the ethical implications of related technologies in schools. We argue that policymakers and ethicists urgently need to learn to recognize and respect the moral boundaries of young people. "


2017 ◽  
Vol 5 (3) ◽  
pp. 10
Author(s):  
Mohammed Hamdan Alshammari ◽  
Rizal Angelo Natoza Grande ◽  
Ghedeir M. Alshammari

Psychiatric commitment has been a central subject in mental health care, surrounded by ethical and legal issues that chiefly concern the individual’s autonomy and legal rights. This review aimed to explore the outcomes of psychiatric commitment on the lives of the individuals subjected to this intervention, notwithstanding these legal and ethical issues. The reported outcomes of involuntary commitment leaned more toward its risks to individuals, while suggesting benefits to the health system and society. More qualitative and quantitative studies focusing on the benefits of psychiatric commitment are therefore needed.


2021 ◽  
Vol 27 (2) ◽  
Author(s):  
Olga Chivilgina ◽  
Bernice S. Elger ◽  
Fabrice Jotterand

Abstract While the implementation of digital technology in psychiatry appears promising, there is an urgent need to address the implications of the absence of ethical design in the early development of such technologies. Some authors have noted the gap between technology development and ethical analysis and have called for an upstream examination of the ethical issues raised by digital technologies. In this paper, we take up this suggestion, particularly in relation to digital healthcare technologies for patients with schizophrenia spectrum disorders. The introduction of digital technologies in psychiatry offers a broad spectrum of diagnostic and treatment options tailored to patients’ health needs and care goals. These technologies include wearable devices, smartphone applications for high-immersive virtual realities, smart homes, telepsychiatry, and messaging systems for patients in rural areas. The availability of these technologies could increase access to mental health services and improve the diagnosis of mental disorders. In this descriptive review, we systematize ethical concerns about digital technologies for mental health, with a particular focus on individuals suffering from schizophrenia. There are many unsolved dilemmas and conflicts of interest in the implementation of these technologies, such as (1) the lack of evidence on efficacy and impact on self-perception; (2) the lack of clear standards for the safety of their daily implementation; (3) unclear roles of technology and a shift in the responsibilities of all parties; (4) no guarantee of data confidentiality; and (5) the lack of a user-centered design that meets the particular needs of patients with schizophrenia. mHealth can improve care in psychiatry and make mental healthcare services more efficient and personalized while destigmatizing mental health disorders. To ensure that these technologies benefit people with mental health disorders, we need to heighten sensitivity to ethical issues among mental healthcare specialists, health policy makers, software developers, patients themselves, and their proxies. Additionally, we need to develop frameworks for furthering sustainable development in the digital technologies industry and for the responsible use of such technologies for patients with schizophrenia in the clinical setting. We suggest that digital technology in psychiatry, particularly for schizophrenia and other serious mental health disorders, should be integrated into treatment with professional supervision rather than used as a self-treatment tool.


2018 ◽  
Author(s):  
Amelia Fiske ◽  
Peter Henningsen ◽  
Alena Buyx

BACKGROUND Research in embodied artificial intelligence (AI) has increasing clinical relevance for therapeutic applications in mental health services. With innovations ranging from ‘virtual psychotherapists’ to social robots in dementia care and autism spectrum disorder, to robots for sexual disorders, artificially intelligent virtual and robotic agents are increasingly taking on high-level therapeutic interventions that used to be offered exclusively by highly trained, skilled health professionals. In order to enable responsible clinical implementation, the ethical and social implications of the increasing use of embodied AI in mental health need to be identified and addressed. OBJECTIVE This paper assesses the ethical and social implications of translating embodied AI applications into mental health care across the fields of psychiatry, psychology, and psychotherapy. Building on this analysis, it develops a set of preliminary recommendations on how to address ethical and social challenges in current and future applications of embodied AI. METHODS Based on a thematic literature search and established principles of medical ethics, an analysis of the ethical and social aspects of current embodied AI applications was conducted across the fields of psychiatry, psychology, and psychotherapy. To enable a comprehensive evaluation, the analysis was structured around three steps: assessment of potential benefits; analysis of overarching ethical issues and concerns; and discussion of the specific ethical and social issues of the interventions. RESULTS From an ethical perspective, important benefits of embodied AI applications in mental health include new modes of treatment, opportunities to engage hard-to-reach populations, better patient response, and freeing up time for physicians. Overarching ethical issues and concerns include: harm prevention and various questions of data ethics; a lack of guidance on the development of AI applications, their clinical integration, and the training of health professionals; ‘gaps’ in ethical and regulatory frameworks; and the potential for misuse, including using the technologies to replace established services, thereby potentially exacerbating existing health inequalities. Specific challenges identified and discussed in the application of embodied AI include: matters of risk assessment, referrals, and supervision; the need to respect and protect patient autonomy; the role of non-human therapy; transparency in the use of algorithms; and specific concerns regarding the long-term effects of these applications on understandings of illness and the human condition. CONCLUSIONS We argue that embodied AI is a promising approach across the field of mental health; however, further research is needed to address the broader ethical and societal concerns of these technologies and to negotiate best research and medical practices in innovative mental health care. We conclude by indicating areas of future research and developing recommendations for high-priority areas in need of concrete ethical guidance.


2017 ◽  
Vol 28 (3) ◽  
pp. 446-455 ◽  
Author(s):  
Genevieve Creighton ◽  
John L. Oliffe ◽  
Olivier Ferlatte ◽  
Joan Bottorff ◽  
Alex Broom ◽  
...  

As photovoice continues to grow as a method for researching health and illness, there is a need for rigorous discussions about ethical considerations. In this article, we discuss three key ethical issues arising from a recent photovoice study investigating men’s depression and suicide. The first issue, indelible images, details the complexity of consent and copyright when participant-produced photographs are shown at exhibitions and online, where they can be copied and disseminated beyond the original scope of the research. The second issue, representation, explores the ethical implications that can arise when participants and others have discordant views about the deceased. The third, vicarious trauma, offers insights into the potential for triggering mental health issues among researchers and viewers of the participant-produced photographs. Through a discussion of these ethical issues, we offer suggestions to guide the work of health researchers who use, or are considering the use of, photovoice.


2017 ◽  
Vol 41 (S1) ◽  
pp. S612-S612
Author(s):  
R. Nagpal ◽  
A.K. Mital

Mental health professionals have always yearned for an intervention that was restricted to them alone, was safe, and had commercial potential. Narco analysis, or chemical hypnosis, with or without the supervision of an anesthetist, presented such an opportunity in India’s largely poorly regulated medical practice. The turning point, however, was the unrestricted use of narco analysis for forensic purposes, often against the will of the recipient, which caught the attention of the judiciary. Professionals, in candid confessions, spoke of the tool replacing normal polite enquiries and of unnecessary voyeuristic information being ferreted out. Anecdotal evidence suggested police resorting to this tool without client consent or judicial permission. A series of judicial fiats, following searching enquiry into the statute, has left the practice in complete disarray. The legal issues have relegated the ethical issues of consent, the usefulness of “forced” information, and the aftermath of “forced” information to the back burner. Currently, the tool is regulated by the judiciary and selectively applied with consent. In the clinical setting, it is fast disappearing.
Disclosure of interest: The authors have not supplied their declaration of competing interest.

