Legal issues of Data-driven Administration

2021 ◽  
Vol 65 ◽  
pp. 107-135
Author(s):  
Jiweon Seon
10.2196/24668 ◽  
2021 ◽  
Vol 8 (6) ◽  
pp. e24668
Author(s):  
Piers Gooding ◽  
Timothy Kariotis

Background: Uncertainty surrounds the ethical and legal implications of algorithmic and data-driven technologies in the mental health context, including technologies characterized as artificial intelligence, machine learning, deep learning, and other forms of automation.

Objective: This study aims to survey empirical scholarly literature on the application of algorithmic and data-driven technologies in mental health initiatives to identify the legal and ethical issues that have been raised.

Methods: We searched for peer-reviewed empirical studies on the application of algorithmic technologies in mental health care in the Scopus, Embase, and Association for Computing Machinery databases. A total of 1078 relevant peer-reviewed applied studies were identified and narrowed to 132 empirical research papers for review based on the selection criteria. Conventional content analysis was undertaken to address our aims, supplemented by a keyword-in-context analysis.

Results: We grouped the findings into five categories of technology: social media (53/132, 40.2%), smartphones (37/132, 28.0%), sensing technology (20/132, 15.2%), chatbots (5/132, 3.8%), and miscellaneous (17/132, 12.9%). Most initiatives were directed toward detection and diagnosis. Most papers discussed privacy, mainly in terms of respecting the privacy of research participants; there was relatively little discussion of privacy in context. A small number of studies discussed ethics directly (10/132, 7.6%) and indirectly (10/132, 7.6%). Legal issues were not substantively discussed in any study, although some were noted in passing (7/132, 5.3%), such as the rights of user subjects and privacy law compliance.

Conclusions: Ethical and legal issues tend not to be explicitly addressed in empirical studies on algorithmic and data-driven technologies in mental health initiatives. Scholars may have considered ethical or legal matters at the ethics committee or institutional review board stage, but if so, this consideration seldom appears in any detail in the published applied research. The format of peer-reviewed papers reporting applied research in this field may itself preclude a substantial focus on ethics and law. Regardless, we identified several concerns, including the near-complete lack of involvement of mental health service users, the scant consideration of algorithmic accountability, and the potential for overmedicalization and techno-solutionism. Most papers were published in the computer science field at the pilot or exploratory stage. Thus, these technologies could be appropriated into practice in rarely acknowledged ways, with serious legal and ethical implications.
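The Methods describe a keyword-in-context (KWIC) analysis supplementing the conventional content analysis. As a rough illustration of what such a pass involves, the minimal Python sketch below extracts each occurrence of a keyword together with a fixed window of surrounding tokens; the tokenizer, window size, and sample sentence are illustrative assumptions, not the authors' actual pipeline.

import re
from typing import Iterator, Tuple


def kwic(text: str, keyword: str, window: int = 5) -> Iterator[Tuple[str, str, str]]:
    """Yield (left context, keyword, right context) for every occurrence of keyword."""
    tokens = re.findall(r"\w+|\S", text)  # words or isolated punctuation marks
    target = keyword.lower()
    for i, token in enumerate(tokens):
        if token.lower() == target:
            left = " ".join(tokens[max(0, i - window):i])
            right = " ".join(tokens[i + 1:i + 1 + window])
            yield left, token, right


if __name__ == "__main__":
    # Hypothetical abstract sentence, used purely for demonstration.
    sample = (
        "Most papers discussed privacy, mainly in terms of respecting "
        "the privacy of research participants."
    )
    for left, kw, right in kwic(sample, "privacy"):
        print(f"{left:>40} | {kw} | {right}")

Aligning each keyword hit in a concordance-style column like this is what lets reviewers judge how a term such as "privacy" or "consent" is actually being used across a corpus of abstracts, rather than merely counting its frequency.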


2020 ◽  
Author(s):  
Piers Gooding ◽  
Timothy Kariotis

BACKGROUND: Uncertainty surrounds the ethical and legal implications of algorithmic and data-driven technologies in the mental health context, including technologies variously characterised as artificial intelligence, machine learning, and deep learning.

OBJECTIVE: We aimed to survey scholarly literature on algorithmic and data-driven technologies used in online mental health interventions, with a view to identifying the legal and ethical issues raised.

METHODS: We searched Scopus, Embase, and ACM for peer-reviewed literature about algorithmic decision systems used in online mental healthcare platforms. A total of 1078 relevant peer-reviewed research studies were identified and narrowed to 132 empirical research papers for review based on the selection criteria. We thematically analysed the papers to address our aims.

RESULTS: We grouped the findings into five categories of technology: social media (n=53), smartphones (n=37), sensing technology (n=20), chatbots (n=5), and other/miscellaneous (n=17). Most initiatives were directed toward "detection and diagnosis". Most papers discussed privacy, principally in terms of respecting research participants' privacy, with relatively little discussion of privacy in context. A small number of studies discussed ethics as an explicit category of concern (n=19). Legal issues were not substantively discussed in any study, though seven studies noted some legal issues in passing, such as the rights of user-subjects and compliance with relevant privacy and data protection law.

CONCLUSIONS: Ethical issues tend not to be explicitly addressed in the broad scholarship on algorithmic and data-driven technologies in online mental health initiatives, and legal issues even less so. Scholars may have considered ethical or legal matters at the ethics committee/institutional review board stage of their empirical research, but this consideration seldom appears in published material in any detail. We identify several concerns, including the near-complete lack of involvement of service users, the scant consideration of 'algorithmic accountability', and the potential for over-medicalisation and techno-solutionism. Most papers were published in the computer science field at a pilot or exploratory stage. Thus, these technologies could be appropriated into practice in rarely acknowledged ways, with serious legal and ethical implications.


1975 ◽  
Vol 20 (6) ◽  
pp. 505-506
Author(s):  
HAROLD GRAFF

1988 ◽  
Vol 33 (9) ◽  
pp. 833-833
Author(s):  
No authorship indicated
