Ethical Governance and Responsibility in Digital Medicine: The Case of Artificial Intelligence

2021 ◽  
pp. 169-190
Author(s):  
Jérôme Béranger
2021 ◽  
pp. 174701612110227
Author(s):  
Christine Hine

There has been considerable debate around the ethical issues raised by data-driven technologies such as artificial intelligence. Ethical principles for the field have focused on the need to ensure that such technologies are used for good rather than harm, that they enshrine principles of social justice and fairness, that they protect privacy, respect human autonomy and are open to scrutiny. While development of such principles is well advanced, there is as yet little consensus on the mechanisms appropriate for ethical governance in this field. This paper examines the prospects for the university ethics committee to undertake effective review of research conducted on data-driven technologies in the university context. Challenges identified include: the relatively narrow focus of university-based ethical review on the human subjects research process and lack of capacity to anticipate downstream impacts; the difficulties of accommodating the complex interplay of academic and commercial interests in the field; and the need to ensure appropriate expertise from both specialists and lay voices. Overall, the challenges identified sharpen appreciation of the need to encourage a joined-up and effective system of ethical governance that fosters an ethical culture rather than replacing ethical reflection with bureaucracy.


Author(s):  
Diego Araiza-Garaygordobil ◽  
Antonio Jordán-Ríos ◽  
Carlos R. Sierra-Fernández ◽  
Luis E. Juárez-Orozco

2020 ◽  
Author(s):  
Giovanni Briganti ◽  
Olivier Le Moine

Artificial intelligence-powered medical technologies are rapidly evolving into applicable solutions for clinical practice. Deep learning algorithms can deal with the increasing amounts of data provided by wearables, smartphones and other mobile monitoring sensors across many areas of medicine. Currently, only very specific settings in clinical practice benefit from the application of artificial intelligence, such as the detection of atrial fibrillation, epileptic seizures and hypoglycemia, or the diagnosis of disease based on histopathological examination or medical imaging. The implementation of augmented medicine is long awaited by patients because it allows for greater autonomy and more personalized treatment; however, it is met with resistance from physicians who were not prepared for such an evolution of clinical practice. This phenomenon also creates the need to validate these modern tools through traditional clinical trials, to debate an update of the medical curriculum in light of digital medicine, and to weigh the ethical considerations raised by continuous connected monitoring. The aim of this paper is to discuss recent scientific literature and provide a perspective on the benefits, future opportunities and risks of established artificial intelligence applications in clinical practice for physicians, healthcare institutions, medical education and bioethics.
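To make the kind of detection task mentioned above concrete, the following is a minimal sketch of classifying atrial fibrillation from beat-to-beat (RR) interval irregularity, the sort of signal a wearable provides. It is an illustration only: the synthetic data, the two features, and the simple classifier are assumptions for exposition, not any of the published detectors the abstract refers to.

```python
# Minimal sketch: atrial fibrillation vs. sinus rhythm from RR-interval
# irregularity. Synthetic data and feature choices are illustrative
# assumptions, not a published detector.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def simulate_rr(afib: bool, n_beats: int = 60) -> np.ndarray:
    """Synthetic RR intervals in seconds: AF rhythms are more irregular."""
    base = rng.uniform(0.6, 1.0)            # mean beat-to-beat interval
    jitter = 0.25 if afib else 0.03         # AF => high beat variability
    return base + rng.normal(0.0, jitter, n_beats)

def features(rr: np.ndarray) -> np.ndarray:
    """Two classic irregularity summaries of an RR series."""
    diffs = np.diff(rr)
    rmssd = np.sqrt(np.mean(diffs ** 2))    # root mean square of successive differences
    cv = np.std(rr) / np.mean(rr)           # coefficient of variation
    return np.array([rmssd, cv])

labels = rng.integers(0, 2, 500)            # 0 = sinus rhythm, 1 = AF
X = np.stack([features(simulate_rr(bool(y))) for y in labels])

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
print(f"AUC: {roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]):.2f}")
```

Real systems replace the hand-crafted features with deep networks trained on raw sensor traces, but the pipeline shape (signal, features or learned representation, classifier, validation metric) is the same.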


2021 ◽  
pp. medethics-2020-106905
Author(s):  
Soogeun Samuel Lee

The UK Government’s Code of Conduct for data-driven health and care technologies, specifically artificial intelligence (AI)-driven technologies, comprises 10 principles that outline a gold standard of ethical conduct for AI developers and implementers within the National Health Service. Given the importance of trust in medicine, in this essay I aim to evaluate the conceptualisation of trust within this piece of ethical governance. I examine the Code of Conduct, specifically Principle 7, and extract two positions: a principle of rationally justified trust, which holds that trust should be grounded in sound epistemological bases, and a principle of value-based trust, which views trust in an all-things-considered manner. I argue that rationally justified trust is largely infeasible in trusting AI because of AI’s complexity and inexplicability. By contrast, I show that value-based trust is more feasible, as it is the way individuals intuitively trust, and that it better complies with Principle 1. I therefore conclude by suggesting that the Code of Conduct endorse the principle of value-based trust more explicitly.


2018 ◽  
Vol 1 (1) ◽  
Author(s):  
Alexander L. Fogel ◽  
Joseph C. Kvedar

First Monday ◽  
2021 ◽  
Author(s):  
Gry Hasselbalch

This article makes a case for a data interest analysis of artificial intelligence (AI) that explores how different interests in data are empowered or disempowered by design. The article uses the EU High-Level Expert Group on AI’s Ethics Guidelines for Trustworthy AI as an applied ethics approach to data interests within a human-centric ethical governance framework, and accordingly suggests ethical questions that can help resolve conflicts between data interests in AI design.


2020 ◽  
Vol 3 (1) ◽  
Author(s):  
Sujay Kakarmath ◽  
Andre Esteva ◽  
Rima Arnaout ◽  
Hugh Harvey ◽  
Santosh Kumar ◽  
...  

Since its inception in 2017, npj Digital Medicine has attracted a disproportionate number of manuscripts reporting on uses of artificial intelligence. This field has matured rapidly in the past several years. There was initial fascination with the algorithms themselves (machine learning, deep learning, convolutional neural networks) and with the use of these algorithms to make predictions that often surpassed prevailing benchmarks. As the discipline has matured, individuals have called attention to aberrancies in the output of these algorithms. In particular, criticisms have been widely circulated that algorithmically developed models may have limited generalizability due to overfitting to the training data and may systematically perpetuate various forms of bias inherent in the training data, including bias related to race, gender, age, and health state or fitness level (Challen et al. BMJ Qual. Saf. 28:231–237, 2019; O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, Broadway Books, 2016). Given our interest in publishing the highest quality papers and the growing volume of submissions using AI algorithms, we offer a list of criteria that authors should consider before submitting papers to npj Digital Medicine.
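The generalizability and bias criticisms above translate into a concrete reporting practice: evaluate a model on data from an external site and on demographic subgroups, not just as one pooled score. The sketch below illustrates that audit under stated assumptions; the column names, the synthetic cohorts, and the model are hypothetical, not criteria prescribed by the journal.

```python
# Minimal sketch of a subgroup and external-site audit: report
# discrimination per site and per demographic subgroup rather than a
# single pooled score. Data, columns, and model are illustrative.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)

def make_cohort(n: int, site: str) -> pd.DataFrame:
    """Synthetic cohort; the outcome depends on the features plus noise."""
    df = pd.DataFrame({
        "age": rng.normal(60, 12, n),
        "biomarker": rng.normal(1.0, 0.3, n),
        "sex": rng.choice(["F", "M"], n),
        "site": site,
    })
    logits = 0.05 * (df["age"] - 60) + 2.0 * (df["biomarker"] - 1.0)
    df["outcome"] = (logits + rng.normal(0, 1, n) > 0).astype(int)
    return df

train = make_cohort(2000, site="hospital_A")    # development data
external = make_cohort(500, site="hospital_B")  # external validation site

features = ["age", "biomarker"]
model = RandomForestClassifier(random_state=0).fit(train[features], train["outcome"])

# A pooled external score can hide subgroup gaps, so report both.
for name, group in [("external (all)", external),
                    ("external, sex=F", external[external["sex"] == "F"]),
                    ("external, sex=M", external[external["sex"] == "M"])]:
    auc = roc_auc_score(group["outcome"], model.predict_proba(group[features])[:, 1])
    print(f"{name:18s} AUC = {auc:.2f}")
```

In a real submission the external cohort would come from a different institution or time period, and a marked drop in any row of this report would flag exactly the overfitting or bias problems the editorial describes.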


Author(s):  
Alan F. T. Winfield ◽  
Marina Jirotka

This paper explores the question of ethical governance for robotics and artificial intelligence (AI) systems. We outline a roadmap—which links a number of elements, including ethics, standards, regulation, responsible research and innovation, and public engagement—as a framework to guide ethical governance in robotics and AI. We argue that ethical governance is essential to building public trust in robotics and AI, and conclude by proposing five pillars of good ethical governance. This article is part of the theme issue ‘Governing artificial intelligence: ethical, legal, and technical opportunities and challenges’.


2020 ◽  
Author(s):  
Stuart McLennan ◽  
Andrea Meyer ◽  
Korbinian Schreyer ◽  
Alena Buyx

BACKGROUND Medical students will likely be the group most affected by the envisaged move to artificial intelligence (AI)-driven digital medicine, and there is a need to better understand their knowledge and views regarding the use of AI technology in medicine. OBJECTIVE This study aimed to examine German medical students' knowledge of and views about AI in medicine. METHODS A cross-sectional survey was conducted in October 2019 with all new medical students at the Ludwig Maximilian University of Munich and the Technical University of Munich, representing approximately 10% of all new medical students in Germany. RESULTS A total of 844 medical students participated (91.9% response rate). Two thirds (64.4%) did not feel well informed about AI in medicine. Just over half (57.4%) of students thought that AI has useful applications in medicine, particularly in drug research and development (82.5%), and less so for clinical uses. Male students were more likely to agree with the advantages of AI, and female participants were more likely to be concerned about the disadvantages. The vast majority of students thought that when AI is used in medicine, it is important that there are legal rules regarding liability (97%) and oversight mechanisms (93.7%), that physicians are consulted before implementation (96.8%), that developers are able to explain the details of the algorithm to them (95.6%), that algorithms use representative data (93.9%), and that patients are always informed when AI is used (93.5%). CONCLUSIONS Medical schools and continuing medical education organisers need to promptly develop programs to ensure that clinicians are able to fully realize the potential of AI technology. It is also important that legal rules and oversight are implemented so that future clinicians are not faced with a workplace in which important issues around responsibility are not clearly regulated.

