Ethics as a Service: A Pragmatic Operationalisation of AI Ethics

Author(s):  
Jessica Morley ◽  
Anat Elhalal ◽  
Francesca Garcia ◽  
Libby Kinsey ◽  
Jakob Mökander ◽  
...  

Abstract As the range of potential uses for Artificial Intelligence (AI), in particular machine learning (ML), has increased, so has awareness of the associated ethical issues. This increased awareness has led to the realisation that existing legislation and regulation provide insufficient protection to individuals, groups, society, and the environment from AI harms. In response to this realisation, there has been a proliferation of principle-based ethics codes, guidelines and frameworks. However, it has become increasingly clear that a significant gap exists between the theory of AI ethics principles and the practical design of AI systems. In previous work, we analysed whether it is possible to close this gap between the ‘what’ and the ‘how’ of AI ethics through the use of tools and methods designed to help AI developers, engineers, and designers translate principles into practice. We concluded that this method of closure is currently ineffective as almost all existing translational tools and methods are either too flexible (and thus vulnerable to ethics washing) or too strict (unresponsive to context). This raised the question: if, even with technical guidance, AI ethics is challenging to embed in the process of algorithmic design, is the entire pro-ethical design endeavour rendered futile? And, if not, then how can AI ethics be made useful for AI practitioners? This is the question we seek to address here by exploring why principles and technical translational tools are still needed, even if they are limited, and how these limitations can potentially be overcome by providing a theoretical grounding for a concept that has been termed ‘Ethics as a Service.’

This book explores the intertwining domains of artificial intelligence (AI) and ethics—two highly divergent fields which at first seem to have nothing to do with one another. AI is a collection of computational methods for studying human knowledge, learning, and behavior, including by building agents able to know, learn, and behave. Ethics is a body of human knowledge—far from completely understood—that helps agents (humans today, but perhaps eventually robots and other AIs) decide how they and others should behave. Despite these differences, however, the rapid development of AI technology today has led to a growing number of ethical issues in a multitude of fields, ranging from disciplines as far-reaching as international human rights law to issues as intimate as personal identity and sexuality. In fact, the number and variety of topics in this volume illustrate the breadth, diversity of content, and at times exasperating vagueness of the boundaries of “AI Ethics” as a domain of inquiry. Within this discourse, the book points to the capacity of sociotechnical systems that utilize data-driven algorithms to classify, to make decisions, and to control complex systems. Given the wide-reaching and often intimate impact these AI systems have on daily human lives, this volume attempts to address the increasingly complicated relations between humanity and artificial intelligence. It considers not only how humanity must conduct itself toward AI but also how AI must behave toward humanity.


Author(s):  
AJung Moon ◽  
Shalaleh Rismani ◽  
H. F. Machiel Van der Loos

Abstract Purpose of Review To summarize the set of roboethics issues that uniquely arise due to the corporeality and physical interaction modalities afforded by robots, irrespective of the degree of artificial intelligence present in the system. Recent Findings One of the recent trends in the discussion of the ethics of emerging technologies has been the treatment of roboethics issues as those of “embodied AI,” a subset of AI ethics. In contrast to AI, however, robots leverage humans’ natural tendency to be influenced by their physical environment. Recent work in human-robot interaction highlights the impact that a robot’s presence, capacity to touch, and ability to move in our physical environment have on people, helping to articulate the ethical issues particular to the design of interactive robotic systems. Summary The corporeality of interactive robots poses a unique set of ethical challenges. These issues should be considered in the design of such systems irrespective of, and in addition to, the ethics of the artificial intelligence implemented in them.


2021 ◽  
Vol 90 (2) ◽  
pp. e513
Author(s):  
Tomasz Piotrowski ◽  
Joanna Kazmierska ◽  
Mirosława Mocydlarz-Adamcewicz ◽  
Adam Ryczkowski

Background. This paper evaluates the status of reporting information related to the usage and ethical issues of artificial intelligence (AI) procedures in clinical trial (CT) papers focussed on radiology issues, as well as in other (non-trial) original radiology articles (OA). Material and Methods. The evaluation was performed by three independent observers who were, respectively, a physicist, a physician, and a computer scientist. The analysis was performed for two groups of publications, i.e., for CT and OA. Each group included 30 papers published from 2018 to 2020, before the guidelines proposed by Liu et al. (Nat Med. 2020; 26:1364-1374). The set of items used to catalogue and verify the ethical status of the AI reporting was developed using the above-mentioned guidelines. Results. Most of the reviewed studies clearly stated their use of AI methods and, more importantly, almost all tried to address relevant clinical questions. Although patient inclusion and exclusion criteria were presented in most of the studies, there was a widespread lack of rigorous descriptions of the study design beyond the detailed explanation of the AI approach itself. Few of the chosen studies provided information about anonymization of data and the process of secure data sharing. Only a few studies explored the patterns of incorrect predictions made by the proposed AI tools and their possible reasons. Conclusion. The results of this review support the implementation of uniform guidelines for designing and reporting studies that use AI tools. Such guidelines help to design robust, transparent and reproducible tools for use in real life.
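
As a rough illustration of how such an item-based evaluation might be tallied across groups of papers, the sketch below scores papers against a simple reporting checklist. The item names and example data are hypothetical placeholders, not the actual catalogue the authors derived from the Liu et al. guidelines.

```python
# Minimal sketch: tallying reporting-checklist items across reviewed papers.
# Item names and example data are hypothetical, not the study's real catalogue.
from collections import Counter

CHECKLIST = [
    "states_use_of_AI",
    "addresses_clinical_question",
    "reports_inclusion_exclusion_criteria",
    "describes_study_design",
    "reports_data_anonymization",
    "describes_secure_data_sharing",
    "analyses_incorrect_predictions",
]

def score_paper(reported_items: set[str]) -> dict[str, bool]:
    """Mark each checklist item as reported (True) or missing (False)."""
    return {item: item in reported_items for item in CHECKLIST}

def summarise(papers: list[set[str]]) -> Counter:
    """Count how many papers in a group report each item."""
    counts = Counter()
    for reported in papers:
        counts.update(item for item in CHECKLIST if item in reported)
    return counts

if __name__ == "__main__":
    # Two invented clinical-trial papers and the items they report.
    clinical_trials = [
        {"states_use_of_AI", "addresses_clinical_question"},
        {"states_use_of_AI", "reports_inclusion_exclusion_criteria"},
    ]
    print(score_paper(clinical_trials[0]))
    print(summarise(clinical_trials))
```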


Author(s):  
Nandini Sen

This chapter aims to create new knowledge regarding artificial intelligence (AI) ethics and related subjects by reviewing the ethical relationship between human beings and AI/robotics and linking it to the moral fabric and ethical issues of AI as portrayed in fiction and film. It carefully analyses how a human being may come to love a robot and vice versa. Here, fiction and film are not just about technology but about feelings and the nature of the bond between AIs and the human race. Ordinary human beings distrust and then start to like AIs. However, if an AI goes rogue, as seen in much fiction and many films, it is taken down to avoid the destruction of human beings. Scientists like Turing championed the idea of robot/AI feelings. Fictional and cinematic AIs are developed to keenly watch and comprehend humans; these actions are so close to empathy that they amount to consciousness and an emotional quotient.


2020 ◽  
Vol 31 (2) ◽  
pp. 74-87 ◽  
Author(s):  
Keng Siau ◽  
Weiyu Wang

Artificial intelligence (AI)-based technology has achieved many great things, such as facial recognition, medical diagnosis, and self-driving cars. AI promises enormous benefits for economic growth, social development, and the improvement of human well-being and safety. However, the low level of explainability, data biases, data security, data privacy, and ethical problems of AI-based technology pose significant risks for users, developers, humanity, and societies. As AI advances, one critical issue is how to address the ethical and moral challenges associated with it. Even though the concept of “machine ethics” was proposed around 2006, AI ethics is still in its infancy. AI ethics is the field devoted to the study of ethical issues in AI. To address AI ethics, one needs to consider both the ethics of AI and how to build ethical AI. The ethics of AI studies the ethical principles, rules, guidelines, policies, and regulations that are related to AI. Ethical AI is AI that performs and behaves ethically. One must recognize and understand the potential ethical and moral issues that may be caused by AI in order to formulate the necessary ethical principles, rules, guidelines, policies, and regulations for AI (i.e., Ethics of AI). With the appropriate ethics of AI, one can then build AI that exhibits ethical behavior (i.e., Ethical AI). This paper discusses AI ethics by looking at the ethics of AI and ethical AI. What are the perceived ethical and moral issues with AI? What are the general and common ethical principles, rules, guidelines, policies, and regulations that can resolve or at least attenuate these ethical and moral issues with AI? What are some of the necessary features and characteristics of an ethical AI? How can the ethics of AI be adhered to in order to build ethical AI?


AI & Society ◽  
2021 ◽  
Author(s):  
Bernd Carsten Stahl ◽  
Josephina Antoniou ◽  
Mark Ryan ◽  
Kevin Macnish ◽  
Tilimbe Jiya

Abstract The ethics of artificial intelligence (AI) is a widely discussed topic. There are numerous initiatives that aim to develop principles and guidance to ensure that the development, deployment and use of AI are ethically acceptable. What is generally unclear is how organisations that make use of AI understand and address these ethical issues in practice. While there is an abundance of conceptual work on AI ethics, empirical insights are rare and often anecdotal. This paper fills a gap in our current understanding of how organisations deal with AI ethics by presenting empirical findings collected from a set of ten case studies and providing an account of the cross-case analysis. The paper reviews the discussion of the ethical issues of AI as well as the mitigation strategies that have been proposed in the literature. Using this background, the cross-case analysis categorises the organisational responses that were observed in practice. The discussion shows that organisations are highly aware of the AI ethics debate and keen to engage with ethical issues proactively. However, they make use of only a relatively small subset of the mitigation strategies proposed in the literature. These insights are of importance to organisations deploying or using AI and to the academic AI ethics debate, but are perhaps most valuable to policymakers involved in the current debate about suitable policy developments to address the ethical issues raised by AI.


2021 ◽  
Vol 27 (2) ◽  
Author(s):  
Mark Ryan ◽  
Josephina Antoniou ◽  
Laurence Brooks ◽  
Tilimbe Jiya ◽  
Kevin Macnish ◽  
...  

Abstract This study investigates the ethical use of Big Data and Artificial Intelligence (AI) technologies (BD + AI) using an empirical approach. The paper categorises the current literature and presents a multi-case study of ‘on-the-ground’ ethical issues, using qualitative tools to analyse findings from ten targeted case studies drawn from a range of domains. The analysis coalesces the singular ethical issues identified in the literature into clusters, offering a comparison with the classification proposed in the literature. The results show that, despite the variety of social domains, fields, and applications of AI, there is overlap and correlation between the organisations’ ethical concerns. This more detailed understanding of ethics in BD + AI is required to ensure that the multitude of suggested ways of addressing them can be targeted and can succeed in mitigating the pertinent ethical issues that are often discussed in the literature.
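
The sketch below shows, in a very simplified form, the kind of coalescing step described above: singular ethical issues reported per case are mapped into broader clusters so that cases can be compared against a classification. The cluster map and case data are invented for illustration and do not reflect the study's actual coding scheme.

```python
# Minimal sketch: grouping singular ethical issues from case studies into
# broader clusters for cross-case comparison. The mapping is hypothetical.
from collections import defaultdict

CLUSTER_MAP = {
    "informed consent": "privacy and data protection",
    "data anonymisation": "privacy and data protection",
    "algorithmic bias": "fairness and discrimination",
    "lack of explainability": "transparency",
    "job displacement": "economic and societal impact",
}

def coalesce(case_issues: dict[str, list[str]]) -> dict[str, set[str]]:
    """Map each case study's reported issues into clusters of concern."""
    clusters: dict[str, set[str]] = defaultdict(set)
    for case, issues in case_issues.items():
        for issue in issues:
            clusters[CLUSTER_MAP.get(issue, "unclassified")].add(case)
    return clusters

if __name__ == "__main__":
    # Invented example: which issues two case organisations reported.
    cases = {
        "case_01": ["algorithmic bias", "informed consent"],
        "case_02": ["lack of explainability", "algorithmic bias"],
    }
    for cluster, seen_in in coalesce(cases).items():
        print(cluster, sorted(seen_in))
```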


2021 ◽  
pp. 83-88
Author(s):  
Guglielmo Tamburrini

Abstract In the fight against pandemics and the climate crisis, the zero hunger challenge, the preservation of international peace and stability, and the protection of democratic participation in political decision-making, AI has increasing – and often double-edged – roles to play in connection with ethical issues that have a genuinely global dimension. Governing this ambivalence of AI in these contexts looms large on both the AI ethics and digital humanism agendas.


Author(s):  
Bryant Walker Smith

This chapter highlights key ethical issues in the use of artificial intelligence in transport by using automated driving as an example. These issues include the tension between technological solutions and policy solutions; the consequences of safety expectations; the complex choice between human authority and computer authority; and power dynamics among individuals, governments, and companies. In 2017 and 2018, the U.S. Congress considered automated driving legislation that was generally supported by many of the larger automated-driving developers. However, this automated-driving legislation failed to pass because of a lack of trust in technologies and institutions. Trustworthiness is much more of an ethical question. Automated vehicles will not be driven by individuals or even by computers; they will be driven by companies acting through their human and machine agents. An essential issue for this field—and for artificial intelligence generally—is how the companies that develop and deploy these technologies should earn people’s trust.


2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Kathleen Murphy ◽  
Erica Di Ruggiero ◽  
Ross Upshur ◽  
Donald J. Willison ◽  
Neha Malhotra ◽  
...  

Abstract Background Artificial intelligence (AI) has been described as the “fourth industrial revolution,” with transformative and global implications, including in healthcare, public health, and global health. AI approaches hold promise for improving health systems worldwide, as well as individual and population health outcomes. While AI may have potential for advancing health equity within and between countries, we must consider the ethical implications of its deployment in order to mitigate its potential harms, particularly for the most vulnerable. This scoping review addresses the following question: what ethical issues have been identified in relation to AI in the field of health, including from a global health perspective? Methods Eight electronic databases were searched for peer-reviewed and grey literature published before April 2018 using the concepts of health, ethics, and AI, and their related terms. Records were independently screened by two reviewers and were included if they reported on AI in relation to health and ethics and were written in English. Data were charted on a piloted data-charting form, and a descriptive and thematic analysis was performed. Results Upon reviewing 12,722 articles, 103 met the predetermined inclusion criteria. The literature was primarily focused on the ethics of AI in health care, particularly on carer robots, diagnostics, and precision medicine, but was largely silent on the ethics of AI in public and population health. The literature highlighted a number of common ethical concerns related to privacy, trust, accountability and responsibility, and bias. Largely missing from the literature was the ethics of AI in global health, particularly in the context of low- and middle-income countries (LMICs). Conclusions The ethical issues surrounding AI in the field of health are both vast and complex. While AI holds the potential to improve health and health systems, our analysis suggests that its introduction should be approached with cautious optimism. The dearth of literature on the ethics of AI within LMICs, as well as in public health, also points to a critical need for further research into the ethical implications of AI within both global and public health, to ensure that its development and implementation are ethical for everyone, everywhere.
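
A minimal sketch of the dual-reviewer screening logic described in the Methods is given below; the record fields and the handling of disagreements are simplifying assumptions for illustration, not the authors' actual screening protocol.

```python
# Minimal sketch: dual-reviewer screening of records for a scoping review.
# Fields and the disagreement step are assumptions, not the real protocol.
from dataclasses import dataclass

@dataclass
class Record:
    title: str
    language: str
    covers_health: bool
    covers_ethics: bool
    covers_ai: bool

def meets_criteria(record: Record) -> bool:
    """Include records on AI in relation to health and ethics, in English."""
    return (
        record.language == "English"
        and record.covers_health
        and record.covers_ethics
        and record.covers_ai
    )

def screen(reviewer_a: bool, reviewer_b: bool) -> str:
    """Combine two independent screening decisions."""
    if reviewer_a and reviewer_b:
        return "include"
    if not reviewer_a and not reviewer_b:
        return "exclude"
    return "discuss"  # disagreements resolved by consensus in practice

if __name__ == "__main__":
    rec = Record("AI ethics in primary care", "English", True, True, True)
    print(meets_criteria(rec))   # one reviewer's criteria check
    print(screen(True, False))   # disagreement -> "discuss"
```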

