We’re Missing a Moral Framework of Justice in Artificial Intelligence

Author(s): Matthew Le Bui and Safiya Umoja Noble

This chapter assesses the concepts of fairness and bias in artificial intelligence research and interventions. Considering the explosive growth of, and investment in, high-profile AI fairness and ethics interventions within both the academy and industry, alongside mounting calls for the interrogation, regulation, and, in some cases, dismantling and prohibition of AI, it questions the extent to which such remedies can address the problems they are designed to solve. Indeed, many community organizations are mounting responses to, and challenges against, AI used in predictive technologies, including facial-recognition software and biometric systems, with increasing success. Ultimately, the canon of AI ethics must interrogate and deeply engage with the intersectional power structures that work to further consolidate capital in the hands of elites and that undergird digital informational systems of inequality: there is no neutral or objective state through which the flows and mechanics of data can be articulated as unbiased or fair.

Subject: Exports of surveillance technology.

Significance: Surveillance tools are increasingly driven by underlying artificial intelligence (AI) systems, which (opaquely) process big data to predict events and outcomes. This is both the risk and the benefit of AI: it saves time while potentially skewing the data, unbeknown to, or at the direction of, its operators, creating opportunities for misuse. With Chinese and Western firms competing for the same security projects, the differences in the underlying ethics of vendors and their systems are under a spotlight.

Impacts: Facial recognition software will be the most publicly divisive surveillance technology. Ethical standards for surveillance systems will emerge gradually. However, not all Western vendors will adopt them.


This book explores the intertwining domains of artificial intelligence (AI) and ethics, two highly divergent fields that at first seem to have nothing to do with one another. AI is a collection of computational methods for studying human knowledge, learning, and behavior, including by building agents able to know, learn, and behave. Ethics is a body of human knowledge, far from completely understood, that helps agents (humans today, but perhaps eventually robots and other AIs) decide how they and others should behave. Despite these differences, the rapid development of AI technology has led to a growing number of ethical issues in a multitude of fields, ranging from disciplines as far-reaching as international human rights law to matters as intimate as personal identity and sexuality. In fact, the number and variety of topics in this volume illustrate the breadth, the diversity of content, and the at times exasperating vagueness of the boundaries of “AI Ethics” as a domain of inquiry. Within this discourse, the book points to the capacity of sociotechnical systems that use data-driven algorithms to classify, to make decisions, and to control complex systems. Given the wide-reaching and often intimate impact these AI systems have on daily human lives, this volume attempts to address the increasingly complicated relations between humanity and artificial intelligence. It considers not only how humanity must conduct itself toward AI but also how AI must behave toward humanity.


Author(s): Jessica Morley, Anat Elhalal, Francesca Garcia, Libby Kinsey, Jakob Mökander, ...

Abstract: As the range of potential uses for Artificial Intelligence (AI), in particular machine learning (ML), has increased, so has awareness of the associated ethical issues. This increased awareness has led to the realisation that existing legislation and regulation provide insufficient protection to individuals, groups, society, and the environment from AI harms. In response to this realisation, there has been a proliferation of principle-based ethics codes, guidelines, and frameworks. However, it has become increasingly clear that a significant gap exists between the theory of AI ethics principles and the practical design of AI systems. In previous work, we analysed whether it is possible to close this gap between the ‘what’ and the ‘how’ of AI ethics through the use of tools and methods designed to help AI developers, engineers, and designers translate principles into practice. We concluded that this method of closure is currently ineffective, as almost all existing translational tools and methods are either too flexible (and thus vulnerable to ethics washing) or too strict (unresponsive to context). This raised the question: if, even with technical guidance, AI ethics is challenging to embed in the process of algorithmic design, is the entire pro-ethical design endeavour rendered futile? And, if not, how can AI ethics be made useful for AI practitioners? This is the question we seek to address here by exploring why principles and technical translational tools are still needed even if they are limited, and how these limitations can potentially be overcome by providing a theoretical grounding for the concept that has been termed ‘Ethics as a Service.’


Author(s): AJung Moon, Shalaleh Rismani, and H. F. Machiel Van der Loos

Abstract

Purpose of Review: To summarize the set of roboethics issues that uniquely arise due to the corporeality and physical interaction modalities afforded by robots, irrespective of the degree of artificial intelligence present in the system.

Recent Findings: One of the recent trends in the discussion of the ethics of emerging technologies has been the treatment of roboethics issues as those of “embodied AI,” a subset of AI ethics. In contrast to AI, however, robots leverage humans’ natural tendency to be influenced by the physical environment. Recent work in human-robot interaction highlights the impact that a robot’s presence, capacity to touch, and ability to move in our physical environment have on people, helping to articulate the ethical issues particular to the design of interactive robotic systems.

Summary: The corporeality of interactive robots poses a unique set of ethical challenges. These issues should be considered in the design of such systems, irrespective of, and in addition to, the ethics of the artificial intelligence implemented in them.


