Ethics of Artificial Intelligence
Latest Publications


TOTAL DOCUMENTS: 18 (five years: 18)

H-INDEX: 1 (five years: 1)

Published by Oxford University Press

ISBN: 9780190905033, 9780190905071

Author(s):  
Susan Schneider

How can we determine if AI is conscious? The chapter begins by illustrating that there are potentially very serious real-world costs to getting facts about AI consciousness wrong. It then proposes a provisional framework for investigating artificial consciousness that involves several tests or markers. One is the AI Consciousness Test, which challenges an AI with a series of increasingly demanding natural-language interactions. Another is based on Integrated Information Theory, developed by Giulio Tononi and others, and considers whether a machine has a high level of “integrated information.” A third is the Chip Test, a speculative scenario in which an individual’s brain is gradually replaced with durable microchips. If the individual continues to report having phenomenal consciousness throughout the replacement, the chapter argues, this could be a reason to believe that some machines could have consciousness.
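The “integrated information” mentioned above has a formal development in Tononi’s work. As a rough, purely illustrative proxy (not the actual Φ computation, which is considerably more involved), the sketch below compares how much a tiny two-unit toy system predicts its own next state as a whole versus through its parts taken in isolation; the update rule and the uniform prior are assumptions made only for the illustration.

```python
import numpy as np

def mutual_information(joint):
    """Mutual information in bits from a 2-D joint probability table p(x, y)."""
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px * py)[nz])).sum())

def step(a, b):
    """Hypothetical deterministic update rule for two coupled binary units."""
    return b, a ^ b

states = [(a, b) for a in (0, 1) for b in (0, 1)]
idx = {s: i for i, s in enumerate(states)}

# Joint distributions over (current, next) states, assuming a uniform prior.
joint_whole = np.zeros((4, 4))
joint_a = np.zeros((2, 2))  # p(a_t, a_{t+1}), first unit in isolation
joint_b = np.zeros((2, 2))  # p(b_t, b_{t+1}), second unit in isolation

for a, b in states:
    a2, b2 = step(a, b)
    joint_whole[idx[(a, b)], idx[(a2, b2)]] += 0.25
    joint_a[a, a2] += 0.25
    joint_b[b, b2] += 0.25

whole = mutual_information(joint_whole)
parts = mutual_information(joint_a) + mutual_information(joint_b)
print(f"whole system predicts its next state: {whole:.2f} bits")
print(f"parts in isolation:                   {parts:.2f} bits")
print(f"crude 'integration' surplus:          {whole - parts:.2f} bits")
```

In this toy case the whole carries two bits about its own future while each part alone carries none, which captures the intuition (and only the intuition) behind treating high integration as a candidate marker of consciousness.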


Author(s):  
Wendell Wallach, Shannon Vallor

Implementing sensitivity to norms, laws, and human values in computational systems has transitioned from philosophical reflection to an actual engineering challenge. The “value alignment” approach to dealing with superintelligent AIs tends to employ computationally friendly concepts such as utility functions, system goals, agent preferences, and value optimizers, which, this chapter argues, do not have intrinsic ethical significance. This chapter considers what may be lost in the excision of intrinsically ethical concepts from the project of engineering moral machines. It argues that human-level AI and superintelligent systems can be assured to be safe and beneficial only if they embody something like virtue or moral character and that virtue embodiment is a more appropriate long-term goal for AI safety research than value alignment.
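For concreteness, here is a minimal sketch of the “computationally friendly” framing the chapter critiques; the actions, scores, and utility function are all hypothetical. The point of the illustration is what it leaves out: a scalar being maximized, with nothing that encodes virtue or moral character.

```python
from typing import Callable, Dict

def choose_action(outcomes: Dict[str, float],
                  utility: Callable[[float], float]) -> str:
    """Return the action whose (hypothetical) outcome score maximizes utility."""
    return max(outcomes, key=lambda action: utility(outcomes[action]))

# Hypothetical actions scored on a single axis of expected benefit.
outcomes = {"assist_user": 0.8, "do_nothing": 0.0, "overreach": 1.2}

# A preference expressed as a utility function; the optimizer simply maximizes it.
print(choose_action(outcomes, utility=lambda score: score))  # -> "overreach"
```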


Author(s):  
Jessica Taylor, Eliezer Yudkowsky, Patrick LaVictoire, Andrew Critch

This chapter surveys eight research areas organized around one question: As learning systems become increasingly intelligent and autonomous, what design principles can best ensure that their behavior is aligned with the interests of the operators? The chapter focuses on two major technical obstacles to AI alignment: the challenge of specifying the right kind of objective functions and the challenge of designing AI systems that avoid unintended consequences and undesirable behavior even in cases where the objective function does not line up perfectly with the intentions of the designers. The questions surveyed include the following: How can we train reinforcement learners to take actions that are more amenable to meaningful assessment by intelligent overseers? What kinds of objective functions incentivize a system to “not have an overly large impact” or “not have many side effects”? The chapter discusses these questions, related work, and potential directions for future research, with the goal of highlighting relevant research topics in machine learning that appear tractable today.
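As one concrete illustration of the second obstacle (a sketch under assumptions of our own, not a proposal from the chapter), the snippet below adds a side-effect penalty to a task reward so that the objective itself disfavors plans with an overly large impact; how to define the `state_change` measure well is precisely the kind of open problem the chapter surveys.

```python
def impact_penalized_reward(task_reward: float,
                            state_change: float,
                            penalty_weight: float = 0.5) -> float:
    """Task reward minus a penalty proportional to how much the agent altered the world.

    `state_change` stands in for some (hypothetical) measure of deviation from a
    baseline state; picking that measure well is the hard research question.
    """
    return task_reward - penalty_weight * state_change

# Two hypothetical plans achieving the same task reward:
print(impact_penalized_reward(task_reward=1.0, state_change=0.1))  # low-impact plan: 0.95
print(impact_penalized_reward(task_reward=1.0, state_change=2.0))  # high-impact plan: 0.0
```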


Author(s):  
Peter Asaro

As the militaries of technologically advanced nations seek to apply increasingly sophisticated AI and automation to weapons technologies, a host of ethical, legal, social, and political questions arise. Central among these is whether it is ethical to delegate the decision to use lethal force to an autonomous system that is not under meaningful human control. Further questions arise as to who or what could or should be held responsible when lethal force is used improperly by such systems. This chapter argues that current autonomous weapons are not legal or moral agents that can be held morally responsible or legally accountable for their choices and actions, and that therefore humans need to maintain control over such weapons systems.


Author(s):  
Aaron James

What Keynes called “technological unemployment” is not yet upon us. Many agree that, if or when it is upon us, society will be forced to pay a basic income. This chapter argues that we shouldn’t wait. The chance of mass unemployment is credible. The outcome would be terrible. And a “precautionary basic income” is relatively cheap. So, much like buying a fire extinguisher for one’s home, we should take precautionary action before technological mass unemployment becomes likely. This is consistent with a cost-benefit analysis, when the benefits of business-as-usual are appropriately discounted. Precautionary action may well cost us nothing in the longer run. But even if it will cost something in forgone growth, the rich world shouldn’t worry, for three reasons: (1) the more we gain in GDP, the less it does for our happiness; (2) work for GDP is expensive in time lost; and (3) further GDP gains have less value than comparable security benefits to the less well-off.
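The fire-extinguisher logic can be made concrete with a toy expected-cost comparison; every figure below is hypothetical and chosen only to illustrate the structure of the argument, not taken from the chapter.

```python
# Hypothetical inputs to a precautionary cost-benefit comparison.
prob_mass_unemployment = 0.10   # assumed probability of the bad outcome
loss_if_unprepared = 100.0      # assumed social cost, arbitrary units
cost_of_precaution = 5.0        # assumed cost of a precautionary basic income

expected_loss_without_precaution = prob_mass_unemployment * loss_if_unprepared
print(expected_loss_without_precaution)  # 10.0: exceeds the 5.0 cost of precaution
```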


Author(s):  
Jean-François Bonnefon, Azim Shariff, Iyad Rahwan

This chapter discusses the limits of normative ethics in new moral domains linked to the development of AI. In these new domains, people can opt out of using a machine if they do not approve of the ethics that the machine is programmed to follow. In other words, even if normative ethics could determine the best moral programs, these programs would not be adopted (and thus have no positive impact) if they clashed with users’ preferences—a phenomenon that can be called “ethical opt-out.” The chapter then explores various ways in which the field of moral psychology can illuminate public perception of moral AI and inform the regulation of such AI. The chapter’s main focus is on self-driving cars, but it also explores the role of psychological science in the study of other moral algorithms.


Author(s):  
S. Matthew Liao

As AIs acquire greater capacities, the issue of whether AIs would acquire greater moral status becomes salient. This chapter sketches a theory of moral status and considers what kind of moral status an AI could have. Among other things, the chapter argues that AIs that are alive, conscious, or sentient, or that can feel pain, have desires, and have rational or moral agency, should have the same kind of moral status as entities that have the same kind of intrinsic properties. It also proposes that a sufficient condition for an AI to have human-level moral status and be a rightsholder is that it has the physical basis for moral agency. The chapter also considers what kind of rights a rightsholding AI could have and how AIs could have greater than human-level moral status.


Author(s):  
Nick Bostrom, Allan Dafoe, Carrick Flynn

This chapter considers the speculative prospect of superintelligent AI and its normative implications for governance and global policy. Machine superintelligence would be a transformative development that would present a host of political challenges and opportunities. The chapter identifies a set of distinctive features of this hypothetical policy context, from which it derives a correlative set of policy desiderata (efficiency, allocation, population, and process)—considerations that should be given more weight in long-term AI policy than in other policy contexts. It then describes a desiderata “vector field” showing the directional policy change recommended from a variety of possible normative baselines or policy positions. The focus on directional normative change should make the findings in this chapter relevant to a wide range of actors, although the development of concrete policy options that meet these abstractly formulated desiderata will require further work.


Author(s):  
S. Matthew Liao

This introduction outlines in section I.1 some of the key issues in the study of the ethics of artificial intelligence (AI) and proposes ways to take these discussions further. Section I.2 discusses key concepts in AI, machine learning, and deep learning. Section I.3 considers ethical issues that arise because current machine learning is data hungry; is vulnerable to bad data and bad algorithms; is a black box that has problems with interpretability, explainability, and trust; and lacks a moral sense. Section I.4 discusses ethical issues that arise because current machine learning systems may be working too well and human beings can be vulnerable in the presence of these intelligent systems. Section I.5 examines ethical issues arising out of the long-term impact of superintelligence such as how the values of a superintelligent AI can be aligned with human values. Section I.6 presents an overview of the essays in this volume.


Author(s):  
Eric Schwitzgebel, Mara Garza

This chapter proposes four policies of ethical design of human-grade AI. Two of the policies are precautionary. Given substantial uncertainty both about ethical theory and about the conditions under which AI would have conscious experiences, the chapter argues that we should be cautious in our handling of cases where different moral theories or different theories of consciousness would produce very different ethical recommendations. The other two policies concern respect and freedom. The chapter argues that if we design AI that deserves moral consideration equivalent to that of human beings, then AI should be designed with self-respect and with the freedom to explore values other than those we might impose.

