What Do We Call “Thinking” in the Age of Artificial Intelligence and Moral Machines?

2021 ◽  
pp. 202-213
Author(s):  
Anne Alombert


2021 ◽  
pp. 203-216
Author(s):  
Nicholas G. Evans

While the majority of neuroscience research promises novel therapies for treating dementia, post-traumatic stress disorder, and other conditions, a lesser-known branch of neuroscientific research informs the construction of artificial intelligence inspired by human neurophysiology. For those concerned with the normative implications of autonomous weapons systems (AWS), however, a tension arises between the primary attraction of AWS, their theoretical capacity to make better decisions in armed conflict, and the relatively low-hanging fruit of modeling machine intelligence on the very thing that causes humans to make (relatively) bad decisions: the human brain. This chapter examines human cognition as a model for machine intelligence, along with some of the implications for AWS development. It first outlines recent developments in neuroscience as drivers of advances in artificial intelligence. It then expands on a key distinction for the ethics of AWS: poor normative decisions that result from poor judgments given a certain set of inputs, and poor normative decisions that result from poor sets of inputs. It argues that, since there are cases in the second category in which we judge humans to have acted wrongly, we should judge AWS platforms likewise. Further, while an AWS may in principle outperform humans in the former, it is an open question of design whether it can outperform humans in the latter. Finally, the chapter discusses what this means for the design and control of, and ultimately liability for, AWS behavior, and identifies sources of inspiration for the alternative design of AWS platforms.


Conatus ◽  
2021 ◽  
Vol 6 (1) ◽  
pp. 177
Author(s):  
Michael Anderson ◽  
Susan Leigh Anderson ◽  
Alkis Gounaris ◽  
George Kosteletos

At the turn of the 21st century, Susan Leigh Anderson and Michael Anderson conceived and introduced the Machine Ethics research program, which aimed to identify the requirements under which autonomous artificial intelligence (AI) systems could demonstrate ethical behavior guided by moral values, and at the same time to show that these values, like ethics in general, can be represented and computed. Today, interaction between humans and AI entities is already part of our everyday lives; in the near future it is expected to play a key role in scientific research, medical practice, public administration, education, and other fields of civic life. In view of this, the debate over the ethical behavior of machines is more crucial than ever, and the search for answers, directions, and regulations is imperative at the academic and institutional levels as well as the technical one. Our discussion with the two originators of Machine Ethics highlights the epistemological, metaphysical, and ethical questions raised by the project, as well as the realistic and pragmatic demands that dominate artificial intelligence and robotics research programs. Most of all, however, it sheds light on the Andersons’ contribution in introducing and pursuing a central objective: the creation of ethical autonomous agents that will be based neither on the “imperfect” patterns of human behavior nor on preloaded hierarchical laws and human-centric values.


2020 ◽  
Author(s):  
Petros Terzis

The debate on the ethics of Artificial Intelligence has brought together stakeholders including, but not limited to, academics, policymakers, CEOs, activists, workers’ representatives, lobbyists, journalists, and ‘moral machines’. Prominent political institutions have crafted principles for the ‘ethical being’ of AI companies, while tech giants have documented ethics in a series of self-written guidelines. In parallel, a large community has flourished around the question of how to technically embed ethical parameters into algorithmic systems. Founded upon the philosophical work of Simone de Beauvoir and Jean-Paul Sartre, this paper explores the philosophical antinomies of the ‘AI Ethics’ debate as well as the conceptual disorientation of the ‘fairness discussion’. By bringing the philosophy of existentialism into the dialogue, the paper challenges the dialectical monopoly of utilitarianism and sheds fresh light on the already glaring AI arena. Why is the battle over ‘AI Ethics’ guidelines futile and doomed to dangerous abstraction? How can this battle harm our sense of collective freedom? What is the uncomfortable reality that remains obscured by the smokescreen of the ‘AI Ethics’ discussion? And, eventually, what is the alternative? There seems to be a different pathway for discussing and implementing ethics: a pathway that sets the freedom of others at the epicenter of the battle for a future that is sustainable and open to all.


Author(s):  
David L. Poole ◽  
Alan K. Mackworth
