Machine Law, Ethics, and Morality in the Age of Artificial Intelligence - Advances in Human and Social Aspects of Technology
Latest Publications


TOTAL DOCUMENTS: 12 (five years: 12)

H-INDEX: 0 (five years: 0)

Published By IGI Global

ISBN: 9781799848943, 9781799848950

Author(s):  
Elias Moser

Recently, economic studies on labor market developments have indicated that there is a potential threat of technological mass unemployment. Both smart robotics and information technology may perform a broad range of tasks that today are fulfilled by human labor. This development could lead to vast inequalities. Proponents of an unconditional basic income have, therefore, employed this scenario to argue for their cause. In this chapter, the author argues that, although a basic income might be a valid answer to the challenge of technological unemployment, it fails to account for some ethical problems specific to future expectations of mass unemployment. The author introduces the proposal of an unconditional basic capital and shows how it can address these problems adequately and avoid objections against a basic income. However, the basic capital proposal cannot replace all redistributive social policies. It has to be interpreted as a supplement to either a basic income or more traditional redistributive policies.


Author(s):  
Atsuhide Ito

The chapter observes the distinction between the mechanical and the machinic, moves beyond the metaphors of the android (Metropolis) and the cyborg (Donna Haraway), and considers how the machinic has brought about new cognitive patterns through which human subjects interact with their environment and with others. Artists' dislocation from the position of central agent of production has opened passages for a posthuman mode of production. Consequently, the machine has become an integral part of the artwork and of the artist. Contrary to this development, some artists retain the machine's materiality as a form of Other. The chapter argues that the machine remains a form of externalization of the Other within the human subject.


Author(s):  
Mandy Goram, Dirk Veiel

Artificially intelligent systems should make users' lives easier and support them in complex decisions, or even make these decisions completely autonomously. At the time of writing, however, the processes and decisions in an intelligent system are usually not transparent to users. Users do not know which data are used, for which purpose, and with what consequences. There is simply a lack of transparency, which is essential for trust in intelligent systems. Transparency and traceability of decisions are usually subordinated to performance and accuracy in AI development, or sometimes play no role at all. In this chapter, the authors describe what intelligent systems are and explain how users can be supported in specific situations by a context-based adaptive system. Against this background, the authors describe the challenges and problems intelligent systems face in creating transparency for users and supporting their sovereignty. The authors then show which ethical and legal requirements intelligent systems have to meet and how existing approaches respond to them.


Author(s):  
Jill Anne Morris

This chapter re-introduces the idea of roller coasters as moral machines and morality mechanisms, designed as they were to rid mankind of immoral entertainment, and traces their ability to spread American culture via themed entertainment, from the World's Fairs to Disneyland and beyond. It features an analysis of two Chinese themed rides, one developed around American cultural constructs and one that begins to develop a new form of Chinese historical theme park. Through these examples, it suggests that themed amusements have the potential not only to spread American morality and culture but also to provide sites of cultural exchange.


Author(s):  
Steven Umbrello

The value sensitive design (VSD) approach to designing emerging technologies for human values is taken as the object of study in this chapter. VSD has traditionally been conceptualized as another type of technology, or instrumentally as a tool. The various parts of VSD's principled approach would then aim to discern the policy requirements that any given technological artifact under consideration would implicate. Yet little to no consideration has been given to how laws, policies, and social norms engage within VSD practices and, similarly, how the interactive nature of the VSD approach can in turn influence those directives. This is exacerbated when considering machine ethics policies that have global consequences outside their development spheres. This chapter starts from the VSD approach and aims to determine how policies come to influence how values can be managed within VSD practices. It shows that the interactional nature of VSD permits and encourages existing policies to be integrated early on and throughout the design process.


Author(s):  
Tobias Holstein, Gordana Dodig-Crnkovic, Patrizio Pelliccione

Research on self-driving cars is transdisciplinary, and its different aspects have attracted interest in general public debates as well as among specialists. To this day, ethical discourses are dominated by the Trolley Problem, a hypothetical ethical dilemma that is by construction unsolvable and that obfuscates much bigger real-world ethical challenges in the design, development, and operation of self-driving cars. The authors propose a systematic approach that connects processes, components, systems, and stakeholders in order to analyze the real-world ethical challenges for the ecology of the socio-technological system of self-driving cars. They take a closer look at regulative instruments, standards, and the design and implementation of components, systems, and services, and they present practical social and ethical challenges that must be met and that imply novel expectations for engineering in the car industry.


Author(s):  
Michael Laakasuo, Jukka R. I. Sundvall, Anton Berg, Marianna Drosinou, Volo Herzon, ...

This is the first of two chapters introducing the moral psychology of robots and transhumanism. Evolved moral cognition and the human conceptual system have inherent difficulties in coping with the new moral challenges brought on by emerging future technologies. The reviewed literature outlines our contemporary understanding of humans as cognitive organisms, based on evolutionary psychology. The authors then give a skeletal outline of moral psychology. Together, these fields suggest that many innate and cultural mechanisms influence how we understand technology and leave us with blind spots in recognizing the moral issues related to it. The authors discuss human tool use and cognitive categories and show how tools have shaped our evolution. The first part closes by introducing a new concept: the new ontological category (NOC, i.e., robots and AI), which did not exist during our evolution. They explain how the NOC is fundamentally confounding for our moral cognitive machinery. In part two, they apply the background provided here to recent empirical studies in the moral psychology of robotics and transhumanism.


Author(s):  
Jonas Holst

Taking its starting point in a discussion of the concept of intelligence, the chapter develops a philosophical understanding of ethical rationality and discusses its role and implications for two ethical problems within AI: first, the so-called "black box problem," which is widely discussed in the AI community, and second, a more complex one addressed here as the "Tin Man problem." The first problem has to do with opacity, bias, and explainability in the design and development of advanced machine learning systems, such as artificial neural networks, whereas the second is more directly associated with the prospect of humans and AI becoming full ethical agents. Based on Aristotelian virtue ethics, it is argued that intelligence in its human and artificial forms should approximate ethical rationality, which entails a well-balanced synthesis of reason and emotion.


Author(s):  
Mandy Goram, Dirk Veiel

Intelligent systems and assistants should help users complete tasks and support them at work, on the road, and at home. At the same time, these systems are becoming increasingly sophisticated and autonomous in their decisions and already take over simple tasks from us today. So that users do not lose control over their own data, and to avoid the risk of user manipulation, these systems must comply with ethical and legal guidelines. In this chapter, the authors describe a novel generic approach and its realization for the development of intelligent systems, allowing flexible modeling of ethical and legal aspects.


Author(s):  
Marten H. L. Kaas

The ethical decision-making and behaviour of artificially intelligent systems are increasingly important given the prevalence of these systems and the impact they can have on human well-being. Many current approaches to implementing machine ethics are top-down: they ensure the ethical decision-making and behaviour of an agent via its adherence to explicitly defined ethical rules or principles. Despite the attractiveness of this approach, this chapter explores how all top-down approaches to implementing machine ethics are fundamentally limited and how bottom-up approaches, in particular reinforcement learning methods, are not beset by the same problems. Bottom-up approaches possess significant advantages that make them better suited for implementing machine ethics.
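The abstract itself contains no code; the following toy Python sketch (not drawn from the chapter, with purely hypothetical action names and reward values) merely illustrates the contrast it describes: a top-down agent constrained by an explicitly defined rule versus a bottom-up agent whose behaviour is shaped by ethical feedback folded into a reinforcement-learning reward signal.

# Illustrative toy only: top-down rule filtering vs. bottom-up reward-driven learning.
import random

ACTIONS = ["share_data", "withhold_data"]   # hypothetical action set

# Top-down: behaviour is constrained by an explicitly defined rule.
FORBIDDEN = {"share_data"}                   # hypothetical rule: never share user data

def top_down_choose(candidate_actions):
    """Pick any action that does not violate an explicit rule."""
    allowed = [a for a in candidate_actions if a not in FORBIDDEN]
    return random.choice(allowed) if allowed else None

# Bottom-up: behaviour is learned from (ethically loaded) feedback.
def ethical_feedback(action):
    """Stand-in for human/ethical feedback: sharing data is penalised."""
    return -1.0 if action == "share_data" else 1.0

def bottom_up_train(episodes=500, lr=0.1, eps=0.1):
    """Tabular, single-state Q-learning driven purely by the feedback signal."""
    q = {a: 0.0 for a in ACTIONS}
    for _ in range(episodes):
        a = random.choice(ACTIONS) if random.random() < eps else max(q, key=q.get)
        q[a] += lr * (ethical_feedback(a) - q[a])   # incremental value update
    return q

if __name__ == "__main__":
    print("top-down choice:", top_down_choose(ACTIONS))
    learned = bottom_up_train()
    print("learned values:", learned)
    print("bottom-up choice:", max(learned, key=learned.get))

In this toy setting both agents end up making the same choice; the interesting cases, and the ones the chapter's argument presumably turns on, arise when explicit rules are incomplete or conflicting and only the learned, feedback-driven route can adapt.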

