Machine Ethics
Recently Published Documents

Total documents: 133 (five years: 66)
H-index: 13 (five years: 3)

Author(s): V. I. Arshinov, O. A. Grimov, V. V. Chekletsov

The boundaries of social acceptance and the models of convergence between human and non-human actors of digital reality (for example, artificial intelligence subjects) are defined. The constructive, creative possibilities of convergent processes in distributed neural networks are analyzed from the point of view of possible scenarios for building “friendly,” human-dimensional symbioses of natural and artificial intelligence. A comprehensive analysis of new management challenges related to the development of cyber-physical and cyber-social systems is carried out. A model of social organizations and organizational behavior under the conditions of cyber-physical reality is developed. The possibilities of reconciling human moral principles with “machine ethics” in the processes of modeling and managing digital reality are studied. The significance of various concepts of digital, machine, and cyber-animism for the socio-cultural understanding of the development of modern cyber-physical technologies and the anthropological dimension of the smart city is revealed. The article introduces the concept of a hybrid society and shows the development of its models as self-organizing collective systems that consist of co-evolving biohybrid and socio-technical spheres. The importance of modern anthropogenic research for sustainable development is analyzed. The process of marking ontological boundaries between heterogeneous modalities in the digital world is investigated. Examples of acute social contexts that are able to set the vector of practical philosophy in the modern digital era are considered.


2021
Author(s): Brendan Vize

Consider Lt. Commander Data from Star Trek: The Next Generation, the droid C-3PO from Star Wars, or the Replicants that appear in Blade Runner: they can use language (or many languages), they are rational, they form relationships, they use language that suggests they have a concept of self, and even language that suggests they have “feelings” or emotional experience. In the films and TV shows in which they appear, they are depicted as having frequent social interaction with human beings; but would we have any moral obligations to such beings if they really existed? What would we be permitted to do, or not to do, to them? On the one hand, a robot like Data has many of the attributes that we currently associate with a person. On the other hand, he has many of the attributes of the machines that we currently use as tools. He (and other science-fiction machines like him) closely resembles one of the things we value the most (a person) and, at the same time, one of the things we value the least (an artefact), leading to an apparent ethical paradox. What is its solution?



Conatus, 2021, Vol. 6 (1), p. 177
Author(s): Michael Anderson, Susan Leigh Anderson, Alkis Gounaris, George Kosteletos

At the turn of the 21st century, Susan Leigh Anderson and Michael Anderson conceived and introduced the Machine Ethics research program, which aimed to highlight the requirements under which autonomous artificial intelligence (AI) systems could demonstrate ethical behavior guided by moral values, and at the same time to show that these values, as well as ethics in general, can be represented and computed. Today, the interaction between humans and AI entities is already part of our everyday lives; in the near future it is expected to play a key role in scientific research, medical practice, public administration, education, and other fields of civic life. In view of this, the debate over the ethical behavior of machines is more crucial than ever, and the search for answers, directions, and regulations is imperative at the academic and institutional as well as the technical level. Our discussion with the two inspirers and originators of Machine Ethics highlights the epistemological, metaphysical, and ethical questions raised by this project, as well as the realistic and pragmatic demands that dominate artificial intelligence and robotics research programs. Most of all, however, it sheds light on the contribution of Susan and Michael Anderson in introducing and pursuing a central objective: the creation of ethical autonomous agents that are based neither on the “imperfect” patterns of human behavior nor on preloaded hierarchical laws and human-centric values.


AI and Ethics, 2021
Author(s): Jun Kyung You

In this paper, I argue that replicating the effect of ethical decision-making is insufficient for achieving functional morality in artificial moral agents (AMAs). This approach is named the “as if” approach to machine ethics. I object to this approach on the grounds that it requires one to commit to substantive meta-ethical claims about morality that are at least unwarranted, and perhaps even wrong. To defend this claim, this paper does three things. (1) I explain Heidegger’s Enframing [Gestell] and my notion of “Ready-Ethics,” which, in combination, can hopefully provide a plausible account of the motivation behind the “as if” approach. (2) I go over specific examples of ethical AI projects to show how the “as if” approach commits these projects to versions of moral generalism and moral naturalism; I then explain the flaws of the views that the “as if” approach necessitates, suggest that they cannot account for the justificatory process crucial to human moral life, and explain how Habermas’ account of the justificatory process could cast doubt on the picture of morality that the meta-ethical views of the “as if” approach propose. (3) Finally, I defend the relevance of discussing these topics for the purpose of functional morality in AMAs.


2021, Vol. 30 (3), pp. 459-471
Author(s): Henry Shevlin

There is growing interest within machine ethics in the question of whether, and under what circumstances, an artificial intelligence would deserve moral consideration. This paper explores a particular type of moral status that the author terms psychological moral patiency, focusing on the epistemological question of what sort of evidence might lead us to reasonably conclude that a given artificial system qualifies as having this status. The paper surveys five possible criteria that might be applied: intuitive judgments, assessments of intelligence, the presence of desires and autonomous behavior, evidence of sentience, and behavioral equivalence. The author suggests that, despite its limitations, the last approach offers the best way forward, and defends a variant of it, termed the cognitive equivalence strategy. In short, this holds that an artificial system should be considered a psychological moral patient to the extent that it possesses cognitive mechanisms shared with other beings, such as nonhuman animals, whom we also consider to be psychological moral patients.


Author(s): Maurice Pagnucco, David Rajaratnam, Raynaldio Limarga, Abhaya Nayak, Yang Song

AI and Ethics, 2021
Author(s): Andreia Martinho, Adam Poulsen, Maarten Kroesen, Caspar Chorus

The pursuit of artificial moral agents (AMAs) is complicated. Disputes about the development, design, moral agency, and future projections for these systems have been reported in the literature. This empirical study explores these controversial matters by surveying (AI) ethics scholars, with the aim of establishing a more coherent and informed debate. Using Q-methodology, we show the wide breadth of viewpoints and approaches to artificial morality. Five main perspectives about AMAs emerged from our data and were subsequently interpreted and discussed: (i) Machine Ethics: The Way Forward; (ii) Ethical Verification: Safe and Sufficient; (iii) Morally Uncertain Machines: Human Values to Avoid Moral Dystopia; (iv) Human Exceptionalism: Machines Cannot Moralize; and (v) Machine Objectivism: Machines as Superior Moral Agents. A potential source of these differing perspectives is the failure of machine ethics to be widely observed or explored as an applied ethic rather than merely a futuristic end. Our study helps improve the foundations for an informed debate about AMAs, in which contrasting views and agreements are disclosed and appreciated. Such debate is crucial to realizing an interdisciplinary approach to artificial morality, which allows us to gain insights into morality while also engaging practitioners.


Author(s): Nicholas Smith, Darby Vickers

As artificial intelligence (AI) becomes ubiquitous, it will be increasingly involved in novel, morally significant situations. Thus, understanding what it means for a machine to be morally responsible is important for machine ethics. Any method for ascribing moral responsibility to AI must be intelligible and intuitive to the humans who interact with it. We argue that the appropriate approach is to determine how AIs might fare on a standard account of human moral responsibility: a Strawsonian account. We make no claim that our Strawsonian approach is either the only one worthy of consideration or the obviously correct approach, but we think it is preferable to trying to marry fundamentally different ideas of moral responsibility (i.e., one for AI, one for humans) into a single cohesive account. Under a Strawsonian framework, people are morally responsible when they are appropriately subject to a particular set of attitudes, the reactive attitudes; we therefore determine under what conditions it might be appropriate to subject machines to this same set of attitudes. Although the Strawsonian account traditionally applies to individual humans, it is plausible that entities that are not individual humans but possess these attitudes are candidates for moral responsibility under a Strawsonian framework. We conclude that weak AI is never morally responsible, while a strong AI with the right emotional capacities may be morally responsible.


2021, Vol. 29
Author(s): Coetzee Bester, Rachel Fischer

This article rethinks the position of Information Ethics (IE) vis-à-vis the growing discipline of the ethics of AI. While IE has a long and respected academic history, the discipline of the ethics of AI is much younger. The scope of the latter discipline has exploded in the last decade in sync with the explosion of data-driven AI. Currently, the ethics of AI as a discipline can be said to have sub-divided into at least machine ethics, robot ethics, data ethics, and neuro-ethics. The argument presented here is that the ethics of AI can, from one perspective, be viewed as a sub-discipline of IE. IE is at the heart of ethical concerns about the potential de-humanising impact of AI technologies, as it addresses issues relating to communication, the status of knowledge claims, and the quality of media-generated information, among many others. Perhaps the single most pressing ethical concern in the context of data-driven AI technology is the rise of new social narratives that threaten humans’ special sense of agency, and this is firstly an IE concern. The article thus argues for the independent position of IE as well as for its position as the core, over-arching discipline of the ethics of AI.

