A Study of Machine Ethics in Human-Artificial Intelligence Interactions

Author(s):  
Haoran Sun ◽  
Pei-Luen Patrick Rau ◽  
Bingcheng Wang


2020 ◽  
Vol 31 (2) ◽  
pp. 74-87 ◽  
Author(s):  
Keng Siau ◽  
Weiyu Wang

Artificial intelligence (AI)-based technology has achieved many great things, such as facial recognition, medical diagnosis, and self-driving cars. AI promises enormous benefits for economic growth and social development, as well as for human well-being and safety. However, the low level of explainability, data biases, data security, data privacy, and ethical problems of AI-based technology pose significant risks for users, developers, humanity, and societies. As AI advances, one critical issue is how to address its ethical and moral challenges. Even though the concept of "machine ethics" was proposed around 2006, AI ethics is still in its infancy. AI ethics is the field that studies ethical issues in AI. To address AI ethics, one needs to consider both the ethics of AI and how to build ethical AI. The ethics of AI studies the ethical principles, rules, guidelines, policies, and regulations that are related to AI. Ethical AI is an AI that performs and behaves ethically. One must recognize and understand the potential ethical and moral issues that may be caused by AI in order to formulate the necessary ethical principles, rules, guidelines, policies, and regulations for AI (i.e., the ethics of AI). With the appropriate ethics of AI in place, one can then build AI that exhibits ethical behavior (i.e., ethical AI). This paper discusses AI ethics by looking at the ethics of AI and ethical AI. What are the perceived ethical and moral issues with AI? What general and common ethical principles, rules, guidelines, policies, and regulations can resolve or at least attenuate these ethical and moral issues with AI? What are some of the necessary features and characteristics of an ethical AI? How can one adhere to the ethics of AI to build ethical AI?


Author(s):  
Silviya Serafimova

Moral implications of algorithm-based decision-making require special attention within the field of machine ethics. Specifically, this research focuses on clarifying why, even if one assumes the existence of well-working ethical intelligent agents in epistemic terms, it does not necessarily follow that they meet the requirements of autonomous moral agents such as human beings. To exemplify some of the difficulties in arguing for implicit and explicit ethical agents in Moor's sense, three first-order normative theories in the field of machine ethics are put to the test: Powers's prospect for a Kantian machine, Anderson and Anderson's reinterpretation of act utilitarianism, and Howard and Muntean's prospect for a moral machine based on a virtue-ethical approach. By comparing and contrasting the three first-order normative theories, and by clarifying the gist of the difference between the processes of calculation and moral estimation, the possibility of building what one might call strong "moral" AI scenarios is questioned. The possibility of weak "moral" AI scenarios is likewise discussed critically.
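Since an Anderson-style act-utilitarian agent is described as a calculating procedure, a minimal sketch may help fix the distinction the abstract draws between calculation and moral estimation. The Effect structure, the care-robot dilemma, and every numeric value below are hypothetical illustrations, not taken from the works under discussion; the point is only that the procedure reduces moral choice to arithmetic over stipulated values.

```python
# Illustrative sketch only: a naive act-utilitarian "calculation" in the
# spirit of Anderson and Anderson's approach. All names and weights here
# are hypothetical.
from dataclasses import dataclass

@dataclass
class Effect:
    person: str
    intensity: float    # pleasure (+) or displeasure (-), e.g. -1.0 .. 1.0
    duration: float     # in arbitrary time units
    probability: float  # likelihood the effect occurs, 0.0 .. 1.0

def net_utility(effects: list[Effect]) -> float:
    """Sum expected signed utility over everyone affected."""
    return sum(e.intensity * e.duration * e.probability for e in effects)

def choose_action(options: dict[str, list[Effect]]) -> str:
    """Pick the action whose projected consequences maximize net utility."""
    return max(options, key=lambda name: net_utility(options[name]))

# Hypothetical dilemma: remind a patient to take medication, or stay silent.
options = {
    "remind": [Effect("patient", -0.2, 1, 0.9),   # brief annoyance
               Effect("patient", 0.8, 10, 0.7)],  # likely health benefit
    "stay_silent": [Effect("patient", 0.1, 1, 1.0),    # momentary comfort
                    Effect("patient", -0.9, 10, 0.4)], # possible harm
}
print(choose_action(options))  # "remind" under these stipulated numbers
```

Re-stipulating any of the weights flips the verdict, which is precisely the kind of observation that motivates contrasting such calculation with moral estimation.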


Author(s):  
Thomas M. Powers ◽  
Jean-Gabriel Ganascia

This chapter discusses several challenges for doing the ethics of artificial intelligence (AI). The challenges fall into five major categories: conceptual ambiguities within philosophy and AI scholarship; the estimation of AI risks; implementing machine ethics; epistemic issues of scientific explanation and prediction in what can be called computational data science (CDS), which includes "big data" science; and oppositional versus systemic ethics approaches. The chapter then argues that these ethical problems are not likely to yield to the "common approaches" of applied ethics. Primarily due to the transformational nature of artificial intelligence within science, engineering, and human culture, novel approaches will be needed to address the ethics of AI in the future. Moreover, serious barriers to the formalization of ethics will need to be overcome to implement ethics in AI.


2011 ◽  
Vol 21 ◽  
pp. 35-39 ◽  
Author(s):  
Nick Collins

Increased maturity in modeling human musicianship leads to many interesting artistic achievements and challenges. This article takes the opportunity to reflect on future situations in which virtual musicians are traded like baseball cards, associated content-creator and autonomous musical agent rights, and the musical and moral conundrums that may result. Although many scenarios presented here may seem far-fetched with respect to the current level of artificial intelligence, it remains prudent and artistically stimulating to consider them. Accepting basic human curiosity and research teleology, it is salutary to consider the more distant consequences of our actions with respect to aesthetics and ethics.


2021 ◽  
Vol 30 (3) ◽  
pp. 459-471
Author(s):  
Henry Shevlin

There is growing interest within machine ethics in the question of whether, and under what circumstances, an artificial intelligence would deserve moral consideration. This paper explores a particular type of moral status that the author terms psychological moral patiency, focusing on the epistemological question of what sort of evidence might lead us reasonably to conclude that a given artificial system qualifies as having this status. The paper surveys five possible criteria that might be applied: intuitive judgments, assessments of intelligence, the presence of desires and autonomous behavior, evidence of sentience, and behavioral equivalence. The author suggests that despite its limitations, the last approach offers the best way forward, and defends a variant of it, termed the cognitive equivalence strategy. In short, this holds that an artificial system should be considered a psychological moral patient to the extent that it possesses cognitive mechanisms shared with other beings, such as nonhuman animals, whom we also consider to be psychological moral patients.
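The degree-theoretic flavor of the cognitive equivalence strategy ("to the extent that") can be made vivid with a toy formalization. The mechanism labels, the reference set, and the scoring rule below are hypothetical placeholders, not criteria proposed in the paper.

```python
# Deliberately toy rendering of the cognitive equivalence strategy: grade a
# system's claim to psychological moral patiency by the overlap between its
# attributed cognitive mechanisms and those of a reference class we already
# treat as moral patients (e.g., nonhuman animals). Labels are hypothetical.
REFERENCE_MECHANISMS = {
    "nociception", "affective_valence", "working_memory",
    "reinforcement_learning", "attention",
}

def patiency_score(system_mechanisms: set[str]) -> float:
    """Fraction of reference mechanisms the system is credited with."""
    return len(system_mechanisms & REFERENCE_MECHANISMS) / len(REFERENCE_MECHANISMS)

robot = {"working_memory", "reinforcement_learning", "attention"}
print(patiency_score(robot))  # 0.6: patiency comes in degrees, not all-or-nothing
```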


Author(s):  
Laura Pana

We discuss the thesis that implementing a moral code in the behaviour of artificial intelligent systems requires a specific form of human and artificial intelligence, not just an abstract intelligence. We present intelligence as a system with an internal structure, together with the structural levels of the moral system, as well as certain characteristics of artificial intelligent agents, which can and must be treated as: (1) individual entities (with complex, specialized, autonomous or self-determined, even unpredictable conduct); (2) entities endowed with diverse or even multiple forms of intelligence, such as moral intelligence; (3) open and even free-conduct-performing systems (with specific, flexible, and heuristic mechanisms and procedures of decision); (4) systems open to education, not just to instruction; (5) entities with a "lifegraphy", not just a "stategraphy"; (6) entities equipped not just with automatisms but with beliefs (cognitive and affective complexes); (7) entities capable even of reflection ("moral life" is a form of spiritual, not just conscious, activity); (8) elements or members of some real (corporeal or virtual) community; and (9) cultural beings: free conduct gives cultural value to the action of a "natural" or artificial being. Implementing such characteristics does not necessarily require designing, constructing, and educating machines like human beings. The human moral code is irremediably imperfect: it is a morality of preference, of accountability (not of responsibility), and of non-liberty, which cannot be remedied by the invention of ethical systems, by the circulation of ideal values, or by ethical (even computational) education. But such an imperfect morality needs perfect instruments for its implementation: applications of special fields of logic; efficient psychological attainments (theoretical and technical) to endow the machine not just with intelligence but with conscience and even spirit; and comprehensive technical means for supplementing the objective decision with a subjective one. Machine ethics can and will be of the highest quality because it will be derived from the sciences, modelled by techniques, and accomplished by technologies. If our theoretical hypothesis about a specific moral intelligence, necessary for implementing an artificial moral conduct, is correct, then some theoretical and technical issues arise, but three working hypotheses remain possible: structural, functional, and behavioural. The future of human and/or artificial morality is to be anticipated.


Biotechnology ◽  
2019 ◽  
pp. 1675-1687
Author(s):  
Alice Pavaloiu

The field of artificial intelligence has recently encountered ethical questions concerning the future of humankind. Although such questions have been asked for years, the survival of humankind in the face of badly configured intelligent systems is a more pressing concern today. As a result of rapid developments in intelligent systems and their increasing role in our lives, there is notable anxiety about dangerous artificial intelligence. Consequently, research topics such as machine ethics, the future of artificial intelligence, and even existential risk are drawing researchers' interest. Against this background, the objective of this chapter is to examine ethical factors in using intelligent systems for biomedical-engineering-oriented purposes. The chapter first provides essential background information and then considers possible scenarios that may require ethical adjustments during the design and development of artificial-intelligence-oriented systems for biomedical engineering problems.


Author(s):  
Nicholas Smith ◽  
Darby Vickers

As artificial intelligence (AI) becomes ubiquitous, it will be increasingly involved in novel, morally significant situations. Thus, understanding what it means for a machine to be morally responsible is important for machine ethics. Any method for ascribing moral responsibility to AI must be intelligible and intuitive to the humans who interact with it. We argue that the appropriate approach is to determine how AIs might fare on a standard account of human moral responsibility: a Strawsonian account. We make no claim that our Strawsonian approach is either the only one worthy of consideration or the obviously correct approach, but we think it is preferable to trying to marry fundamentally different ideas of moral responsibility (i.e., one for AI, one for humans) into a single cohesive account. Under a Strawsonian framework, people are morally responsible when they are appropriately subject to a particular set of attitudes, the reactive attitudes; we therefore ask under what conditions it might be appropriate to subject machines to this same set of attitudes. Although the Strawsonian account traditionally applies to individual humans, it is plausible that entities that are not individual humans but possess these attitudes are candidates for moral responsibility under a Strawsonian framework. We conclude that weak AI is never morally responsible, while a strong AI with the right emotional capacities may be morally responsible.
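The conclusion has a simple conditional shape, which the following toy predicate renders explicit. The "strong AI" flag and the list of required emotional capacities are hypothetical stand-ins for the richer conditions the authors discuss, offered only as a sketch of the structure of the claim.

```python
# Toy rendering of the conclusion: under a Strawsonian account, an AI is a
# candidate for the reactive attitudes (and hence moral responsibility) only
# if it is a strong AI with the right emotional capacities. The capacity set
# below is a hypothetical placeholder.
from dataclasses import dataclass, field

@dataclass
class AI:
    strong: bool                      # general, human-like intelligence?
    emotional_capacities: set = field(default_factory=set)

REQUIRED = {"guilt", "resentment", "gratitude"}  # hypothetical minimal set

def candidate_for_reactive_attitudes(ai: AI) -> bool:
    """Weak AI is never responsible; strong AI may be, given the capacities."""
    return ai.strong and REQUIRED <= ai.emotional_capacities

print(candidate_for_reactive_attitudes(AI(strong=False)))  # False: weak AI
print(candidate_for_reactive_attitudes(
    AI(True, {"guilt", "resentment", "gratitude"})))        # True
```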


Author(s):  
V. I. Arshinov ◽  
O. A. Grimov ◽  
V. V. Chekletsov

The boundaries of social acceptance and models of convergence between human and non-human actors (for example, artificial intelligence subjects) of digital reality are defined. The constructive creative possibilities of convergent processes in distributed neural networks are analyzed from the point of view of possible scenarios for building "friendly", human-dimensional symbioses of natural and artificial intelligence. A comprehensive analysis of new management challenges related to the development of cyber-physical and cyber-social systems is carried out. A model of social organizations and organizational behavior under the conditions of cyber-physical reality is developed. The possibilities of reconciling human moral principles with "machine ethics" in the processes of modeling and managing digital reality are studied. The significance of various concepts of digital, machine, and cyber-animism for the socio-cultural understanding of the development of modern cyber-physical technologies and the anthropological dimension of the smart city is revealed. The article introduces the concept of a hybrid society and shows the development of its models as self-organizing collective systems consisting of co-evolving biohybrid and socio-technical spheres. The importance of modern anthropological research for sustainable development is analyzed. The process of marking ontological boundaries between heterogeneous modalities in the digital world is investigated. Examples of acute social contexts that can set the vector of practical philosophy in the modern digital era are considered.

