Artificial Intelligence and Moral Intelligence

Author(s):  
Laura Pana

We discuss the thesis that the implementation of a moral code in the behaviour of artificial intelligent systems needs a specific form of human and artificial intelligence, not just an abstract intelligence. We present intelligence as a system with an internal structure, the structural levels of the moral system, and certain characteristics of artificial intelligent agents, which can/must be treated as: (1) individual entities (with a complex, specialized, autonomous or self-determined, even unpredictable conduct); (2) entities endowed with diverse or even multiple forms of intelligence, such as moral intelligence; (3) open and even free-conduct performing systems (with specific, flexible and heuristic mechanisms and procedures of decision); (4) systems which are open to education, not just to instruction; (5) entities with a “lifegraphy”, not just a “stategraphy”; (6) entities equipped not just with automatisms but with beliefs (cognitive and affective complexes); (7) entities capable even of reflection (“moral life” is a form of spiritual, not just of conscious, activity); (8) elements/members of some real (corporal or virtual) community; (9) cultural beings: free conduct gives cultural value to the action of a “natural” or artificial being. Implementing such characteristics does not necessarily suppose efforts to design, construct and educate machines like human beings. The human moral code is irremediably imperfect: it is a morality of preference, of accountability (not of responsibility) and a morality of non-liberty, which cannot be remedied by the invention of ethical systems, by the circulation of ideal values or by ethical (even computing) education. Yet such an imperfect morality needs perfect instruments for its implementation: applications of special fields of logic; efficient psychological (theoretical and technical) attainments to endow the machine not just with intelligence but with conscience and even spirit; and comprehensive technical means for supplementing the objective decision with a subjective one. Machine ethics can/will be of the highest quality because it will be derived from the sciences, modelled by techniques and accomplished by technologies. If our theoretical hypothesis about a specific moral intelligence, necessary for the implementation of an artificial moral conduct, is correct, then some theoretical and technical issues arise, but three working hypotheses remain possible: structural, functional and behavioural. The future of human and/or artificial morality remains to be anticipated.


Author(s):  
Silviya Serafimova

Moral implications of algorithm-based decision-making require special attention within the field of machine ethics. Specifically, this research clarifies why, even if one assumes the existence of well-working ethical intelligent agents in epistemic terms, it does not necessarily follow that they meet the requirements of autonomous moral agents such as human beings. To exemplify some of the difficulties in arguing for implicit and explicit ethical agents in Moor’s sense, three first-order normative theories in the field of machine ethics are put to the test: Powers’ prospect for a Kantian machine, Anderson and Anderson’s reinterpretation of act utilitarianism, and Howard and Muntean’s prospect for a moral machine based on a virtue-ethical approach. By comparing and contrasting the three theories, and by clarifying the gist of the difference between the processes of calculation and moral estimation, the possibility of building what one might call strong “moral” AI scenarios is questioned. The possibility of weak “moral” AI scenarios is likewise discussed critically.
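To make the abstract’s contrast between calculation and moral estimation concrete, the following minimal sketch shows the kind of act-utilitarian computation at stake. It is not Anderson and Anderson’s actual system: the Effect structure, the action names and all the numbers are invented for illustration.

```python
# A toy act-utilitarian scorer: each candidate action is scored by summing
# expected hedonic effects over stakeholders, and the action with the
# greatest net utility is chosen. All values here are invented.

from dataclasses import dataclass

@dataclass
class Effect:
    stakeholder: str
    intensity: float    # pleasure (+) or displeasure (-), e.g. in [-1, 1]
    probability: float  # likelihood the effect occurs, in [0, 1]

def net_utility(effects: list[Effect]) -> float:
    """Expected net pleasure of one action across all affected parties."""
    return sum(e.intensity * e.probability for e in effects)

def best_action(options: dict[str, list[Effect]]) -> str:
    """Pick the action maximizing expected net utility: pure calculation."""
    return max(options, key=lambda name: net_utility(options[name]))

# Hypothetical dilemma: remind a patient to take medication, or respect a refusal.
options = {
    "remind_again": [Effect("patient", -0.2, 0.9), Effect("patient", 0.8, 0.6)],
    "accept_refusal": [Effect("patient", 0.3, 0.9), Effect("patient", -0.9, 0.3)],
}
print(best_action(options))  # -> "remind_again" under these invented numbers
```

A machine can execute best_action flawlessly, yet everything morally interesting, such as which effects count, who counts as a stakeholder and how intensities are assigned, is settled before the calculation starts; that gap is what the weak/strong “moral” AI distinction probes.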


Author(s):  
Dmitriy Aleksandrovich Korostelev
Aleksey Radchenko
Nikita Silchenko
...

The paper describes a solution to the problem of testing the efficiency of new ideas and algorithms for intelligent systems. The proposed approach is to simulate the interaction of intelligent agents implementing different algorithms in a competitive setting. To support this simulation, a specialized software platform is used. The paper describes the platform, developed for running competitions in artificial intelligence, and its subsystems: a server, a client and a visualization component. Operational testing of the developed system is also described, which helps to evaluate the efficiency of various artificial-intelligence algorithms in a simulation such as "Naval Battle".
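A minimal sketch of the competition pattern the paper describes, assuming a hypothetical game interface (the Agent protocol, the RandomAgent baseline and the simplified "naval battle" rules below are invented for illustration, not the platform's actual API): agents play repeated matches, and win counts serve as the efficiency measure. Substituting a smarter Agent implementation for one side turns the tally into a comparison of algorithms.

```python
# Agent-vs-agent competition loop: the "server" advances the game, queries
# each "client" agent for a move, and tallies results over many matches.

import random
from typing import Protocol

class Agent(Protocol):
    name: str
    def choose_move(self, observation: list[int]) -> int: ...

class RandomAgent:
    def __init__(self, name: str):
        self.name = name
    def choose_move(self, observation: list[int]) -> int:
        # Baseline algorithm: fire at any untried cell of the "battle" grid.
        untried = [i for i, cell in enumerate(observation) if cell == 0]
        return random.choice(untried)

def run_match(a: Agent, b: Agent, grid_size: int = 10, target: int = 3) -> str:
    """One simplified 'naval battle' round: first to hit `target` ships wins."""
    ships = random.sample(range(grid_size), target)
    hits = {a.name: 0, b.name: 0}
    boards = {a.name: [0] * grid_size, b.name: [0] * grid_size}
    while max(hits.values()) < target:
        for agent in (a, b):
            move = agent.choose_move(boards[agent.name])
            boards[agent.name][move] = 1   # mark cell as tried
            if move in ships:
                hits[agent.name] += 1
    return max(hits, key=hits.get)         # ties go to the first player

wins = {"A": 0, "B": 0}
for _ in range(100):  # repeated matches estimate relative efficiency
    wins[run_match(RandomAgent("A"), RandomAgent("B"))] += 1
print(wins)
```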


2015, Vol. 13 (3/4), pp. 314-325
Author(s):  
Anne Gerdes

Purpose – This paper aims to explore human-technology relations through the lens of sci-fi movies released within the life cycle of the ETHICOMP conference series. Different perspectives on artificial intelligent agents, primarily in the shape of robots but also including other kinds of intelligent systems, are explored. IT-ethical issues related to human interactions with social robots and artificial intelligent agents are illustrated with reference to Alex Proyas’ I, Robot, James Cameron’s Terminator and the Wachowski brothers’ The Matrix. All three movies present robots cast in the role of moral agents capable of doing good or evil. Steven Spielberg’s A.I. Artificial Intelligence gives rise to a discussion of the robot seen as a moral patient and furthermore reflects on possibilities for care and trust relations between robots and humans. Andrew Stanton’s Wall-E shapes a discussion of robots as altruistic machines in the role of facilitators of a flourishing society. Steven Spielberg’s Minority Report allows for a discussion of knowledge-discovering technology and the possibility of balancing data utility and data privacy. Design/methodology/approach – Themes observed in sci-fi movies within the life span of the ETHICOMP conference series are discussed with the purpose of illustrating ways in which science fiction reflects (science) faction. In that sense, science fiction does not express our worries about a distant future, but rather casts light on questions that concern us in the present. Findings – Human-technology interactions are addressed, and it is shown how sci-fi films highlight philosophical questions that puzzle us today, such as what kinds of relationships can and ought to be formed with robots, and whether the roles they play as social actors demand that one ought to assign moral standing to them. The paper does not present firm answers but instead pays attention to the selection and framing of questions that deserve attention. Originality/value – Relating sci-fi movies to topics raised during the past 20 years of the ETHICOMP conference series seemed an appropriate way of celebrating the 20-year anniversary of the conference series.


2020
Author(s):  
Tore Pedersen
Christian Johansen

Artificial Intelligence (AI) receives attention in the media as well as in academe and business. In media coverage and reporting, AI is predominantly described in contrasting terms, either as the ultimate solution to all human problems or as the ultimate threat to all human existence. In academe, the focus of computer scientists is on developing systems that function, whereas philosophy scholars theorize about the implications of this functionality for human life. In the interface between technology and philosophy there is, however, one imperative aspect of AI yet to be articulated: how do intelligent systems make inferences? We use the overarching concept of “Artificial Intelligent Behaviour”, which includes both cognition/processing and judgment/behaviour. We argue that, due to the complexity and opacity of artificial inference, one needs to initiate systematic empirical studies of artificial intelligent behaviour similar to those previously carried out on human cognition, judgment and decision making. This will provide valid knowledge, beyond what current computer science methods can offer, about the judgments and decisions made by intelligent systems. Moreover, outside academe, in the public as well as the private sector, expertise in epistemology, critical thinking and reasoning is crucial to ensure human oversight of artificial intelligent judgments and decisions, because only competent human insight into AI inference processes will ensure accountability. Such insight requires systematic studies of AI behaviour founded on the natural sciences and philosophy, as well as the employment of methodologies from the cognitive and behavioural sciences.
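A minimal sketch of what such an empirical study of artificial intelligent behaviour could look like, assuming a hypothetical black-box predict function standing in for the system under study (both the stand-in model and the stimuli below are invented): logically equivalent framings of the same choice are presented, and divergent judgments indicate a framing effect, exactly the kind of result behavioural research on human judgment has produced.

```python
# Behavioural experiment on an opaque AI system: present systematically
# varied stimuli (a gain/loss framing manipulation), record the judgments,
# and test whether logically equivalent framings elicit the same response.

from collections import Counter

def predict(text: str) -> str:
    # Stand-in for the black-box system under study; replace with a real model.
    return "approve" if "save" in text else "reject"

# Logically equivalent gain/loss framings of the same outcome, paired so
# that any divergence within a pair indicates a framing effect.
stimuli = [
    ("Program A will save 200 of 600 people.",
     "With Program A, 400 of 600 people will die."),
    ("This treatment saves 90% of patients.",
     "This treatment loses 10% of patients."),
]

results = [(predict(gain), predict(loss)) for gain, loss in stimuli]
inconsistent = sum(1 for g, l in results if g != l)

print(Counter(results))
print(f"framing inconsistency: {inconsistent}/{len(results)} item pairs")
```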


Author(s):  
Pat Langley

Modern introductory courses on AI neither train students to create intelligent systems nor provide broad coverage of this complex field. In this paper, we identify problems with common approaches to teaching artificial intelligence and suggest alternative principles that courses should adopt instead. We illustrate these principles in a proposed course that teaches students not only about component methods, such as pattern matching and decision making, but also about their combination into higher-level abilities for reasoning, sequential control, plan generation and integrated intelligent agents. We also present a curriculum that instantiates this organization, including sample programming exercises and a project that requires system integration, in which students gain experience building knowledge-based agents that use their software to produce intelligent behavior.
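As a minimal sketch of the kind of exercise such a course might assign (an invented example, not Langley's actual curriculum), the following combines two component methods, pattern matching against a working memory and rule-based decision making, into a simple agent with sequential control:

```python
# A tiny rule-based agent: rules are matched against working memory,
# the first applicable rule fires, and the cycle repeats to quiescence.

# Working memory: facts the agent currently believes.
memory = {("at", "robot", "door"), ("status", "door", "closed")}

# Rules: (pattern of required facts, fact to add, fact to remove).
rules = [
    ({("at", "robot", "door"), ("status", "door", "closed")},
     ("status", "door", "open"), ("status", "door", "closed")),
    ({("at", "robot", "door"), ("status", "door", "open")},
     ("at", "robot", "room"), ("at", "robot", "door")),
]

def step(memory: set) -> bool:
    """Match each rule's pattern; apply the first one that fires."""
    for pattern, add, remove in rules:
        if pattern <= memory:        # pattern matching: subset test
            memory.discard(remove)   # decision made: update the state
            memory.add(add)
            return True
    return False

while step(memory):                  # sequential control: run to quiescence
    print(sorted(memory))
```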


2018, Vol. 9 (4), pp. 677-689
Author(s):  
Anne Gerdes

The article provides an inclusive outlook on artificial intelligence by introducing a three-legged design perspective that includes, but also moves beyond, ethical artificial systems design, to stress the role of moral habituation of professionals and the general public. It is held that an inclusive ethical design perspective is essential for a flourishing future with artificial intelligence.


Author(s):  
S. Matthew Liao

This introduction outlines, in Section I.1, some of the key issues in the study of the ethics of artificial intelligence (AI) and proposes ways to take these discussions further. Section I.2 discusses key concepts in AI, machine learning, and deep learning. Section I.3 considers ethical issues that arise because current machine learning is data-hungry; is vulnerable to bad data and bad algorithms; is a black box with problems of interpretability, explainability, and trust; and lacks a moral sense. Section I.4 discusses ethical issues that arise because current machine learning systems may work too well, leaving human beings vulnerable in the presence of these intelligent systems. Section I.5 examines ethical issues arising from the long-term impact of superintelligence, such as how the values of a superintelligent AI can be aligned with human values. Section I.6 presents an overview of the essays in this volume.


2020, Vol. 2 (5), pp. 278-288
Author(s):  
Srikanth Reddy Mandati

The development of information technology has made it possible for a computer to think and act like a human being. Artificial intelligence is the branch of information technology concerned with developing machines that work and respond like a human mind. Its main features take into account the sensitivity of the human senses: a system able to recognize speech and touch can carry out everyday tasks without human assistance. At its core, artificial intelligence is the study of intelligent agents that perceive their environment and act successfully to achieve their goals. In the computing world, most systems are designed to achieve objectives depending on the nature of the situation, using special features derived from the natural capabilities of humans and animals. In general, an intelligent agent is a human analogue that uses learning and problem-solving techniques to reproduce high-level human activity, including emotional processing and decision-making. Architectures that aim to surpass human ingenuity have been the subject of extensive past and present exploratory research, conducted notably in China and the United States, with a series of developments aligned with future aspirations and technologies.
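A minimal sketch of the textbook agent loop the abstract invokes, a program that repeatedly perceives its environment and chooses actions to pursue a goal. The thermostat-like environment, percepts and setpoint below are all invented for illustration:

```python
# Perceive -> decide -> act cycle for a simple reflex agent that keeps a
# noisy temperature reading near a goal setpoint.

import random

class Environment:
    def __init__(self, temperature: float = 15.0):
        self.temperature = temperature
    def percept(self) -> float:
        return self.temperature + random.uniform(-0.5, 0.5)  # noisy sensor
    def apply(self, action: str) -> None:
        self.temperature += {"heat": 1.0, "cool": -1.0, "idle": 0.0}[action]

def agent(percept: float, goal: float = 21.0) -> str:
    """Simple reflex decision: act to reduce the distance to the goal."""
    if percept < goal - 0.5:
        return "heat"
    if percept > goal + 0.5:
        return "cool"
    return "idle"

env = Environment()
for step in range(12):
    p = env.percept()        # perceive
    a = agent(p)             # decide
    env.apply(a)             # act
    print(f"step {step}: percept={p:.1f} action={a}")
```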


2021, Vol. 11 (5), pp. 317-324
Author(s):  
Réka Pusztahelyi

This essay deals with certain civil-liability implications of artificial intelligent systems in the light of recent steps taken by the European Union. In order to create an AI that is not only ethical but also lawful, the EU strives to lay down a framework of common liability rules for damage and harm caused by any application of AI technology. The Commission's new Proposal (the Artificial Intelligence Act, AIA) reflects an innovative approach to regulation that can tackle the special features of AI systems, laying down rules according to a risk-management approach and a class-of-application-by-class-of-application approach. The focal points of this essay are the strict liability regime for high-risk AI systems and the concepts of frontend and backend operators.
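The class-of-application-by-class-of-application approach can be illustrated with a small sketch: systems are regulated by the risk tier of their use case, not by their underlying technology. The four tier names follow the Commission's Proposal; the example use cases and the mapping below are illustrative only, not legal advice.

```python
# Toy illustration of the AIA's risk-tier logic: look up the regulatory
# tier of a use case. Real classification follows the Proposal's annexes.

AIA_RISK_TIERS = {
    "social scoring by public authorities": "unacceptable (prohibited)",
    "CV screening for recruitment": "high (strict obligations)",
    "customer-service chatbot": "limited (transparency duties)",
    "spam filtering": "minimal (no new obligations)",
}

def risk_tier(use_case: str) -> str:
    return AIA_RISK_TIERS.get(use_case, "unclassified: assess against the Proposal")

print(risk_tier("CV screening for recruitment"))  # -> high (strict obligations)
```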

