Towards Moral Machines: A Discussion with Michael Anderson and Susan Leigh Anderson

Conatus ◽  
2021 ◽  
Vol 6 (1) ◽  
pp. 177
Author(s):  
Michael Anderson ◽  
Susan Leigh Anderson ◽  
Alkis Gounaris ◽  
George Kosteletos

At the turn of the 21st century, Susan Leigh Anderson and Michael Anderson conceived and introduced the Machine Ethics research program, which aimed to identify the requirements under which autonomous artificial intelligence (AI) systems could demonstrate ethical behavior guided by moral values, and at the same time to show that these values, and ethics in general, can be represented and computed. Today, interaction between humans and AI entities is already part of our everyday lives; in the near future it is expected to play a key role in scientific research, medical practice, public administration, education and other fields of civic life. In view of this, the debate over the ethical behavior of machines is more crucial than ever, and the search for answers, directions and regulations is imperative at the academic, institutional and technical levels. Our discussion with the two inspirers and originators of Machine Ethics highlights the epistemological, metaphysical and ethical questions arising from this project, as well as the realistic and pragmatic demands that dominate artificial intelligence and robotics research programs. Above all, it sheds light on Susan and Michael Anderson's contribution in setting a central objective: the creation of ethical autonomous agents based neither on the “imperfect” patterns of human behavior nor on preloaded hierarchical laws and human-centric values.

Author(s):  
Estifanos Tilahun Mihret

Artificial intelligence and robotics are recent technologies that pose risks for our world. Their capabilities are developing dramatically, and their intended purposes are shifting in new directions. Looking at the history of AI and robotics, one can see that their original objective was to make life easier and to assist human beings in different circumstances and situations. However, now and in the near future, as the attitudes of AI and robotics inventors and experts change, and given AI's inherent capacity for environmental acquisition and adaptation, these systems may become predators and put living creatures at risk. They may even inherit the full nature of living creatures. Ultimately, they could create a new universe of their own, or the destiny of our universe could be placed in danger.


Anaesthesia ◽  
2021 ◽  
Vol 76 (S1) ◽  
pp. 171-181 ◽  
Author(s):  
M. McKendrick ◽  
S. Yang ◽  
G. A. McLeod

Author(s):  
Stamatis Karnouskos

The rapid advances in Artificial Intelligence and Robotics will have a profound impact on society, as these technologies will interact with people and their day-to-day lives. Intelligent autonomous robots, whether humanoid/anthropomorphic or not, will have a physical presence, make autonomous decisions, and interact with all stakeholders in society in as yet unforeseen ways. Symbiosis with such sophisticated robots may lead to a fundamental civilizational shift with far-reaching effects, as philosophical, legal, and societal questions about the consciousness, citizenship, rights, and legal status of robots are raised. The aim of this work is to understand the broad scope of potential issues pertaining to law and society by investigating the interplay of law, robots, and society from different angles: legal, social, economic, gender, and ethical perspectives. The results make it evident that in an era of symbiosis with intelligent autonomous robots, neither legal systems nor society is prepared for their prevalence. Therefore, now is the time to start a multi-disciplinary stakeholder discussion and to derive the necessary policies, frameworks, and roadmaps for the most pressing issues.


2021 ◽  
Vol 14 (8) ◽  
pp. 339
Author(s):  
Tatjana Vasiljeva ◽  
Ilmars Kreituss ◽  
Ilze Lulle

This paper examines public and business attitudes towards artificial intelligence and the main factors that influence them. The study aims to evaluate the current attitudes of the public and of employees in various industries towards AI and to investigate the factors that affect those attitudes. The conceptual model is based on the technology–organization–environment (TOE) framework and was tested through analysis of qualitative and quantitative data. Primary data were collected by a public survey, with a questionnaire specially developed for the study, and by semi-structured interviews with experts in the artificial intelligence field and management representatives from various companies. It was discovered that attitude towards AI differs significantly among industries. There is also a significant difference in attitude towards AI between employees at organizations that have already implemented AI solutions and employees at organizations with no intention of implementing them in the near future. The three main factors that have an impact on AI adoption in an organization are top management’s attitude, competition and regulations. After determining the main factors that influence the attitudes of society and companies towards artificial intelligence, the authors provide recommendations for mitigating various negative factors and develop a proposition that justifies the activities needed for successful adoption of innovative technologies.


2021 ◽  
Vol 54 (6) ◽  
pp. 1-35
Author(s):  
Ninareh Mehrabi ◽  
Fred Morstatter ◽  
Nripsuta Saxena ◽  
Kristina Lerman ◽  
Aram Galstyan

With the widespread use of artificial intelligence (AI) systems and applications in our everyday lives, accounting for fairness has gained significant importance in the design and engineering of such systems. AI systems can be used in many sensitive environments to make important and life-changing decisions; thus, it is crucial to ensure that these decisions do not reflect discriminatory behavior toward certain groups or populations. More recently, work in traditional machine learning and deep learning has begun to address such challenges in different subdomains. With the commercialization of these systems, researchers are becoming more aware of the biases that these applications can contain and are attempting to address them. In this survey, we investigated different real-world applications that have exhibited biases in various ways, and we listed different sources of bias that can affect AI applications. We then created a taxonomy of the fairness definitions that machine learning researchers have proposed to avoid the existing bias in AI systems. In addition, we examined different domains and subdomains in AI, showing what researchers have observed with regard to unfair outcomes in state-of-the-art methods and the ways they have tried to address them. There are still many future directions and solutions that can be pursued to mitigate the problem of bias in AI systems. We hope this survey motivates researchers to tackle these issues in the near future by building on existing work in their respective fields.
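As a concrete illustration of one common entry in such a taxonomy of fairness definitions, the sketch below (hypothetical, not taken from the survey) computes the demographic parity difference: the gap in positive-prediction rates between two groups defined by a protected attribute. A classifier satisfies demographic parity under this definition when the gap is zero.

```python
# Minimal sketch of the demographic parity difference, one of the
# group-fairness definitions surveyed in the fairness literature.
# Function name and inputs are illustrative assumptions.

def demographic_parity_difference(y_pred, groups):
    """Absolute difference in positive-prediction rates between two groups.

    y_pred: list of 0/1 predictions
    groups: parallel list of protected-attribute labels (exactly two values)
    """
    rates = {}
    for g in set(groups):
        preds = [p for p, gr in zip(y_pred, groups) if gr == g]
        rates[g] = sum(preds) / len(preds)  # positive rate within group g
    a, b = rates.values()
    return abs(a - b)

preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
# Group A positive rate 0.75, group B 0.25, so the difference is 0.5.
print(demographic_parity_difference(preds, groups))
```

Other definitions in the taxonomy (e.g. equalized odds) condition these rates on the true label, which is why no single metric captures all notions of fairness at once.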


Author(s):  
Sam Hepenstal ◽  
Leishi Zhang ◽  
Neesha Kodogoda ◽  
B.L. William Wong

Criminal investigations are guided by repetitive and time-consuming information retrieval tasks, often with high risk and high consequence. If artificial intelligence (AI) systems can automate lines of inquiry, they could reduce the burden on analysts and allow them to focus their efforts on analysis. However, there is a critical need for algorithmic transparency to address ethical concerns. In this paper, we use data gathered from Cognitive Task Analysis (CTA) interviews of criminal intelligence analysts and apply a novel analysis method to elicit question networks. We show how these networks form an event tree, where events are consolidated by capturing analyst intentions. The event tree is simplified with a Dynamic Chain Event Graph (DCEG) that provides a foundation for transparent autonomous investigations.
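To give a rough intuition for how question sequences can consolidate into an event tree, the following sketch (my own simplified illustration, not the paper's method, which further reduces the tree to a DCEG) merges analyst question sequences into a prefix tree: inquiries that open with the same questions share a path, so common lines of inquiry become single branches.

```python
# Hypothetical sketch: consolidate question sequences into an event
# tree represented as a nested dict (each key is a question, each
# value is the subtree of follow-up questions).

def build_event_tree(sequences):
    """Merge question sequences into a nested-dict prefix tree."""
    root = {}
    for seq in sequences:
        node = root
        for question in seq:
            # Reuse the existing branch if this question was already asked
            # at this point in an earlier inquiry; otherwise create it.
            node = node.setdefault(question, {})
    return root

inquiries = [
    ["who owns the vehicle?", "where is the owner?"],
    ["who owns the vehicle?", "any prior offences?"],
]
tree = build_event_tree(inquiries)
# Both inquiries share the opening question, so the root has a single
# branch that forks into the two follow-up questions.
```

The consolidation step is what makes the structure auditable: an autonomous system following such a tree can report which branch of inquiry it took and why.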


Proceedings ◽  
2020 ◽  
Vol 63 (1) ◽  
pp. 44
Author(s):  
Lavinia Andrei ◽  
Doru-Laurean Baldean ◽  
Adela-Ioana Borzan

A control program was designed with the Unity 5 virtual reality application for the automotive and robotics field. A virtual model of a robotic car was first tested in a virtual reality program. After optimization, the smart controller was implemented on a specific model of the automated Chevrolet Camaro. The main objective of the present paper is to design a control program model that can be tested both in virtual reality and in a real-size car. Results concerning the virtual modeling of an automated car and its artificial intelligence controls are presented and discussed, outlining the forces, torques, and context-awareness capabilities of the car.


2019 ◽  
Vol 3 (2) ◽  
pp. 34
Author(s):  
Hiroshi Yamakawa

In a human society with emergent technology, the destructive actions of some pose a danger to the survival of all of humankind, increasing the need to maintain peace by overcoming universal conflicts. However, human society has not yet achieved complete global peacekeeping. Fortunately, a new possibility for peacekeeping among human societies, using the appropriate interventions of an advanced system, may be available in the near future. To achieve this goal, an artificial intelligence (AI) system must operate continuously and stably (condition 1) and have an intervention method for maintaining peace among human societies based on a common value (condition 2). As a premise, however, it is necessary to have a minimum common value upon which all of human society can agree (condition 3). In this study, an AI system to achieve condition 1 was investigated. This system was designed as a group of distributed intelligent agents (IAs) to ensure robust and rapid operation. Even if common goals are shared among all IAs, each autonomous IA acts on its own local values to adapt quickly to the environment it faces. Thus, conflicts between IAs are inevitable, and this situation sometimes interferes with the achievement of commonly shared goals. Even so, the IAs can maintain peace within their own societies if all the dispersed IAs believe that all other IAs aim for socially acceptable goals. However, communication channel problems, comprehension problems, and computational complexity problems are barriers to realization. These problems can be overcome by introducing an appropriate goal-management system in the case of computer-based IAs. An IA society could then achieve its goals peacefully, efficiently, and consistently; therefore, condition 1 should be achievable. In contrast, humans are restricted by their biological nature and tend to interact with others similar to themselves, so the eradication of conflicts among humans is more difficult.

