A critique of the ‘as–if’ approach to machine ethics

AI and Ethics ◽  
2021 ◽  
Author(s):  
Jun Kyung You

Abstract
In this paper, I argue that replicating the effect of ethical decision-making is insufficient for achieving functional morality in artificial moral agents (AMAs). I call this the “as if” approach to machine ethics. I object to this approach on the grounds that it requires one to commit to substantive meta-ethical claims about morality that are at least unwarranted, and perhaps even wrong. To defend this claim, this paper does three things: 1. I explain Heidegger’s Enframing [Gestell] and my notion of “Ready-Ethics,” which, in combination, can hopefully provide a plausible account of the motivation behind the “as if” approach; 2. I go over specific examples of Ethical AI projects to show how the “as if” approach commits these projects to versions of moral generalism and moral naturalism. I then explain the flaws of the views that the “as if” approach necessitates, and suggest that they cannot account for the justificatory process crucial to human moral life. I explain how Habermas’ account of the justificatory process could cast doubt on the picture of morality that the meta-ethical views of the “as if” approach propose; 3. Finally, I defend the relevance of discussing these topics for the purpose of functional morality in AMAs.

2016 ◽  
Vol 14 (3) ◽  
pp. 231-253 ◽  
Author(s):  
Rollin M. Omari ◽  
Masoud Mohammadian

Purpose
The developing academic field of machine ethics seeks to make artificial agents safer as they become more pervasive throughout society. In contrast to computer ethics, machine ethics is concerned with the behavior of machines toward human users and other machines. This study aims to use an action-based ethical theory, founded on a combination of deontological and teleological theories of ethics, in the construction of an artificial moral agent (AMA).

Design/methodology/approach
The decision results derived by the AMA are acquired via fuzzy logic interpretation of the relative values of the steady-state simulations of the corresponding rule-based fuzzy cognitive map (RBFCM).

Findings
Through the use of RBFCMs, this paper illustrates the possibility of incorporating ethical components into machines, where latent semantic analysis (LSA) and RBFCMs can be used to model dynamic and complex situations and to provide abilities in acquiring causal knowledge.

Research limitations/implications
This approach is especially appropriate for the data-poor and uncertain situations common in ethics. Nonetheless, to ensure that a machine with an ethical component can function autonomously in the world, research in artificial intelligence will need to further investigate the representation and determination of ethical principles, the incorporation of these principles into a system’s decision procedure, ethical decision-making with incomplete and uncertain knowledge, the explanation of decisions made using ethical principles, and the evaluation of systems that act on the basis of ethical principles.

Practical implications
To date, the conducted research has contributed to a theoretical foundation for machine ethics by exploring the rationale for, and the feasibility of, adding an ethical dimension to machines. Further, the constructed AMA illustrates the possibility of using an action-based ethical theory that provides guidance in ethical decision-making according to the precepts of its respective duties. The use of LSA illustrates its powerful capabilities in understanding text and its potential application in information retrieval within AMAs. The use of cognitive maps provides an approach and a decision procedure for resolving conflicts between different duties.

Originality/value
This paper suggests that cognitive maps could be used in AMAs as tools for meta-analysis, where comparisons regarding multiple ethical principles and duties can be examined and considered. With cognitive mapping, complex and abstract variables that cannot easily be measured but are important to decision-making can be modeled. This approach is especially appropriate for the data-poor and uncertain situations common in ethics.
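The steady-state simulation and fuzzy interpretation described above can be sketched roughly as follows. The concepts, causal weights, squashing function, and fuzzy thresholds here are illustrative assumptions for a toy map, not the authors' actual RBFCM:

```python
import math

# Toy fuzzy cognitive map (FCM). Concepts and weights are hypothetical:
# 0 = "risk of harm", 1 = "duty to inform", 2 = "act is permissible"
W = [
    [1.0, 0.6, -0.8],  # risk of harm: sustains itself, raises the duty
                       # to inform, suppresses permissibility
    [0.0, 0.0,  0.4],  # duty to inform: mildly raises permissibility
    [0.0, 0.0,  0.0],  # the decision concept drives nothing downstream
]

def squash(x, steepness=5.0):
    """Sigmoid squashing of concept activations into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-steepness * x))

def steady_state(state, W, max_iters=100, tol=1e-6):
    """Iterate x[j] <- f(sum_i W[i][j] * x[i]) until the map settles."""
    for _ in range(max_iters):
        new = [squash(sum(W[i][j] * state[i] for i in range(len(state))))
               for j in range(len(state))]
        if max(abs(a - b) for a, b in zip(new, state)) < tol:
            return new
        state = new
    return state

def fuzzy_label(activation):
    """Crude fuzzy-set interpretation of the decision concept's value."""
    if activation < 0.33:
        return "impermissible"
    if activation < 0.66:
        return "uncertain"
    return "permissible"

# Strong initial evidence of harm, weak duty signal, undecided verdict.
initial = [0.9, 0.1, 0.5]
final = steady_state(initial, W)
print(fuzzy_label(final[2]))  # prints "impermissible" for this scenario
```

The point of the sketch is the division of labor the abstract describes: the map's causal dynamics settle into a steady state, and a separate fuzzy interpretation of the decision concept's final activation yields the verdict.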


Author(s):  
Marten H. L. Kaas

The ethical decision-making and behaviour of artificially intelligent systems is increasingly important given the prevalence of these systems and the impact they can have on human well-being. Many current approaches to implementing machine ethics utilize top-down approaches, that is, ensuring the ethical decision-making and behaviour of an agent via its adherence to explicitly defined ethical rules or principles. Despite the attractiveness of this approach, this chapter explores how all top-down approaches to implementing machine ethics are fundamentally limited and how bottom-up approaches, in particular, reinforcement learning methods, are not beset by the same problems as top-down approaches. Bottom-up approaches possess significant advantages that make them better suited for implementing machine ethics.
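The contrast the chapter draws can be made concrete with a minimal bottom-up sketch: tabular value learning on a one-state toy problem. No explicit rule against deception appears anywhere in the code; a hypothetical scalar feedback signal (an assumption of this example, standing in for human evaluation) shapes the agent's action values until honesty dominates:

```python
import random

random.seed(0)  # reproducible exploration

# Two candidate actions; no explicit ethical rule is coded anywhere.
actions = ["deceive", "be honest"]

def feedback(action):
    """Hypothetical scalar feedback (e.g. from human evaluators):
    deception is penalized, honesty is modestly rewarded."""
    return -1.0 if action == "deceive" else 0.5

# Tabular action values, learned purely from feedback.
Q = {a: 0.0 for a in actions}
alpha, epsilon = 0.1, 0.2  # learning rate and exploration rate

for _ in range(500):
    # Epsilon-greedy: mostly exploit the current best estimate.
    a = random.choice(actions) if random.random() < epsilon else max(Q, key=Q.get)
    Q[a] += alpha * (feedback(a) - Q[a])

print(max(Q, key=Q.get))  # prints "be honest"
```

This is the bottom-up picture in miniature: the "ethical" behaviour is an emergent preference learned from feedback, rather than adherence to a rule defined in advance by the designer.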


2015 ◽  
Vol 3 (4) ◽  
pp. 359-364 ◽  
Author(s):  
Karin L. Price ◽  
Margaret E. Lee ◽  
Gia A. Washington ◽  
Mary L. Brandt

1992 ◽  
Author(s):  
Michael C. Gottlieb ◽  
Jack R. Sibley

Author(s):  
Vykinta Kligyte ◽  
Shane Connelly ◽  
Chase E. Thiel ◽  
Lynn D. Devenport ◽  
Ryan P. Brown ◽  
...  
