Debate on the Ethics of Developing AI for Lethal Autonomous Weapons

2021, Vol 5 (1), pp. 133-142
Author(s): Jai Galliott, John Forge

In this philosophical debate on the ethics of developing AI for Lethal Autonomous Weapons, Jai Galliott argues that a “blanket prohibition on ‘AI in weapons,’ or participation in the design and engineering of artificially intelligent weapons, would have unintended consequences due to its lack of nuance.” In contrast to Galliott, John Forge contends that “the only course of action for a moral person is not to engage in weapons research.”

Author(s): Ilse Verdiesen

Autonomous Weapon Systems (AWS) can be defined as weapon systems equipped with Artificial Intelligence (AI). They are an emerging technology and are increasingly deployed on the battlefield. In the societal debate on Autonomous Weapon Systems, the concept of Meaningful Human Control (MHC) is often put forward as a requirement, but MHC alone will not suffice to minimize the unintended consequences of Autonomous Weapon Systems, because the definition of ‘control’ implies that one has the power to influence or direct the course of events, or the ability to manage a machine. The autonomy, interactivity, and adaptability that characterize the AI in Autonomous Weapon Systems inherently imply that control in the strict sense is not possible. A different approach is therefore needed to minimize the unintended consequences of AWS. Several scholars have described the concept of human oversight in Autonomous Weapon Systems and in AI more generally; most recently, Taddeo and Floridi (2018) argue that human oversight procedures are necessary to minimize unintended consequences and to compensate for unfair impacts of AI. In my PhD project, I will analyse the concepts needed to define, model, evaluate, and ensure human oversight in autonomous weapons, and design a technical architecture to implement it.
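The abstract does not describe the proposed architecture itself. Purely as an illustrative sketch of what an oversight checkpoint might look like in code (all names, types, and thresholds here are hypothetical, not Verdiesen's design), the fragment below escalates low-confidence or high-risk actions to a human reviewer and records every decision for after-the-fact audit, rather than assuming moment-to-moment control:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Callable, List

class Verdict(Enum):
    APPROVE = "approve"
    REJECT = "reject"

@dataclass
class ProposedAction:
    """A hypothetical action proposed by an autonomous system."""
    description: str
    confidence: float       # system's self-reported confidence, 0..1
    collateral_risk: float  # estimated risk to non-combatants, 0..1

AUDIT_LOG: List[str] = []   # stand-in for a persistent audit trail

def oversight_gate(action: ProposedAction,
                   human_review: Callable[[ProposedAction], Verdict],
                   confidence_floor: float = 0.9,
                   risk_ceiling: float = 0.1) -> Verdict:
    """Route every proposed action through an oversight checkpoint.

    Actions outside conservative bounds are escalated to a human
    reviewer; every verdict is logged so it can be reviewed later.
    """
    if action.confidence < confidence_floor or action.collateral_risk > risk_ceiling:
        verdict = human_review(action)  # a human decides borderline cases
    else:
        verdict = Verdict.APPROVE
    AUDIT_LOG.append(f"{action.description}: {verdict.value}")
    return verdict

# Usage: a reviewer stub that rejects anything escalated to it.
cautious_reviewer = lambda action: Verdict.REJECT
print(oversight_gate(ProposedAction("engage target", 0.75, 0.3), cautious_reviewer))
```

The design choice the sketch illustrates is the shift the abstract argues for: oversight as an auditable procedure wrapped around the system's outputs, rather than ‘control’ over its internal behaviour.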


2018, Vol 6 (1), pp. 183
Author(s): Elliot Winter

Autonomous machines are moving rapidly from science fiction to science fact. The defining feature of this technology is that it can operate independently of human control. Consequently, society must consider how ‘decisions’ are to be made by autonomous machines. The matter is particularly acute in circumstances where harm is inevitable no matter what course of action is taken. This dilemma has been identified in the context of autonomous vehicles driving under the regulation of domestic law, and, there, governments seem to be moving towards a utilitarian solution to inevitable harm. This leads one to question whether utilitarianism should be transposed into the context of autonomous weapons, which might soon operate on the battlefield under the gaze of humanitarian law. The argument here is that it should, because humanitarian law includes the core principle of ‘proportionality’, which is fundamentally a utilitarian concept, requiring that any gain derived from an attack outweigh the harm caused. However, while human soldiers are always able to come to a view on proportionality, albeit a subjective one, there is much doubt over how an autonomous weapon might determine what is proportionate. There is a very large gap between our embryonic understanding of utilitarianism in relation to autonomous vehicles manoeuvring around a city on the one hand, and what would be required for armed robots patrolling a battlespace on the other. Bridging this gap is fraught with difficulty, but perhaps the best starting point is to take Bentham’s expression of utilitarian mechanics and build upon it. With conscious effort and, ideally, collaboration, states could use the process of applying his classic theory to this very modern problem to raise the standard of protection offered to those caught up in conflict.
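The abstract frames proportionality as a utilitarian weighing of anticipated gain against anticipated harm, with Bentham's calculus as a starting point. As a minimal sketch of the arithmetic shape of such a test (all names, units, and numbers are hypothetical, and the sketch is deliberately silent on the hard problem Winter identifies, namely how a machine would estimate these quantities in the first place):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Outcome:
    """One anticipated consequence of an attack (illustrative only)."""
    magnitude: float    # size of the benefit or harm, in arbitrary units
    probability: float  # Bentham's 'certainty or uncertainty', 0..1

def expected_value(outcomes: List[Outcome]) -> float:
    """Probability-weighted sum: the arithmetic core of Bentham's calculus."""
    return sum(o.magnitude * o.probability for o in outcomes)

def is_proportionate(anticipated_gain: List[Outcome],
                     anticipated_harm: List[Outcome]) -> bool:
    """Toy proportionality test: expected gain must outweigh expected harm."""
    return expected_value(anticipated_gain) > expected_value(anticipated_harm)

# Usage: a likely, modest gain weighed against an unlikely, severe harm.
gain = [Outcome(magnitude=10.0, probability=0.8)]
harm = [Outcome(magnitude=30.0, probability=0.2)]
print(is_proportionate(gain, harm))  # True: 8.0 > 6.0
```

Everything contentious lives in the inputs, not the comparison; assigning magnitudes and probabilities to battlefield outcomes is precisely the gap between city streets and battlespaces that the abstract says must be bridged.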


2020, Vol 43
Author(s): Dan Simon, Keith J. Holyoak

Abstract: Cushman characterizes rationalization as the inverse of rational reasoning, but this distinction is psychologically questionable. Coherence-based reasoning highlights a subtler form of bidirectionality: by distorting task attributes to make one course of action appear superior to its rivals, a patina of rationality is bestowed on the choice. This mechanism drives choice and action, rather than just following in their wake.


2020, Vol 19 (2), pp. 63-74
Author(s): Klaus Moser, Hans-Georg Wolff, Roman Soucek

Abstract: Escalation of commitment occurs when a course of action is continued despite repeated setbacks (e.g., maintaining an employment relationship despite severe performance problems). We analyze process accountability (PA) as a de-escalation technique that helps decision makers discontinue a failing course of action, and we show how time moderates both the behavioral and cognitive processes involved: (1) because sound decisions should be based on (ideally unbiased) information search, which requires time, the effect of PA on de-escalation increases over time; (2) because continued information search itself creates behavioral commitment, the debiasing effect of PA on information search diminishes over time; (3) consistent with the tunnel-vision notion, the effect of less biased information search on de-escalation decreases over time.


2000
Author(s): Jerzy W. Rozenblit, Michael J. Barnes, Faisal Momen, Jose A. Quijada, ...
