Coupling levels of abstraction in understanding meaningful human control of autonomous weapons: a two-tiered approach

Author(s):  
Steven Umbrello

The international debate on the ethics and legality of autonomous weapon systems (AWS), together with the call for a ban, focuses primarily on the nebulous concept of fully autonomous AWS: systems capable of target selection and engagement absent human supervision or control. This paper argues that such a conception of autonomy is divorced from both military planning and decision-making operations; it also ignores the design requirements that govern AWS engineering and the subsequent tracking and tracing of moral responsibility. To show how military operations can be coupled with design ethics, this paper marries two different kinds of meaningful human control (MHC), termed levels of abstraction. Under this two-tiered understanding of MHC, the contentious notion of ‘full’ autonomy becomes unproblematic.

Author(s):  
Ilse Verdiesen

Autonomous Weapon Systems (AWS) can be defined as weapon systems equipped with Artificial Intelligence (AI). They are an emerging technology and are increasingly deployed on the battlefield. In the societal debate on Autonomous Weapon Systems, the concept of Meaningful Human Control (MHC) is often mentioned as a requirement, but MHC will not suffice to minimize the unintended consequences of Autonomous Weapon Systems, because the definition of ‘control’ implies that one has the power to influence or direct the course of events, or the ability to manage a machine. The autonomy, interactivity, and adaptability that characterize the AI in Autonomous Weapon Systems inherently imply that control in the strict sense is not possible. A different approach is therefore needed to minimize the unintended consequences of AWS. Several scholars have described the concept of Human Oversight in Autonomous Weapon Systems and in AI more generally. Recently, Taddeo and Floridi (2018) argued that human oversight procedures are necessary to minimize unintended consequences and to compensate for the unfair impacts of AI. In my PhD project, I will analyse the concepts needed to define, model, evaluate, and ensure human oversight in Autonomous Weapon Systems, and design a technical architecture to implement it.


Author(s):  
Kenneth Anderson ◽  
Matthew C. Waxman

An international public debate over the law and ethics of autonomous weapon systems (AWS) has been underway since 2012, pitting those who urge legal regulation of AWS under existing principles and requirements of the international law of armed conflict against opponents who instead favour a preemptive international treaty ban on all such weapons. This chapter provides an introduction to this international debate, offering the main arguments on each side. These include disputes over how to define an AWS, the morality and law of automated targeting and target selection by machine, and the interaction of humans and machines in the context of lethal weapons of war. Although the chapter concludes that a categorical ban on AWS is unjustified morally and legally, favouring instead the law of armed conflict's existing case-by-case legal evaluation, it offers an exposition of the arguments on each side of the AWS issue.


Author(s):  
Tim McFarland ◽  
Jai Galliott

The physical and temporal removal of the human from the decision to use lethal force underpins many of the arguments against the development of autonomous weapons systems. In response to these concerns, Meaningful Human Control has risen to prominence as a framing concept in the ongoing international debate. This chapter demonstrates that, beyond lacking a universally accepted and precise definition, reliance on Meaningful Human Control is conceptually flawed. Overall, the chapter analyzes, problematizes, and explores this nebulous concept, showing that it rests on the mistaken premise that the development of autonomous capabilities in weapons systems constitutes a lack of human control that somehow presents an insurmountable challenge to existing International Humanitarian Law.


2019 ◽  
Vol 10 (1) ◽  
pp. 129-157 ◽  
Author(s):  
Matthijs M Maas

Amidst fears over artificial intelligence ‘arms races’, much of the international debate on governing military uses of AI is still focused on preventing the use of lethal autonomous weapons systems (LAWS). Yet ‘killer robots’ hardly exhaust the potentially problematic capabilities that innovation in military AI (MAI) is set to unlock. Governance initiatives narrowly focused on preserving ‘meaningful human control’ over LAWS therefore risk being bypassed by the technological state of the art. This paper starts from the question: how can we formulate ‘innovation-proof governance’ approaches that are resilient or adaptive to future developments in military AI? I develop a typology of the ways in which MAI innovation can disrupt existing international legal frameworks. This includes ‘direct’ disruption, as new types of MAI capabilities elude categorization under existing regimes, as well as ‘indirect’ disruption, where new capabilities shift the risk landscape of military AI or change the incentives or values of the states developing them. After discussing two potential objections to ‘innovation-proof governance’, I explore the advantages and shortcomings of three possible approaches to innovation-proof governance for military AI. While no definitive blueprint is offered, I suggest key considerations for governance strategies that seek to ensure that military AI remains lawful, ethical, stabilizing, and safe.


2014 ◽  
Author(s):  
M. Harbison ◽  
W. Koon ◽  
V. Smith ◽  
P. Haymon ◽  
D. Niole ◽  
...  

As a result of enhanced performance and mission requirements for Navy ships, ship design has dramatically increased the use of higher-strength, lightweight steels and various local reinforcements, e.g., deck inserts, ring stiffeners, etc., in foundation designs to satisfy the requirements for supporting machinery, consoles, and weapon systems, among others. In addition to operational loading requirements, most of these foundations must also be designed to satisfy shock, vibration, and other combat system requirements. While the same piece of equipment may be used in other ship contracts, the foundations are uniquely designed and require a separate analysis and drawing package. Computer modeling and Finite Element Analysis (FEA) have helped reduce the labor required to analyze foundations, but the high number of “unique” foundations, as well as changes that necessitate a new analysis, still create a large workload for engineers. This is further compounded by increased production costs due to the greater number of unique parts and materials that must be marked, stored, and retrieved later for fabrication. The goal of this project was to determine the cost-savings potential of leveraging past foundations work when designing, analyzing, and drawing foundations in the future. By the project's conclusion, Ingalls will have created a database for rapid access to previously generated foundation information, the framework of which will be publicly available for all shipyards to populate with their own foundation data.
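To make the reuse idea concrete, the following is a minimal sketch of the kind of lookup database the abstract describes: previously qualified foundations are stored with their analyzed load envelopes, and a new requirement is matched against past analyses that already cover it. The schema, table name, and load parameters are hypothetical illustrations, not the framework actually developed at Ingalls.

```python
# Hypothetical sketch of a foundation-reuse database (not the Ingalls schema).
# Stores previously analyzed foundations and finds candidates whose qualified
# load envelope already covers a new requirement, avoiding a fresh analysis.
import sqlite3

conn = sqlite3.connect(":memory:")  # use a file path for a shared database
conn.execute("""
    CREATE TABLE foundations (
        equipment_id   TEXT NOT NULL,  -- supported equipment (console, pump, ...)
        ship_class     TEXT NOT NULL,
        max_shock_g    REAL NOT NULL,  -- shock load the analysis qualified, in g
        max_weight_kg  REAL NOT NULL,  -- equipment weight the analysis qualified
        drawing_ref    TEXT NOT NULL   -- pointer to the analysis/drawing package
    )
""")

# Example record: a console foundation qualified on a previous contract.
conn.execute(
    "INSERT INTO foundations VALUES (?, ?, ?, ?, ?)",
    ("CONSOLE-7A", "DDG-51", 60.0, 450.0, "DWG-118-4432"),
)

def find_reusable(equipment_id, shock_g, weight_kg):
    """Return drawing references for past foundations whose qualified
    shock and weight envelopes cover the new requirement."""
    rows = conn.execute(
        """SELECT drawing_ref FROM foundations
           WHERE equipment_id = ? AND max_shock_g >= ? AND max_weight_kg >= ?""",
        (equipment_id, shock_g, weight_kg),
    )
    return [r[0] for r in rows]

# A new contract needs the same console at lower loads; the prior analysis
# envelopes it, so the existing package is a reuse candidate.
print(find_reusable("CONSOLE-7A", shock_g=45.0, weight_kg=400.0))
```

In practice such a database would carry many more attributes (mounting footprint, vibration class, material callouts), but the envelope query above captures the core cost-saving step: retrieving a prior analysis instead of generating a new one.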


Author(s):  
Peter Asaro

As the militaries of technologically advanced nations seek to apply increasingly sophisticated AI and automation to weapons technologies, a host of ethical, legal, social, and political questions arise. Central among these is whether it is ethical to delegate the decision to use lethal force to an autonomous system that is not under meaningful human control. Further questions arise as to who or what could or should be held responsible when lethal force is used improperly by such systems. This chapter argues that current autonomous weapons are not legal or moral agents that can be held morally responsible or legally accountable for their choices and actions, and that therefore humans need to maintain control over such weapons systems.


2021 ◽  
Vol 5 (1) ◽  
pp. 53-72
Author(s):  
Elke Schwarz

In this article, I explore the (im)possibility of human control and question the presupposition that we can exercise morally adequate or meaningful control over AI-supported LAWS. Taking seriously Wiener’s warning that “machines can and do transcend some of the limitations of their designers and that in doing so they may be both effective and dangerous,” I argue that in the LAWS human-machine complex, the technological features and underlying logic of the AI system progressively close the spaces and limit the capacities required for human moral agency.

