Autonomous Weapons Systems, Artificial Intelligence, and the Problem of Meaningful Human Control

2021 ◽  
Vol 5 (1) ◽  
pp. 53-72
Author(s):  
Elke Schwarz

In this article, I explore the (im)possibility of human control and question the presupposition that we can be in morally adequate or meaningful control of AI-supported lethal autonomous weapons systems (LAWS). Taking seriously Wiener’s warning that “machines can and do transcend some of the limitations of their designers and that in doing so they may be both effective and dangerous,” I argue that in the LAWS human-machine complex, technological features and the underlying logic of the AI system progressively close the spaces and limit the capacities required for human moral agency.

Author(s):  
Peter Asaro

As the militaries of technologically advanced nations seek to apply increasingly sophisticated AI and automation to weapons technologies, a host of ethical, legal, social, and political questions arise. Central among these is whether it is ethical to delegate the decision to use lethal force to an autonomous system that is not under meaningful human control. Further questions arise as to who or what could or should be held responsible when lethal force is used improperly by such systems. This chapter argues that current autonomous weapons are not legal or moral agents that can be held morally responsible or legally accountable for their choices and actions, and that therefore humans need to maintain control over such weapons systems.


Author(s):  
Ilse Verdiesen

Autonomous Weapon Systems (AWS) can be defined as weapons systems equipped with Artificial Intelligence (AI). They are an emerging technology and are increasingly deployed on the battlefield. In the societal debate on Autonomous Weapon Systems, the concept of Meaningful Human Control (MHC) is often put forward as a requirement, but MHC will not suffice to minimize the unintended consequences of Autonomous Weapon Systems, because the definition of ‘control’ implies that one has the power to influence or direct the course of events or the ability to manage a machine. The autonomy, interactivity, and adaptability of AI in Autonomous Weapon Systems inherently imply that control in the strict sense is not possible. Therefore, a different approach is needed to minimize the unintended consequences of AWS. Several scholars have described the concept of Human Oversight in Autonomous Weapon Systems and in AI in general. Recently, Taddeo and Floridi (2018) argued that human oversight procedures are necessary to minimize unintended consequences and to compensate for the unfair impacts of AI. In my PhD project, I will analyse the concepts needed to define, model, evaluate, and ensure human oversight in Autonomous Weapons, and design a technical architecture to implement it.


2020 ◽  
Vol 1 (4) ◽  
pp. 187-194
Author(s):  
Daniele Amoroso ◽  
Guglielmo Tamburrini

Purpose of Review: To provide readers with a compact account of ongoing academic and diplomatic debates about autonomy in weapons systems, that is, about the moral and legal acceptability of letting a robotic system unleash destructive force in warfare and take attendant life-or-death decisions without any human intervention.
Recent Findings: A précis of current debates is provided, focusing on the requirement that all weapons systems, including autonomous ones, should remain under meaningful human control (MHC) in order to be ethically acceptable and lawfully employed. Main approaches to MHC are described and briefly analyzed, distinguishing between uniform, differentiated, and prudential policies for human control of weapons systems.
Summary: The review highlights the crucial role played by the robotics research community in starting ethical and legal debates about autonomy in weapons systems. A concise overview is provided of the main concerns emerging in those early debates: respect for the laws of war, responsibility-ascription issues, violation of the human dignity of potential victims of autonomous weapons systems, and increased risks to global stability. These various concerns have been jointly taken to support the idea that all weapons systems, including autonomous ones, should remain under meaningful human control. Finally, it is emphasized that the MHC idea looms large over the shared control policies to be adopted in other ethically and legally sensitive application domains for robotics and artificial intelligence.


2021 ◽  
Vol 35 (2) ◽  
pp. 245-272
Author(s):  
Daniele Amoroso ◽  
Guglielmo Tamburrini

The notion of meaningful human control (MHC) has gathered overwhelming consensus and interest in the autonomous weapons systems (AWS) debate. By shifting the focus of this debate to MHC, one sidesteps recalcitrant definitional issues about the autonomy of weapons systems and profitably moves the normative discussion forward. Some delegations participating in discussions at the Group of Governmental Experts on Lethal Autonomous Weapons Systems meetings endorsed the notion of MHC with the proviso that one size of human control does not fit all weapons systems and uses thereof. Building on this broad suggestion, we propose a “differentiated”—but also “principled” and “prudential”—framework for MHC over weapons systems. The need for a differentiated approach—namely, one acknowledging that the extent of normatively required human control depends on the kind of weapons system used and the context of its use—is supported by highlighting major drawbacks of proposed uniform solutions. Within the wide space of differentiated MHC profiles, distinctive ethical and legal reasons are offered for principled solutions that invariably assign to humans the following control roles: (1) “fail-safe actor,” contributing to preventing the weapon's action from resulting in indiscriminate attacks in breach of international humanitarian law; (2) “accountability attractor,” securing the legal conditions for responsibility ascriptions under international criminal law (ICL); and (3) “moral agency enactor,” ensuring that decisions affecting the life, physical integrity, and property of people involved in armed conflicts are taken exclusively by moral agents, thereby alleviating the human dignity concerns associated with the autonomous performance of targeting decisions. The prudential character of our framework is expressed by a default rule that imposes the most stringent levels of human control on weapons targeting. This default rule is motivated by epistemic uncertainties about the behavior of AWS. Designated exceptions to the rule are admitted only within the framework of an international agreement among states expressing the shared conviction that lower levels of human control suffice to preserve the fail-safe actor, accountability attractor, and moral agency enactor requirements for those explicitly listed exceptions. Finally, we maintain that this framework affords an appropriate normative basis for both national arms review policies and binding international regulations on human control of weapons systems.
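
As a rough illustration of the default rule's logic, the following Python sketch applies the most stringent level of human control to all weapons targeting by default and permits a lower level only for exceptions explicitly listed in an international agreement. The control levels, weapon classes, and exception table are invented placeholders for illustration, not categories taken from the article.

```python
# Illustrative sketch of a "prudential" default rule for MHC.
# All labels below are hypothetical, not drawn from the article.

# Levels of human control, ordered from most to least stringent.
CONTROL_LEVELS = [
    "human_authorizes_each_attack",   # most stringent (the default)
    "human_on_the_loop",
    "human_supervised_batch",
]
MOST_STRINGENT = CONTROL_LEVELS[0]

# Hypothetical exceptions agreed among states:
# (weapon_class, context_of_use) -> permitted lower control level.
AGREED_EXCEPTIONS = {
    ("point_defense", "naval_anti_missile"): "human_on_the_loop",
}

def required_control(weapon_class: str, context: str) -> str:
    """Apply the default rule: the most stringent human control over
    targeting, unless an explicitly listed exception applies."""
    return AGREED_EXCEPTIONS.get((weapon_class, context), MOST_STRINGENT)

print(required_control("loitering_munition", "urban_area"))
# -> human_authorizes_each_attack (default rule applies)
print(required_control("point_defense", "naval_anti_missile"))
# -> human_on_the_loop (explicitly listed exception)
```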


2020 ◽  
Author(s):  
Marc Canellas ◽  
Rachel Haga

Cite as: M. C. Canellas and R. A. Haga, "Toward meaningful human control of autonomous weapons systems through function allocation," 2015 IEEE International Symposium on Technology and Society (ISTAS), Dublin, 2015, pp. 1-7. doi: 10.1109/ISTAS.2015.7439432

One of the few convergent themes during the first two United Nations Meetings of Experts on autonomous weapons systems (AWS) in 2014 and 2015 was the requirement that there be meaningful human control (MHC) of AWS. What exactly constitutes MHC, however, is still ill-defined. While multiple sets of definitions and analyses have been published and discussed, this work seeks to address two key issues with the current definitions: (1) they are inconsistent about which authorities and responsibilities of human and automated agents need to be regulated, and (2) they lack the specificity that designers would need to systematically integrate these restrictions into AWS designs. Given that MHC centers on the interaction of human and autonomous agents, we leverage the models and metrics of function allocation - the allocation of work between human and autonomous agents - to analyze and compare definitions of MHC and the definitions of AWS proposed by the U.S. Department of Defense. Specifically, we transform the definitions into function allocation form to model and compare them, and then show how a mismatch between authority and responsibility in an exemplar military scenario can still plague human-AWS interactions. In summary, this paper provides a starting point for future research investigating the application of function allocation to questions of MHC and, more generally, to the development of rules and standards for incorporating AWS into the law of armed conflict.
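
As a rough illustration of the function allocation framing, the following Python sketch records, for each work function in a notional targeting cycle, which agent holds the authority to perform it and which agent bears responsibility for its outcome, then flags the authority-responsibility mismatches the paper warns about. The agent labels, work functions, and allocations are hypothetical examples, not the paper's formal models or metrics.

```python
from dataclasses import dataclass

# Hypothetical agent labels and targeting-cycle functions; the
# paper's function allocation models are richer than this sketch.
HUMAN, AUTOMATION = "human", "automation"

@dataclass
class WorkFunction:
    name: str
    authority: str        # the agent empowered to perform the function
    responsibility: str   # the agent held answerable for its outcome

targeting_cycle = [
    WorkFunction("detect_target", AUTOMATION, AUTOMATION),
    WorkFunction("classify_target", AUTOMATION, HUMAN),
    WorkFunction("authorize_engagement", HUMAN, HUMAN),
    WorkFunction("execute_engagement", AUTOMATION, HUMAN),
]

def authority_responsibility_mismatches(functions):
    """Flag functions where one agent acts (authority) while a
    different agent is held answerable (responsibility)."""
    return [f for f in functions if f.authority != f.responsibility]

for f in authority_responsibility_mismatches(targeting_cycle):
    print(f"MISMATCH {f.name}: authority={f.authority}, "
          f"responsibility={f.responsibility}")
```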


2015 ◽  
Vol 6 (2) ◽  
pp. 247-283 ◽  
Author(s):  
Jeroen van den Boogaard

Given the swift pace of technological development, it may be expected that the first truly autonomous weapons systems will soon become available. Once deployed, these weapons will use artificial intelligence to select and attack targets without further human intervention. Autonomous weapons systems raise the question of whether they can comply with international humanitarian law. The principle of proportionality is sometimes cited as an important obstacle to the lawful use of autonomous weapons systems. This article assesses whether the rule on proportionality in attacks would preclude the legal use of autonomous weapons. It analyses aspects of the proportionality rule that would militate against the use of autonomous weapons systems and aspects that would appear to benefit the protection of the civilian population if such weapons systems were used. The article concludes that autonomous weapons are unable to make proportionality assessments at the operational or strategic level on their own, and that humans should therefore not be expected to be completely absent from the battlefield in the near future.


Information ◽  
2021 ◽  
Vol 12 (5) ◽  
pp. 216
Author(s):  
Steven Umbrello ◽  
Nathan Gabriel Wood

Autonomous weapons systems (AWS), sometimes referred to as “killer robots”, are receiving ever more attention, both in public discourse and from scholars and policymakers. Much of this interest is connected to emerging ethical and legal problems linked to increasing autonomy in weapons systems, but there is a general underappreciation of the ways in which existing law might bear on these new technologies. In this paper, we argue that as AWS become more sophisticated and increasingly more capable than flesh-and-blood soldiers, it will increasingly be the case that such soldiers are “in the power” of the AWS that fight against them. This implies that such soldiers ought to be considered hors de combat, and not targeted. In arguing for this point, we draw out a broader conclusion regarding hors de combat status, namely that it must be viewed contextually, with close reference to the capabilities of combatants on both sides of any discrete engagement. Given this point, and the fact that AWS may come in many shapes and sizes and can be made for many different missions, we argue that each particular AWS will likely need its own standard for when enemy soldiers are deemed hors de combat. We conclude by examining how these nuanced views of hors de combat status might bear on meaningful human control of AWS.


Author(s):  
Tim McFarland ◽  
Jai Galliott

The physical and temporal removal of the human from the decision to use lethal force underpins many of the arguments against the development of autonomous weapons systems. In response to these concerns, Meaningful Human Control has risen to prominence as a framing concept in the ongoing international debate. This chapter demonstrates how, in addition to the lack of a universally accepted precise definition, reliance on Meaningful Human Control is conceptually flawed. Overall, this chapter analyzes, problematizes, and explores the nebulous concept of Meaningful Human Control, and in doing so demonstrates that it relies on the mistaken premise that the development of autonomous capabilities in weapons systems constitutes a lack of human control that somehow presents an insurmountable challenge to existing International Humanitarian Law.


2019 ◽  
Vol 10 (1) ◽  
pp. 129-157 ◽  
Author(s):  
Matthijs M Maas

Amidst fears over artificial intelligence ‘arms races’, much of the international debate on governing military uses of AI is still focused on preventing the use of lethal autonomous weapons systems (LAWS). Yet ‘killer robots’ hardly exhaust the potentially problematic capabilities that innovation in military AI (MAI) is set to unlock. Governance initiatives narrowly focused on preserving ‘meaningful human control’ over LAWS therefore risk being bypassed by the technological state of the art. This paper starts from the question: how can we formulate ‘innovation-proof governance’ approaches that are resilient or adaptive to future developments in military AI? I develop a typology of the ways in which MAI innovation can disrupt existing international legal frameworks. This includes ‘direct’ disruption – as new types of MAI capabilities elude categorization under existing regimes – as well as ‘indirect’ disruption, where new capabilities shift the risk landscape of military AI, or change the incentives or values of the states developing them. After discussing two potential objections to ‘innovation-proof governance’, I explore the advantages and shortcomings of three possible approaches to innovation-proof governance for military AI. While no definitive blueprint is offered, I suggest key considerations for governance strategies that seek to ensure that military AI remains lawful, ethical, stabilizing, and safe.

