Toward a Normative Model of Meaningful Human Control over Weapons Systems

2021 ◽  
Vol 35 (2) ◽  
pp. 245-272
Author(s):  
Daniele Amoroso ◽  
Guglielmo Tamburrini

Abstract: The notion of meaningful human control (MHC) has gathered overwhelming consensus and interest in the autonomous weapons systems (AWS) debate. By shifting the focus of this debate to MHC, one sidesteps recalcitrant definitional issues about the autonomy of weapons systems and profitably moves the normative discussion forward. Some delegations participating in discussions at the Group of Governmental Experts on Lethal Autonomous Weapons Systems meetings endorsed the notion of MHC with the proviso that one size of human control does not fit all weapons systems and uses thereof. Building on this broad suggestion, we propose a “differentiated”—but also “principled” and “prudential”—framework for MHC over weapons systems. The need for a differentiated approach—namely, an approach acknowledging that the extent of normatively required human control depends on the kind of weapons systems used and contexts of their use—is supported by highlighting major drawbacks of proposed uniform solutions. Within the wide space of differentiated MHC profiles, distinctive ethical and legal reasons are offered for principled solutions that invariably assign to humans the following control roles: (1) “fail-safe actor,” contributing to preventing the weapon's action from resulting in indiscriminate attacks in breach of international humanitarian law; (2) “accountability attractor,” securing legal conditions for international criminal law (ICL) responsibility ascriptions; and (3) “moral agency enactor,” ensuring that decisions affecting the life, physical integrity, and property of people involved in armed conflicts be exclusively taken by moral agents, thereby alleviating the human dignity concerns associated with the autonomous performance of targeting decisions. The prudential character of our framework is expressed by means of a rule, imposing by default the more stringent levels of human control on weapons targeting.
The default rule is motivated by epistemic uncertainties about the behaviors of AWS. Designated exceptions to this rule are admitted only in the framework of an international agreement among states, which expresses the shared conviction that lower levels of human control suffice to preserve the fail-safe actor, accountability attractor, and moral agency enactor requirements on those explicitly listed exceptions. Finally, we maintain that this framework affords an appropriate normative basis for both national arms review policies and binding international regulations on human control of weapons systems.

2021 ◽  
Vol 5 (1) ◽  
pp. 53-72
Author(s):  
Elke Schwarz

In this article, I explore the (im)possibility of human control and question the presupposition that we can exercise morally adequate or meaningful control over AI-supported LAWS. Taking seriously Wiener’s warning that “machines can and do transcend some of the limitations of their designers and that in doing so they may be both effective and dangerous,” I argue that in the LAWS human-machine complex, technological features and the underlying logic of the AI system progressively close the spaces and limit the capacities required for human moral agency.


Author(s):  
Natella Sinyaeva

The article examines, from the standpoint of international humanitarian law, the possibilities for control at the stage of developing autonomous weapons systems. The author notes that the development of autonomous weapons systems raises serious social and ethical concerns, and reviews the existing norms and principles of international humanitarian law that apply to controlling the development and use of such systems. Autonomous weapons systems are considered from the perspective of the distinction between civilians and combatants, and between civilian objects and military objectives, which in turn entails the requirements of precautions in attack and proportionality.


Author(s):  
Laura A. Dickinson

The rise of lethal autonomous weapons systems creates numerous problems for legal regimes meant to ensure public accountability for unlawful uses of force. In particular, international humanitarian law has long relied on enforcement through individual criminal responsibility, which is complicated by autonomous weapons that fragment responsibility for decisions to deploy violence. Accordingly, there may often be no human being with the requisite level of intent to trigger individual responsibility under existing doctrine. In response, perhaps international criminal law could be reformed to account for such issues. Or, in the alternative, greater emphasis on other forms of accountability, such as tort liability and state responsibility might be useful supplements. Another form of accountability that often gets overlooked or dismissed as inconsequential is one that could be termed “administrative accountability.” This chapter provides a close look at this type of accountability and its potential.


2020 ◽  
Author(s):  
Daniele Amoroso

Recent advances in robotics and AI have paved the way to robots autonomously performing a wide variety of tasks in ethically and legally sensitive domains. Among them, a prominent place is occupied by Autonomous Weapons Systems (or AWS), whose legality under international law is currently at the center of a heated academic and diplomatic debate. The AWS debate provides a uniquely representative sample of the (potentially) disruptive impact of new technologies on norms and principles of international law, in that it touches on key questions of international humanitarian law, international human rights law, international criminal law, and State responsibility. Against this backdrop, this book’s primary aim is to explore the international legal implications of autonomy in weapons systems, by inquiring what existing international law has to say in this respect, to what extent the persisting validity of its principles and categories is challenged, and what could be a way forward for future international regulation on the matter. From a broader perspective, the research carried out on the issue of the legality of AWS under international law aspires to offer some more general insights on the normative aspects of the shared control relationship between human decision-makers and artificial agents. Daniele Amoroso is Professor of International Law at the Law Department of the University of Cagliari and member of the International Committee for Robot Arms Control (ICRAC).


Author(s):  
Tim McFarland ◽  
Jai Galliott

The physical and temporal removal of the human from the decision to use lethal force underpins many of the arguments against the development of autonomous weapons systems. In response to these concerns, Meaningful Human Control has risen to prominence as a framing concept in the ongoing international debate. This chapter demonstrates how, in addition to the lack of a universally accepted precise definition, reliance on Meaningful Human Control is conceptually flawed. Overall, this chapter analyzes, problematizes, and explores the nebulous concept of Meaningful Human Control, and in doing so demonstrates that it relies on the mistaken premise that the development of autonomous capabilities in weapons systems constitutes a lack of human control that somehow presents an insurmountable challenge to existing International Humanitarian Law.


2021 ◽  
pp. 237-258
Author(s):  
S. Kate Devitt

The rise of human-information systems, cybernetic systems, and increasingly autonomous systems requires the application of epistemic frameworks to machines and human-machine teams. This chapter discusses higher-order design principles to guide the design, evaluation, deployment, and iteration of Lethal Autonomous Weapons Systems (LAWS) based on epistemic models. Epistemology is the study of knowledge. Epistemic models consider the role of accuracy, likelihoods, beliefs, competencies, capabilities, context, and luck in the justification of actions and the attribution of knowledge. The aim is not to provide ethical justification for or against LAWS, but to illustrate how epistemological frameworks can be used in conjunction with moral apparatus to guide the design and deployment of future systems. The models discussed in this chapter aim to make Article 36 reviews of LAWS systematic, expedient, and evaluable. A Bayesian virtue epistemology is proposed to enable justified actions under uncertainty that meet the requirements of the Laws of Armed Conflict and International Humanitarian Law. Epistemic concepts can provide some of the apparatus to meet explainability and transparency requirements in the development, evaluation, deployment, and review of ethical AI.


Author(s):  
Peter Asaro

As the militaries of technologically advanced nations seek to apply increasingly sophisticated AI and automation to weapons technologies, a host of ethical, legal, social, and political questions arise. Central among these is whether it is ethical to delegate the decision to use lethal force to an autonomous system that is not under meaningful human control. Further questions arise as to who or what could or should be held responsible when lethal force is used improperly by such systems. This chapter argues that current autonomous weapons are not legal or moral agents that can be held morally responsible or legally accountable for their choices and actions, and that therefore humans need to maintain control over such weapons systems.


Author(s):  
Elies van Sliedregt

The reality of warfare has changed considerably over time. While most, if not all, armed conflicts were once fought between states, many are now fought within states. Particularly since the end of the Cold War the world has witnessed an outbreak of non-international armed conflicts, often of an ethnic nature. Since the laws of war are for the most part still premised on the concept of classic international armed conflict, it proved difficult to fit this law into ‘modern’ war crimes trials dealing with crimes committed during non-international armed conflicts. The criminal law process has therefore ‘updated’ the laws of war. The international criminal judge has brought the realities of modern warfare into line with the purpose of the laws of war (the prevention of unnecessary suffering and the enforcement of ‘fair play’). It is in war crimes law that international humanitarian law has been further developed. This chapter discusses the shift from war crimes law to international criminal law, the concept of state responsibility for individual liability for international crimes, and the nature and sources of international criminal law.


2019 ◽  
Vol 101 (912) ◽  
pp. 1091-1115
Author(s):  
Dustin A. Lewis

Abstract: Legal controversies and disagreements have arisen about the timing and duration of numerous contemporary armed conflicts, not least regarding how to discern precisely when those conflicts began and when they ended (if indeed they have ended). The existence of several long-running conflicts – some stretching across decades – and the corresponding suffering that they entail accentuate the stakes of these debates. To help shed light on some select aspects of the duration of contemporary wars, this article analyzes two sets of legal issues: first, the notion of “protracted armed conflict” as formulated in a war-crimes-related provision of the Rome Statute of the International Criminal Court, and second, the rules, principles and standards laid down in international humanitarian law and international criminal law pertaining to when armed conflicts have come to an end. The upshot of the analysis is that under existing international law, there is no general category of “protracted armed conflict”; that the question of whether to pursue such a category raises numerous challenges; and that several dimensions of the law concerning the end of armed conflict are unsettled.

