The Design of Human Oversight in Autonomous Weapon Systems

Author(s):  
Ilse Verdiesen

Autonomous Weapon Systems (AWS) can be defined as weapon systems equipped with Artificial Intelligence (AI). They are an emerging technology and are increasingly deployed on the battlefield. In the societal debate on Autonomous Weapon Systems, the concept of Meaningful Human Control (MHC) is often put forward as a requirement, but MHC will not suffice to minimize the unintended consequences of Autonomous Weapon Systems, because the definition of ‘control’ implies that one has the power to influence or direct the course of events or the ability to manage a machine. The autonomy, interactivity, and adaptability of the AI in Autonomous Weapon Systems inherently imply that control in the strict sense is not possible. Therefore, a different approach is needed to minimize the unintended consequences of AWS. Several scholars have described the concept of Human Oversight in Autonomous Weapon Systems and in AI in general. Most recently, Taddeo and Floridi (2018) argue that human oversight procedures are necessary to minimize unintended consequences and to compensate for the unfair impacts of AI. In my PhD project, I will analyse the concepts needed to define, model, evaluate, and ensure human oversight in Autonomous Weapons and design a technical architecture to implement it.

Author(s):  
Peter Asaro

As the militaries of technologically advanced nations seek to apply increasingly sophisticated AI and automation to weapons technologies, a host of ethical, legal, social, and political questions arise. Central among these is whether it is ethical to delegate the decision to use lethal force to an autonomous system that is not under meaningful human control. Further questions arise as to who or what could or should be held responsible when lethal force is used improperly by such systems. This chapter argues that current autonomous weapons are not legal or moral agents that can be held morally responsible or legally accountable for their choices and actions, and that therefore humans need to maintain control over such weapons systems.


2021, Vol. 5 (1), pp. 53-72
Author(s):  
Elke Schwarz

In this article, I explore the (im)possibility of human control and question the presupposition that we can be in morally adequate or meaningful control of AI-supported lethal autonomous weapons systems (LAWS). Taking seriously Wiener’s warning that “machines can and do transcend some of the limitations of their designers and that in doing so they may be both effective and dangerous,” I argue that in the LAWS human-machine complex, technological features and the underlying logic of the AI system progressively close the spaces and limit the capacities required for human moral agency.


2020, Vol. 1 (4), pp. 187-194
Author(s):  
Daniele Amoroso ◽  
Guglielmo Tamburrini

Abstract. Purpose of Review: To provide readers with a compact account of ongoing academic and diplomatic debates about autonomy in weapons systems, that is, about the moral and legal acceptability of letting a robotic system unleash destructive force in warfare and take the attendant life-or-death decisions without any human intervention. Recent Findings: A précis of current debates is provided, focusing on the requirement that all weapons systems, including autonomous ones, should remain under meaningful human control (MHC) in order to be ethically acceptable and lawfully employed. The main approaches to MHC are described and briefly analyzed, distinguishing between uniform, differentiated, and prudential policies for human control of weapons systems. Summary: The review highlights the crucial role played by the robotics research community in starting ethical and legal debates about autonomy in weapons systems. A concise overview is provided of the main concerns that emerged in those early debates: respect for the laws of war, responsibility-ascription issues, violation of the human dignity of potential victims of autonomous weapons systems, and increased risks to global stability. These concerns have jointly been taken to support the idea that all weapons systems, including autonomous ones, should remain under MHC. Finally, it is emphasized that the MHC idea looms large in the shared-control policies to be adopted in other ethically and legally sensitive application domains of robotics and artificial intelligence.


The question of whether new rules or regulations are required to govern, restrict, or even prohibit the use of autonomous weapons systems has been the subject of debate for the better part of a decade. Despite the claims of advocacy groups, the way ahead remains unclear since the international community has yet to agree on a specific definition of Lethal Autonomous Weapons Systems, and the great powers have largely refused to support an effective ban. In this vacuum, the public has been presented with a heavily one-sided view of “Killer Robots.” This volume presents a more nuanced approach to autonomous weapon systems that recognizes the need to progress beyond a discourse framed by the Terminator and HAL 9000. Reshaping the discussion around this emerging military innovation requires a new line of thought and a willingness to challenge the orthodoxy. Lethal Autonomous Weapons: Re-Examining the Law and Ethics of Robotic Warfare therefore focuses on exploring the moral and legal issues associated with the design, development, and deployment of lethal autonomous weapons. In this volume, we bring together some of the most prominent academics and academic-practitioners in the lethal autonomous weapons space and seek to return some balance to the debate. As part of this effort, we recognize that society needs to invest in hard conversations that tackle the ethics, morality, and law of these new digital technologies and understand the human role in their creation and operation.


Author(s):  
Steven Umbrello

Abstract: The international debate on the ethics and legality of autonomous weapon systems (AWS), along with the call for a ban, focuses primarily on the nebulous concept of fully autonomous AWS, that is, AWS capable of target selection and engagement absent human supervision or control. This paper argues that such a conception of autonomy is divorced from both military planning and decision-making operations; it also ignores the design requirements that govern AWS engineering and the subsequent tracking and tracing of moral responsibility. To show how military operations can be coupled with design ethics, this paper marries two different kinds of meaningful human control (MHC), termed levels of abstraction. Under this two-tiered understanding of MHC, the contentious notion of ‘full’ autonomy becomes unproblematic.


Author(s):  
Naresh Kshetri

The study of computer ethics has reached the point where ethics for Artificial Intelligence, robots, fuzzy systems, autonomous vehicles, and Autonomous Weapon Systems is implemented so that machines can operate without human intervention and without harming others. This survey presents prior work in computer ethics with respect to artificial intelligence, robot weaponry, fuzzy systems, and autonomous vehicles. The paper discusses the different ethics and scenarios up through current technological advancements and summarizes the advantages and disadvantages of the different ethics and the need for morality. It is observed that all of these ethics are equally important today, but human control and responsibility matter most. The most recent technology can be implemented or improved through careful observation by, and the involvement of, bodies and instruments such as the United Nations, the International Committee for Robot Arms Control, and the Geneva Conventions.


2021, Vol. 35 (2), pp. 245-272
Author(s):  
Daniele Amoroso ◽  
Guglielmo Tamburrini

Abstract: The notion of meaningful human control (MHC) has gathered overwhelming consensus and interest in the autonomous weapons systems (AWS) debate. By shifting the focus of this debate to MHC, one sidesteps recalcitrant definitional issues about the autonomy of weapons systems and profitably moves the normative discussion forward. Some delegations participating in the Group of Governmental Experts on Lethal Autonomous Weapons Systems meetings endorsed the notion of MHC with the proviso that one size of human control does not fit all weapons systems and uses thereof. Building on this broad suggestion, we propose a “differentiated,” but also “principled” and “prudential,” framework for MHC over weapons systems. The need for a differentiated approach, namely one acknowledging that the extent of normatively required human control depends on the kind of weapons system used and the context of its use, is supported by highlighting major drawbacks of proposed uniform solutions. Within the wide space of differentiated MHC profiles, distinctive ethical and legal reasons are offered for principled solutions that invariably assign to humans the following control roles: (1) “fail-safe actor,” contributing to preventing the weapon's action from resulting in indiscriminate attacks in breach of international humanitarian law; (2) “accountability attractor,” securing the legal conditions for responsibility ascriptions under international criminal law (ICL); and (3) “moral agency enactor,” ensuring that decisions affecting the life, physical integrity, and property of people involved in armed conflicts are taken exclusively by moral agents, thereby alleviating the human dignity concerns associated with the autonomous performance of targeting decisions. The prudential character of our framework is expressed by a default rule that imposes the most stringent levels of human control on weapons targeting. The default rule is motivated by epistemic uncertainties about the behaviors of AWS. Designated exceptions to this rule are admitted only within the framework of an international agreement among states expressing the shared conviction that lower levels of human control suffice to preserve the fail-safe actor, accountability attractor, and moral agency enactor requirements for the explicitly listed exceptions. Finally, we maintain that this framework affords an appropriate normative basis both for national arms review policies and for binding international regulations on human control of weapons systems.
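To make the shape of this framework concrete, the following is a minimal Python sketch of the prudential default rule together with a schematic check on the three principled control roles. Every name in it (the control levels, the exception register, the role thresholds) is an illustrative assumption, not the authors' formalization.

```python
from dataclasses import dataclass
from enum import IntEnum


class ControlLevel(IntEnum):
    """Illustrative human-control levels; higher values are more stringent."""
    SUPERVISED_AUTONOMY = 1    # human oversight at mission level only
    HUMAN_ON_THE_LOOP = 2      # human supervises and can veto in real time
    FULL_HUMAN_TARGETING = 3   # a human takes every targeting decision


@dataclass(frozen=True)
class Deployment:
    weapon_system: str   # e.g., "close-in point defence"
    use_context: str     # e.g., "naval anti-missile defence"


# Hypothetical register of exceptions agreed among states; any deployment
# not listed here falls under the stringent default rule.
AGREED_EXCEPTIONS: dict[Deployment, ControlLevel] = {
    Deployment("close-in point defence", "naval anti-missile defence"):
        ControlLevel.HUMAN_ON_THE_LOOP,
}


def required_control_level(d: Deployment) -> ControlLevel:
    """Prudential default rule: the most stringent level applies unless an
    explicitly listed, internationally agreed exception covers this weapon
    system and context of use."""
    return AGREED_EXCEPTIONS.get(d, ControlLevel.FULL_HUMAN_TARGETING)


def roles_preserved(level: ControlLevel) -> dict[str, bool]:
    """Schematic check that the three principled roles survive at a given
    level; the thresholds are assumptions for illustration only."""
    return {
        "fail_safe_actor": level >= ControlLevel.HUMAN_ON_THE_LOOP,
        "accountability_attractor": level >= ControlLevel.HUMAN_ON_THE_LOOP,
        "moral_agency_enactor": level >= ControlLevel.FULL_HUMAN_TARGETING,
    }


if __name__ == "__main__":
    d = Deployment("loitering munition", "urban strike")
    level = required_control_level(d)   # defaults to FULL_HUMAN_TARGETING
    print(level, roles_preserved(level))
```

The design point the sketch tries to capture is that stringency is the fallback under epistemic uncertainty: relaxations are not inferred case by case but must be explicitly enumerated, mirroring the international agreement the authors envisage.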


2021, pp. 273-288
Author(s):  
M.L. Cummings

There has been increasing debate in the international community as to whether it is morally and ethically permissible to use autonomous weapons: weapon systems that select and fire upon a target with no human in the loop. Given the tight coupling between emerging technology and policy development in a debate that speaks to the very core of humanity, this chapter explains how current automated control systems, including weapons systems, are designed in terms of balancing authority between the human and the computer. The distinction between automated and autonomous systems is explained, and a framework is presented for conceptualizing the human-computer balance in future autonomous systems, both civilian and military. Lastly, specific technology and policy implications for weaponized autonomous systems are discussed.
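One rough way to picture the automated/autonomous distinction and the human-computer authority balance the chapter discusses is as a gate on a single engagement decision. The sketch below is schematic and assumes hypothetical mode names and semantics; it is not Cummings's framework.

```python
from enum import Enum


class ControlMode(Enum):
    """Where engagement authority sits between human and computer."""
    HUMAN_IN_THE_LOOP = "automated: computer proposes, human must authorize"
    HUMAN_ON_THE_LOOP = "supervisory: computer acts unless the human vetoes"
    HUMAN_OUT_OF_THE_LOOP = "autonomous: computer selects and engages alone"


def may_engage(mode: ControlMode, human_authorized: bool = False,
               human_vetoed: bool = False) -> bool:
    """Gate one engagement decision according to the control mode.

    In-the-loop systems need positive authorization; on-the-loop systems
    proceed unless vetoed; out-of-the-loop systems face no human check at
    decision time, which is precisely what the policy debate contests.
    """
    if mode is ControlMode.HUMAN_IN_THE_LOOP:
        return human_authorized
    if mode is ControlMode.HUMAN_ON_THE_LOOP:
        return not human_vetoed
    return True  # HUMAN_OUT_OF_THE_LOOP


# An automated (not autonomous) system never fires without explicit consent:
assert not may_engage(ControlMode.HUMAN_IN_THE_LOOP)
# An on-the-loop system proceeds by default; silence counts as consent:
assert may_engage(ControlMode.HUMAN_ON_THE_LOOP)
```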


2020
Author(s):  
Marc Canellas ◽  
Rachel Haga

Cite as: M. C. Canellas and R. A. Haga, "Toward meaningful human control of autonomous weapons systems through function allocation," 2015 IEEE International Symposium on Technology and Society (ISTAS), Dublin, 2015, pp. 1-7. doi: 10.1109/ISTAS.2015.7439432

One of the few convergent themes during the first two United Nations Meetings of Experts on autonomous weapons systems (AWS), held in 2014 and 2015, was the requirement that there be meaningful human control (MHC) of AWS. What exactly constitutes MHC, however, remains ill-defined. While multiple sets of definitions and analyses have been published and discussed, this work addresses two key issues with the current definitions: (1) they are inconsistent about which authorities and responsibilities of human and automated agents need to be regulated, and (2) they lack the specificity designers would need to systematically integrate these restrictions into AWS designs. Given that MHC centers on the interaction of human and autonomous agents, we leverage the models and metrics of function allocation (the allocation of work between human and autonomous agents) to analyze and compare the definitions of MHC and the definitions of AWS proposed by the U.S. Department of Defense. Specifically, we transform the definitions into function-allocation form to model and compare them, and then show how a mismatch between authority and responsibility in an exemplar military scenario can still plague human-AWS interactions. In summary, this paper provides a starting point for future research investigating the application of function allocation to questions of MHC and, more generally, to the development of rules and standards for incorporating AWS into the law of armed conflict.
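As a loose illustration of the function-allocation lens, the sketch below gives each work function separate authority and responsibility assignments and flags authority-responsibility mismatches of the kind the paper analyzes. The scenario data, agent names, and helper function are hypothetical, not drawn from the paper.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Allocation:
    """One work function and which agent holds authority vs. responsibility."""
    function: str
    authority: str        # agent empowered to decide or perform the function
    responsibility: str   # agent answerable for the function's outcome


def mismatches(allocations: list[Allocation]) -> list[Allocation]:
    """Flag allocations where the responsible agent lacks the authority,
    the kind of human-AWS mismatch the paper analyzes."""
    return [a for a in allocations if a.authority != a.responsibility]


# Hypothetical engagement chain, loosely in the spirit of the paper's scenario.
scenario = [
    Allocation("identify target", authority="automation",
               responsibility="automation"),
    Allocation("select target", authority="automation",
               responsibility="human operator"),
    Allocation("authorize engagement", authority="human operator",
               responsibility="human operator"),
]

for a in mismatches(scenario):
    print(f"Mismatch in '{a.function}': {a.responsibility} is responsible, "
          f"but {a.authority} holds the authority.")
```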

