Autonomous Weapons Systems and Meaningful Human Control: Ethical and Legal Issues

2020, Vol. 1(4), pp. 187–194
Author(s): Daniele Amoroso, Guglielmo Tamburrini

Abstract
Purpose of Review: To provide readers with a compact account of ongoing academic and diplomatic debates about autonomy in weapons systems, that is, about the moral and legal acceptability of letting a robotic system unleash destructive force in warfare and take the attendant life-or-death decisions without any human intervention.
Recent Findings: A précis of current debates is provided, which focuses on the requirement that all weapons systems, including autonomous ones, should remain under meaningful human control (MHC) in order to be ethically acceptable and lawfully employed. Main approaches to MHC are described and briefly analyzed, distinguishing between uniform, differentiated, and prudential policies for human control of weapons systems.
Summary: The review highlights the crucial role played by the robotics research community in starting ethical and legal debates about autonomy in weapons systems. A concise overview is provided of the main concerns emerging in those early debates: respect for the laws of war, responsibility-ascription issues, violation of the human dignity of potential victims of autonomous weapons systems, and increased risks for global stability. These various concerns have jointly been taken to support the idea that all weapons systems, including autonomous ones, should remain under MHC. Finally, it is emphasized that the MHC idea looms large over the shared-control policies to be adopted in other ethically and legally sensitive application domains for robotics and artificial intelligence.

Author(s): Peter Asaro

As the militaries of technologically advanced nations seek to apply increasingly sophisticated AI and automation to weapons technologies, a host of ethical, legal, social, and political questions arise. Central among these is whether it is ethical to delegate the decision to use lethal force to an autonomous system that is not under meaningful human control. Further questions arise as to who or what could or should be held responsible when lethal force is used improperly by such systems. This chapter argues that current autonomous weapons are not legal or moral agents that can be held morally responsible or legally accountable for their choices and actions, and that therefore humans need to maintain control over such weapons systems.


2021, Vol. 5(1), pp. 53–72
Author(s): Elke Schwarz

In this article, I explore the (im)possibility of human control and question the presupposition that we can be in morally adequate, or meaningful, control of AI-supported lethal autonomous weapons systems (LAWS). Taking seriously Wiener’s warning that “machines can and do transcend some of the limitations of their designers and that in doing so they may be both effective and dangerous,” I argue that in the LAWS human-machine complex, technological features and the underlying logic of the AI system progressively close the spaces and limit the capacities required for human moral agency.


Author(s): Ilse Verdiesen

Autonomous Weapon Systems (AWS) can be defined as weapons systems equipped with Artificial Intelligence (AI). They are an emerging technology and are increasingly deployed on the battlefield. In the societal debate on Autonomous Weapon Systems, the concept of Meaningful Human Control (MHC) is often mentioned as a requirement, but MHC will not suffice to minimize the unintended consequences of Autonomous Weapon Systems, because the definition of ‘control’ implies that one has the power to influence or direct the course of events, or the ability to manage a machine. The autonomy, interactivity, and adaptability that characterize AI in Autonomous Weapon Systems inherently imply that control in the strict sense is not possible. Therefore, a different approach is needed to minimize the unintended consequences of AWS. Several scholars have described the concept of Human Oversight in Autonomous Weapon Systems and in AI more generally. Recently, Taddeo and Floridi (2018) described human oversight procedures as necessary to minimize unintended consequences and to compensate for the unfair impacts of AI. In my PhD project, I will analyse the concepts needed to define, model, evaluate, and ensure human oversight in Autonomous Weapons, and design a technical architecture to implement it.


2015, Vol. 6(2), pp. 247–283
Author(s): Jeroen van den Boogaard

Given the swift pace of technological development, the first truly autonomous weapons systems may be expected to become available in the near future. Once deployed, these weapons will use artificial intelligence to select and attack targets without further human intervention. Autonomous weapons systems raise the question of whether they could comply with international humanitarian law. The principle of proportionality is sometimes cited as an important obstacle to the use of autonomous weapons systems in accordance with the law. This article assesses whether the rule on proportionality in attacks would preclude the legal use of autonomous weapons. It analyses aspects of the proportionality rule that would militate against the use of autonomous weapons systems, and aspects that would appear to benefit the protection of the civilian population if such weapons systems were used. The article concludes that autonomous weapons are unable to make proportionality assessments at the operational or strategic level on their own, and that humans should therefore not be expected to be completely absent from the battlefield in the near future.


Author(s): Steven J. Barela, Avery Plaw

The possibility of allowing a machine agency over killing human beings is a justifiably concerning development, particularly when we consider the challenge of accountability in cases of illegal or unethical employment of lethal force. We have already seen how key information can be hidden or contested by deploying authorities, as in the case of lethal drone strikes. Therefore, this chapter argues that any effective response to autonomous weapons systems (AWS) must be underpinned by a comprehensive transparency regime fed by robust and reliable reporting mechanisms. The chapter offers a three-part argument in favor of such a regime. First, there is a preexisting transparency gap in the deployment of the core weapon systems that would be automated (such as currently remote-operated UCAVs). Second, while the Pentagon has made initial plans for addressing the moral, ethical, and legal issues raised against AWS, there remains a need for effective transparency measures. Third, transparency is vital to ensure that AWS are used only with traceable lines of accountability and within established parameters. Overall, this chapter argues that there is an overwhelming interest in, and duty for, actors to ensure robust and comprehensive transparency and accountability mechanisms. The more aggressively AWS are used, the more rigorous these mechanisms should be.


2020, Vol. 2(1), pp. 1–15
Author(s): Sébastien Lafrance

Abstract: This paper explores various impacts of artificial intelligence (“AI”) on the law, and on the practice of law more specifically, for example the use of predictive tools. The author also examines some of the innovations, but also the limits, of AI in the context of the legal profession, as well as some ethical and legal issues raised by the use and evolution of AI in the legal domain.


2021, Vol. 35(2), pp. 245–272
Author(s): Daniele Amoroso, Guglielmo Tamburrini

Abstract: The notion of meaningful human control (MHC) has gathered overwhelming consensus and interest in the autonomous weapons systems (AWS) debate. By shifting the focus of this debate to MHC, one sidesteps recalcitrant definitional issues about the autonomy of weapons systems and profitably moves the normative discussion forward. Some delegations participating in discussions at the Group of Governmental Experts on Lethal Autonomous Weapons Systems meetings endorsed the notion of MHC with the proviso that one size of human control does not fit all weapons systems and uses thereof. Building on this broad suggestion, we propose a “differentiated”—but also “principled” and “prudential”—framework for MHC over weapons systems. The need for a differentiated approach—namely, an approach acknowledging that the extent of normatively required human control depends on the kind of weapons systems used and contexts of their use—is supported by highlighting major drawbacks of proposed uniform solutions. Within the wide space of differentiated MHC profiles, distinctive ethical and legal reasons are offered for principled solutions that invariably assign to humans the following control roles: (1) “fail-safe actor,” contributing to preventing the weapon's action from resulting in indiscriminate attacks in breach of international humanitarian law; (2) “accountability attractor,” securing legal conditions for international criminal law (ICL) responsibility ascriptions; and (3) “moral agency enactor,” ensuring that decisions affecting the life, physical integrity, and property of people involved in armed conflicts be exclusively taken by moral agents, thereby alleviating the human dignity concerns associated with the autonomous performance of targeting decisions. And the prudential character of our framework is expressed by means of a rule, imposing by default the more stringent levels of human control on weapons targeting. The default rule is motivated by epistemic uncertainties about the behaviors of AWS. Designated exceptions to this rule are admitted only in the framework of an international agreement among states, which expresses the shared conviction that lower levels of human control suffice to preserve the fail-safe actor, accountability attractor, and moral agency enactor requirements on those explicitly listed exceptions. Finally, we maintain that this framework affords an appropriate normative basis for both national arms review policies and binding international regulations on human control of weapons systems.


2015, Vol. 13(5), pp. 1399–1409
Author(s): Peter Hudson, Rosalie Hudson, Jennifer Philip, Mark Boughey, Brian Kelly, ...

Abstract
Objective: Despite the availability of palliative care in many countries, the legalization of euthanasia and physician-assisted suicide (EAS) continues to be debated, particularly around ethical and legal issues, and the surrounding controversy shows no signs of abating. Responding to EAS requests is considered one of the most difficult healthcare responsibilities. In the present paper, we highlight some of the less frequently discussed practical implications for palliative care provision if EAS were to be legalized. Our aim was not to take an explicit anti-EAS stance or expand on findings from systematic reviews or philosophical and ethico-legal treatises, but rather to offer clinical perspectives and the potential pragmatic implications of legalized EAS for palliative care provision, patients and families, healthcare professionals, and the broader community.
Method: We provide insights from our multidisciplinary clinical experience, coupled with those from various jurisdictions where EAS is, or has been, legalized.
Results: We believe that these issues, many of which are encountered at the bedside, must be considered in detail so that the pragmatic implications of EAS can be comprehensively considered.
Significance of Results: Increased resources and effort must be directed toward training, research, community engagement, and ensuring adequate resourcing for palliative care before further consideration is given to allocating resources for legalizing euthanasia and physician-assisted suicide.

