Programming Precision? Requiring Robust Transparency for AWS

Author(s):  
Steven J. Barela
Avery Plaw

The possibility of allowing a machine agency over killing human beings is a justifiably concerning development, particularly when we consider the challenge of accountability in the case of illegal or unethical employment of lethal force. We have already seen how key information can be hidden or contested by deploying authorities, in the case of lethal drone strikes, for example. Therefore, this chapter argues that any effective response to autonomous weapons systems (AWS) must be underpinned by a comprehensive transparency regime that is fed by robust and reliable reporting mechanisms. This chapter offers a three-part argument in favor of a robust transparency regime. First, there is a preexisting transparency gap in the deployment of core weapon systems that would be automated (such as currently remote-operated UCAVs). Second, while the Pentagon has made initial plans for addressing moral, ethical, and legal issues raised against AWS, there remains a need for effective transparency measures. Third, transparency is vital to ensure that AWS are only used with traceable lines of accountability and within established parameters. Overall, this chapter argues that there is an overwhelming interest and duty for actors to ensure robust and comprehensive transparency and accountability mechanisms. The more aggressively AWS are used, the more rigorous these mechanisms should be.

2020, Vol 1 (4), pp. 187-194
Author(s):  
Daniele Amoroso
Guglielmo Tamburrini

Abstract
Purpose of Review: To provide readers with a compact account of ongoing academic and diplomatic debates about autonomy in weapons systems, that is, about the moral and legal acceptability of letting a robotic system unleash destructive force in warfare and take attendant life-or-death decisions without any human intervention.
Recent Findings: A précis of current debates is provided, which focuses on the requirement that all weapons systems, including autonomous ones, should remain under meaningful human control (MHC) in order to be ethically acceptable and lawfully employed. Main approaches to MHC are described and briefly analyzed, distinguishing between uniform, differentiated, and prudential policies for human control of weapons systems.
Summary: The review highlights the crucial role played by the robotics research community in starting ethical and legal debates about autonomy in weapons systems. A concise overview is provided of the main concerns emerging in those early debates: respect for the laws of war, responsibility-ascription issues, violation of the human dignity of potential victims of autonomous weapons systems, and increased risks to global stability. These various concerns have been jointly taken to support the idea that all weapons systems, including autonomous ones, should remain under MHC. Finally, it is emphasized that the MHC idea looms large in the shared control policies to be adopted in other ethically and legally sensitive application domains for robotics and artificial intelligence.


Author(s):  
Peter Asaro

As the militaries of technologically advanced nations seek to apply increasingly sophisticated AI and automation to weapons technologies, a host of ethical, legal, social, and political questions arise. Central among these is whether it is ethical to delegate the decision to use lethal force to an autonomous system that is not under meaningful human control. Further questions arise as to who or what could or should be held responsible when lethal force is used improperly by such systems. This chapter argues that current autonomous weapons are not legal or moral agents that can be held morally responsible or legally accountable for their choices and actions, and that therefore humans need to maintain control over such weapons systems.


Author(s):  
Duncan MacIntosh

Setting aside the military advantages offered by Autonomous Weapons Systems for a moment, international debate continues to feature the argument that the use of lethal force by “killer robots” inherently violates human dignity. The purpose of this chapter is to refute this assumption of inherent immorality and demonstrate situations in which deploying autonomous systems would be strategically, morally, and rationally appropriate. The second part of this chapter objects to the argument that the use of robots in warfare is somehow inherently offensive to human dignity. Overall, this chapter will demonstrate that, contrary to arguments made by some within civil society, moral employment of force is possible, even without proximate human decision-making. As discussions continue to swirl around autonomous weapons systems, it is important not to lose sight of the fact that fire-and-forget weapons are not morally exceptional or inherently evil. If an engagement complied with the established ethical framework, it is not inherently morally invalidated by the absence of a human at the point of violence. As this chapter argues, the decision to employ lethal force becomes problematic when a more thorough consideration would have demanded restraint. Assuming a legitimate target, therefore, the distance between human agency in the target authorization process and the delivery of force is a matter of degree. A morally justifiable decision to engage a target with rifle fire would not be ethically invalidated simply because the lethal force was delivered by a commander-authorized robotic carrier.


Author(s):  
Ilse Verdiesen

Autonomous Weapon Systems (AWS) can be defined as weapons systems equipped with Artificial Intelligence (AI). They are an emerging technology and are increasingly deployed on the battlefield. In the societal debate on Autonomous Weapon Systems, the concept of Meaningful Human Control (MHC) is often mentioned as a requirement, but MHC will not suffice as a requirement to minimize the unintended consequences of Autonomous Weapon Systems, because the definition of ‘control’ implies that one has the power to influence or direct the course of events or the ability to manage a machine. The autonomy, interactivity, and adaptability of AI in Autonomous Weapon Systems inherently imply that control in the strict sense is not possible. Therefore, a different approach is needed to minimize the unintended consequences of AWS. Several scholars have described the concept of Human Oversight in Autonomous Weapon Systems and in AI in general. Recently, Taddeo and Floridi (2018) argued that human oversight procedures are necessary to minimize unintended consequences and to compensate for the unfair impacts of AI. In my PhD project, I will analyse the concepts needed to define, model, evaluate, and ensure human oversight in Autonomous Weapons and design a technical architecture to implement it.


Author(s):  
Boothby William H

Chapter 14 discusses specific weapon technologies and types of munition that merit individual consideration. This may be because of concerns that have been expressed as to their characteristics, because the technologies themselves require, or appear to some to require, particular legal care, or because they are emerging technologies which raise interesting and/or novel legal issues. The purpose of this consideration is to show how weapons law should be applied to each such technology, with the ultimate aim of assisting weapon reviewers in their difficult task. The chapter addresses missiles, bombs, and artillery, blast weapons, directed energy weapons, herbicides, flechettes, depleted uranium, white phosphorus, non-lethal weapons, cyber weapons, autonomous weapons, counter-IED weapons, nanotechnology, and metamaterials.


Author(s):  
Christopher Coker

The debate about “killer robots” is rarely out of the news. Autonomous Weapons Systems may soon replace drones. Drone strikes, however, are also likely to increase in number. For the International Relations community the main debate is an ethical one: is the direction in which we are taking war likely to increase or decrease our moral competence to wage it? This chapter argues that we are not best placed to ask these questions, because we have an incomplete understanding of the extent to which our humanity is changing and what it means to be human. The latest contributions toward self-understanding made by Actor Network Theory (ANT), machine ethics, and sociobiology suggest that we urgently need to reframe the questions we are asking.


Author(s):  
Deane-Peter Baker

The prospect of robotic warriors striding the battlefield has, somewhat unsurprisingly, been shaped by perceptions drawn from science fiction. While illustrative, such comparisons are largely unhelpful for those considering potential ethical implications of autonomous weapons systems. In this chapter, I offer two alternative sources for ethical comparison. Drawing from military history and current practice for guidance, this chapter highlights the parallels that make mercenaries—the ‘dogs of war’—and military working dogs—the actual dogs of war—useful lenses through which to consider Lethal Autonomous Weapons Systems—the robot dogs of war. Through these comparisons, I demonstrate that some of the most commonly raised ethical objections to autonomous weapon systems are overstated, misguided, or otherwise dependent on outside circumstance.


2019, Vol 7 (3), pp. 351-368
Author(s):  
Yordan Gunawan
Mohammad Haris Aulawi
Andi Rizal Ramadhan

Abstract
War and technological development have been linked for centuries. States and military leaders have been searching for weapon systems that will minimize the risk to the soldier, as technology has enabled the destruction of combatants and non-combatants at levels not seen previously in human history. Autonomous Weapon Systems are not specifically regulated by IHL treaties. In the use of Autonomous Weapons Systems, there are three main principles that must be considered, namely the principles of Distinction, Proportionality, and Unnecessary Suffering. Autonomous weapon systems may provide a military advantage because those systems are able to operate free of the human emotions and biases which cloud judgement. In addition, these weapon systems can operate free from the need for self-preservation and are able to make decisions much more quickly. Therefore, it is important to examine who, in this case the commander, can be held responsible when an Autonomous Weapon System commits a crime.
Keywords: Command Responsibility, Autonomous Weapons Systems, International Humanitarian Law


Author(s):  
Tim McFarland
Jai Galliott

The physical and temporal removal of the human from the decision to use lethal force underpins many of the arguments against the development of autonomous weapons systems. In response to these concerns, Meaningful Human Control has risen to prominence as a framing concept in the ongoing international debate. This chapter demonstrates how, in addition to the lack of a universally accepted precise definition, reliance on Meaningful Human Control is conceptually flawed. Overall, this chapter analyzes, problematizes, and explores the nebulous concept of Meaningful Human Control, and in doing so demonstrates that it relies on the mistaken premise that the development of autonomous capabilities in weapons systems constitutes a lack of human control that somehow presents an insurmountable challenge to existing International Humanitarian Law.


The question of whether new rules or regulations are required to govern, restrict, or even prohibit the use of autonomous weapons systems has been the subject of debate for the better part of a decade. Despite the claims of advocacy groups, the way ahead remains unclear since the international community has yet to agree on a specific definition of Lethal Autonomous Weapons Systems, and the great powers have largely refused to support an effective ban. In this vacuum, the public has been presented with a heavily one-sided view of “Killer Robots.” This volume presents a more nuanced approach to autonomous weapon systems that recognizes the need to progress beyond a discourse framed by the Terminator and HAL 9000. Reshaping the discussion around this emerging military innovation requires a new line of thought and a willingness to challenge the orthodoxy. Lethal Autonomous Weapons: Re-Examining the Law and Ethics of Robotic Warfare therefore focuses on exploring the moral and legal issues associated with the design, development, and deployment of lethal autonomous weapons. In this volume, we bring together some of the most prominent academics and academic-practitioners in the lethal autonomous weapons space and seek to return some balance to the debate. As part of this effort, we recognize that society needs to invest in hard conversations that tackle the ethics, morality, and law of these new digital technologies and understand the human role in their creation and operation.

