Lethal Autonomous Weapons
Latest Publications


TOTAL DOCUMENTS: 18 (five years: 18)
H-INDEX: 0 (five years: 0)

Published by Oxford University Press
ISBN: 9780197546048, 9780197546079

2021 · pp. 237-258
Author(s): S. Kate Devitt

The rise of human-information systems, cybernetic systems, and increasingly autonomous systems requires the application of epistemic frameworks to machines and human-machine teams. This chapter discusses higher-order design principles to guide the design, evaluation, deployment, and iteration of Lethal Autonomous Weapons Systems (LAWS) based on epistemic models. Epistemology is the study of knowledge. Epistemic models consider the role of accuracy, likelihoods, beliefs, competencies, capabilities, context, and luck in the justification of actions and the attribution of knowledge. The aim is not to provide ethical justification for or against LAWS, but to illustrate how epistemological frameworks can be used in conjunction with moral apparatus to guide the design and deployment of future systems. The models discussed in this chapter aim to make Article 36 reviews of LAWS systematic, expedient, and evaluable. A Bayesian virtue epistemology is proposed to enable justified actions under uncertainty that meet the requirements of the Laws of Armed Conflict and International Humanitarian Law. Epistemic concepts can provide some of the apparatus to meet explainability and transparency requirements in the development, evaluation, deployment, and review of ethical AI.
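The abstract does not spell out the proposed Bayesian machinery. As a minimal illustrative sketch of the core idea (update a calibrated belief on incoming evidence; treat an action as justified only above a strict threshold), something like the following might apply; all function names, likelihood ratios, and the 0.99 threshold are hypothetical assumptions, not Devitt's model:

    # Illustrative only: a toy Bayesian update with a strict action threshold.
    # Loosely inspired by the chapter's proposal that actions under uncertainty
    # be justified by calibrated beliefs; every name and number here is a
    # hypothetical assumption, not the chapter's actual model.

    def bayes_update(prior: float, likelihood_ratio: float) -> float:
        """Update P(hypothesis | evidence) via Bayes' rule in odds form."""
        prior_odds = prior / (1.0 - prior)
        posterior_odds = prior_odds * likelihood_ratio
        return posterior_odds / (1.0 + posterior_odds)

    def action_justified(posterior: float, threshold: float = 0.99) -> bool:
        """A stand-in justification rule: act only if belief clears a strict
        threshold, mirroring the high confidence demanded by LOAC/IHL."""
        return posterior >= threshold

    belief = 0.5                              # uninformative prior
    for lr in (4.0, 6.0, 10.0):               # three corroborating evidence cues
        belief = bayes_update(belief, lr)
    print(f"posterior={belief:.4f}, justified={action_justified(belief)}")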



Author(s): Duncan MacIntosh

Setting aside for a moment the military advantages offered by Autonomous Weapons Systems, international debate continues to feature the argument that the use of lethal force by “killer robots” inherently violates human dignity. The purpose of this chapter is to refute this assumption of inherent immorality and demonstrate situations in which deploying autonomous systems would be strategically, morally, and rationally appropriate. The second part of the chapter objects specifically to the argument that the use of robots in warfare is somehow inherently offensive to human dignity. Overall, this chapter demonstrates that, contrary to arguments made by some within civil society, moral employment of force is possible even without proximate human decision-making. As discussions continue to swirl around autonomous weapons systems, it is important not to lose sight of the fact that fire-and-forget weapons are not morally exceptional or inherently evil. If an engagement complies with the established ethical framework, it is not inherently morally invalidated by the absence of a human at the point of violence. As this chapter argues, the decision to employ lethal force becomes problematic only when a more thorough consideration would have demanded restraint. Assuming a legitimate target, therefore, the distance between human agency in the target authorization process and the delivery of force is a matter of degree. A morally justifiable decision to engage a target with rifle fire would not be ethically invalidated simply because the lethal force was delivered by a commander-authorized robotic carrier.



2021 · pp. 259-272
Author(s): Austin Wyatt, Jai Galliott

While the process sponsored under the Convention on Certain Conventional Weapons (CCW) has steadily slowed, and occasionally stalled, over the past five years, the pace of technological development in both the civilian and military spheres has accelerated. In response, this chapter suggests the development of a normative framework that would establish common procedures and de-escalation channels between states within a given regional security cooperative prior to the demonstration point of truly autonomous weapon systems. Modeled on the Guidelines for Air Military Encounters and the Guidelines for Maritime Interaction, which were recently adopted by the Association of Southeast Asian Nations, this approach aims to limit the destabilizing and escalatory potential of autonomous systems, which are expected to lower barriers to conflict and encourage brinkmanship while being difficult to definitively attribute. Overall, this chapter focuses on evaluating potential alternative avenues to the CCW-sponsored process by which the ethical, moral, and legal concerns raised by the emergence of autonomous weapon systems could be addressed.



2021 · pp. 159-172
Author(s): Donovan Phillips

This chapter considers how the adoption of autonomous weapons systems (AWS) may affect jus ad bellum principles of warfare. In particular, it focuses on the use of AWS in non-international armed conflicts (NIACs). Given the proliferation of NIACs, the development and use of AWS will most likely be attuned to this specific theater of war. As warfare waged by modernized liberal democracies (those most likely to develop and employ AWS at present) increasingly moves toward a model of individualized warfare, how, if at all, will the introduction of AWS affect the principles by which we measure the justness of commencing such hostilities? And how will such hostilities measure up against the current legal agreements that govern more traditional engagements? This chapter claims that these considerations give us reason to question the moral and legal necessity of ad bellum proper authority.



2021 · pp. 103-120
Author(s): Natalia Jevglevskaja, Rain Liivoja

Disagreements about the humanitarian risk-benefit balance of weapons technology are not new. The history of arms control negotiations offers many examples of weaponry that was regarded as ‘inhumane’ by some, while hailed by others as a means to reduce injury or suffering in conflict. The debate about autonomous weapons systems reflects this dynamic, yet also stands out in some respects, notably the largely hypothetical nature of the concerns raised about these systems and the ostensible disparities in States’ approaches to conceptualizing autonomy. This chapter considers how misconceptions surrounding autonomous weapons technology impede the progress of the deliberations of the Group of Governmental Experts on Lethal Autonomous Weapons Systems. A marked tendency to focus on the perceived risks posed by these systems, much more than on the potential operational and humanitarian advantages they offer, is likely to jeopardize the prospect of finding a meaningful resolution to the debate.



2021 · pp. 89-102
Author(s): Matthias Scheutz, Bertram F. Malle

In the future, artificial agents are likely to make life-and-death decisions about humans. Ordinary people are the likely arbiters of whether these decisions are morally acceptable. We summarize research on how ordinary people evaluate artificial (compared to human) agents that make life-and-death decisions. The results suggest that many people are inclined to morally evaluate artificial agents’ decisions, and when asked how the artificial and human agents should decide, they impose the same norms on both. However, when confronted with how the agents did in fact decide, people judge the artificial agents’ decisions differently from those of humans. This difference is best explained by the justifications people grant the human agents (imagining their experience of the decision situation) but do not grant the artificial agents (whose experience they cannot imagine). If people fail to infer the decision processes and justifications of artificial agents, these agents will have to explicitly communicate such justifications to people, so that people can understand and accept their decisions.



2021 · pp. 137-158
Author(s): Jai Galliott, Bianca Baggiarini, Sean Rupka

Combat automation, enabled by rapid technological advancements in artificial intelligence and machine learning, is a guiding principle in the conduct of war today. Yet empirical data on the impact of algorithmic combat on military personnel remain limited. This chapter draws on data from a historically unprecedented survey of Australian Defence Force Academy cadets. Given that this generation of trainees will be the first to deploy autonomous systems (AS) in a systematic way, their views are especially important. The analysis focuses on five themes: the dynamics of human-machine teams; the perceived risks, benefits, and capabilities of AS; the changing nature of (and respect for) military labor and incentives; preferences for overseeing a robot versus carrying out a mission oneself; and the changing meaning of soldiering. We use the survey data to explore the interconnected consequences of neoliberal governing for cadets’ attitudes toward AS, and toward citizen-soldiering more broadly. Overall, this chapter argues that Australian cadets are open to working with and alongside AS, but only under the right conditions. Armed forces, in an attempt to capitalize on these technologically savvy cadets, have shifted from being institutional employers to occupational ones. In our concluding remarks, however, we caution against unchecked technological fetishism, highlighting the need to critically question the risk that AS will lead to moral deskilling, and the application of market-based notions of freedom to the military domain.



2021 · pp. 175-188
Author(s): Alex Leveringhaus

This chapter considers how autonomous weapons systems (AWS) will impact the armed conflicts of the future. Conceptually, the chapter argues that AWS should not be seen as on a par with precision weaponry, and that this disanalogy is what makes them normatively problematic. Against this background, the chapter considers the relationship between AWS and two narratives that have been used to theorize contemporary armed conflict: the Humane Warfare Narrative and the Excessive Risk Narrative. AWS, the chapter contends, are unlikely to usher in an era of humane warfare. Rather, they are likely to reinforce existing trends toward the imposition of excessive risk on noncombatants in armed conflict. Future conflicts in which AWS are deployed are thus likely to share many characteristics of the risk-transfer wars of the late twentieth and early twenty-first centuries. The chapter concludes by putting these abstract considerations to the test in the practical context of military intervention.



2021 · pp. 273-288
Author(s): M.L. Cummings

There has been increasing debate in the international community as to whether it is morally and ethically permissible to use autonomous weapons: weapon systems that select and fire upon a target with no human in the loop. Given how tightly emerging technology and policy development are coupled in a debate that speaks to the very core of humanity, this chapter explains how current automated control systems, including weapons systems, are designed to balance authority between the human and the computer. The distinction between automated and autonomous systems is explained, and a framework is presented for conceptualizing the human-computer balance in future autonomous systems, both civilian and military. Lastly, specific technology and policy implications for weaponized autonomous systems are discussed.



2021 · pp. 219-236
Author(s): Armin Krishnan

This chapter argues that in many respects the regulation of Autonomous Weapons Systems (AWS) presents a challenge to arms control similar to that posed by biological weapons, and that many lessons learned from the Biological Weapons Convention (BWC) could be applied to the control of AWS. AWS that utilize “deep learning” are potentially unpredictable and uncontrollable weapons. International regulation efforts for AWS should therefore focus on developing safety and design standards for artificial intelligence (AI), put in place confidence-building measures to enhance transparency and trust in AI R&D and related applications, and aim for a ban on offensive AWS. Enforced international transparency in the development of AI could make AI better and safer, including in a military context, which would improve strategic stability.


