Responsibility and War Machines

Author(s):  
Jai Galliott

The purpose of this chapter is to demonstrate that, while unmanned systems certainly exacerbate some problems and force us to rethink who we ought to hold morally responsible for war crimes, traditional notions of responsibility can deal with the supposed ‘responsibility gap’ in unmanned warfare, and that moderate regulation will likely prove more important than an outright ban. The chapter begins by exploring the conditions under which responsibility is typically attributed to humans and how these requirements are challenged in technological warfare. It then examines Robert Sparrow’s notion of a ‘responsibility gap’ as it pertains to the deployment of fully autonomous weapons systems. It is argued that a solution can be reached by shifting to a forward-looking and functional sense of responsibility that incorporates institutional agents and ensures that the human role in engineering and unleashing these systems is never overlooked.

2021
Vol 1 (21)
pp. 441
Author(s):  
Xavier J. Ramírez García de León

The development of virtually any technology places pressure on the legal framework, generally exposing the difficulty the law has in keeping pace. In some cases, this pressure becomes significant, if not because of its visibility, then because of the consequences that deficient regulation could produce. This article analyses, from the perspective of the International Criminal Court, the supposed problem that the introduction of autonomous weapons systems onto the battlefield would pose, particularly with respect to the mens rea requirement for war crimes.


2018
Vol 44 (3)
pp. 393-413
Author(s):  
Ingvild Bode
Hendrik Huelss

Autonomous weapons systems (AWS) are emerging as key technologies of future warfare. So far, academic debate has concentrated on the legal-ethical implications of AWS, but these discussions do not capture how AWS may shape norms by defining diverging standards of appropriateness in practice. In discussing AWS, the article formulates two critiques of constructivist models of norm emergence: first, constructivist approaches privilege the deliberative over the practical emergence of norms; and second, they overemphasise fundamental norms rather than also accounting for procedural norms, which we introduce in this article. Elaborating on these critiques allows us to address a significant gap in research: we examine how standards of procedural appropriateness emerging in the development and use of AWS often contradict fundamental norms and public legitimacy expectations. Normative content may therefore be shaped procedurally, challenging conventional understandings of how norms are constructed and considered relevant in International Relations. On this basis, we outline the contours of a research programme on the relationship between norms and AWS, arguing that AWS can have fundamental normative consequences by setting novel standards of appropriate action in international security policy.


2021
pp. 237-258
Author(s):  
S. Kate Devitt

The rise of human-information systems, cybernetic systems, and increasingly autonomous systems requires the application of epistemic frameworks to machines and human-machine teams. This chapter discusses higher-order design principles to guide the design, evaluation, deployment, and iteration of Lethal Autonomous Weapons Systems (LAWS) based on epistemic models. Epistemology is the study of knowledge. Epistemic models consider the role of accuracy, likelihoods, beliefs, competencies, capabilities, context, and luck in the justification of actions and the attribution of knowledge. The aim is not to provide ethical justification for or against LAWS, but to illustrate how epistemological frameworks can be used in conjunction with moral apparatus to guide the design and deployment of future systems. The models discussed in this chapter aim to make Article 36 reviews of LAWS systematic, expedient, and evaluable. A Bayesian virtue epistemology is proposed to enable justified actions under uncertainty that meet the requirements of the Laws of Armed Conflict and International Humanitarian Law. Epistemic concepts can provide some of the apparatus to meet explainability and transparency requirements in the development, evaluation, deployment, and review of ethical AI.
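
To make the chapter's Bayesian element concrete, here is a minimal sketch of how a posterior over target-identification hypotheses could gate an engagement decision. The hypotheses, prior, likelihoods, and the 0.95 action threshold are illustrative assumptions, not values taken from the chapter.

# Minimal sketch of a Bayesian update gating an engagement decision.
# All priors, likelihoods, and the threshold below are hypothetical
# illustrations, not values proposed by the chapter.

def bayes_update(prior: dict, likelihood: dict) -> dict:
    """Return the posterior P(H | evidence) over hypotheses H."""
    unnorm = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

# Hypotheses about an observed object, with an assumed prior.
prior = {"combatant": 0.30, "civilian": 0.70}

# Assumed likelihood of the sensor evidence under each hypothesis.
likelihood = {"combatant": 0.90, "civilian": 0.05}

posterior = bayes_update(prior, likelihood)
# posterior["combatant"] is roughly 0.885 here.

# A justified action might require the posterior to clear a high,
# reviewable threshold; otherwise the system defers to a human.
ACTION_THRESHOLD = 0.95  # assumed value for illustration
decision = ("engage" if posterior["combatant"] >= ACTION_THRESHOLD
            else "defer to human operator")
print(decision)  # prints 'defer to human operator' in this example

The point is structural rather than numerical: an explicit, reviewable threshold makes the justification for action auditable, which is the kind of property a systematic Article 36 review could evaluate.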


Author(s):  
Peter Asaro

As the militaries of technologically advanced nations seek to apply increasingly sophisticated AI and automation to weapons technologies, a host of ethical, legal, social, and political questions arise. Central among these is whether it is ethical to delegate the decision to use lethal force to an autonomous system that is not under meaningful human control. Further questions arise as to who or what could or should be held responsible when lethal force is used improperly by such systems. This chapter argues that current autonomous weapons are not legal or moral agents that can be held morally responsible or legally accountable for their choices and actions, and that therefore humans need to maintain control over such weapons systems.

