An International Legal Consideration of the Issues on Lethal Autonomous Weapons Systems Including Nanomachine: A Perspective of International Law Studies in Japan

2020 ◽  
Author(s):  
Daniele Amoroso

Recent advances in robotics and AI have paved the way for robots to perform a wide variety of tasks autonomously in ethically and legally sensitive domains. Among these, a prominent place is occupied by Autonomous Weapons Systems (AWS), whose legality under international law is currently at the center of a heated academic and diplomatic debate. The AWS debate provides a uniquely representative sample of the (potentially) disruptive impact of new technologies on the norms and principles of international law, in that it touches on key questions of international humanitarian law, international human rights law, international criminal law, and State responsibility. Against this backdrop, the book's primary aim is to explore the international legal implications of autonomy in weapons systems by asking what existing international law has to say on the matter, to what extent the continued validity of its principles and categories is challenged, and what a way forward for future international regulation might look like. From a broader perspective, the research on the legality of AWS under international law aspires to offer more general insights into the normative aspects of the shared-control relationship between human decision-makers and artificial agents. Daniele Amoroso is Professor of International Law at the Law Department of the University of Cagliari and a member of the International Committee for Robot Arms Control (ICRAC).


2018 ◽  
Vol 44 (3) ◽  
pp. 393-413 ◽  
Author(s):  
Ingvild Bode ◽  
Hendrik Huelss

Autonomous weapons systems (AWS) are emerging as key technologies of future warfare. So far, the academic debate has concentrated on the legal-ethical implications of AWS, but these discussions do not capture how AWS may shape norms by defining diverging standards of appropriateness in practice. In discussing AWS, the article formulates two critiques of constructivist models of norm emergence: first, constructivist approaches privilege the deliberative over the practical emergence of norms; and second, they overemphasise fundamental norms rather than also accounting for procedural norms, which we introduce in this article. Elaborating on these critiques allows us to address a significant gap in research: we examine how standards of procedural appropriateness emerging in the development and use of AWS often contradict fundamental norms and public legitimacy expectations. Normative content may therefore be shaped procedurally, challenging conventional understandings of how norms are constructed and considered relevant in International Relations. On this basis, we outline the contours of a research programme on the relationship between norms and AWS, arguing that AWS can have fundamental normative consequences by setting novel standards of appropriate action in international security policy.


2021 ◽  
pp. 237-258
Author(s):  
S. Kate Devitt

The rise of human-information systems, cybernetic systems, and increasingly autonomous systems requires the application of epistemic frameworks to machines and human-machine teams. This chapter discusses higher-order design principles to guide the design, evaluation, deployment, and iteration of Lethal Autonomous Weapons Systems (LAWS) based on epistemic models. Epistemology is the study of knowledge. Epistemic models consider the role of accuracy, likelihoods, beliefs, competencies, capabilities, context, and luck in the justification of actions and the attribution of knowledge. The aim is not to provide ethical justification for or against LAWS, but to illustrate how epistemological frameworks can be used in conjunction with moral apparatus to guide the design and deployment of future systems. The models discussed in this chapter aim to make Article 36 reviews of LAWS systematic, expedient, and evaluable. A Bayesian virtue epistemology is proposed to enable justified actions under uncertainty that meet the requirements of the Law of Armed Conflict and International Humanitarian Law. Epistemic concepts can provide some of the apparatus to meet explainability and transparency requirements in the development, evaluation, deployment, and review of ethical AI.
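The kind of "justified action under uncertainty" invoked here can be illustrated with a minimal Bayesian update. The sketch below is not from the chapter; the prior, sensor likelihoods, and decision threshold are all hypothetical, and it stands in only for the general idea that a belief must clear a demanding evidential bar before action is epistemically justified.

```python
# Illustrative sketch (not the chapter's model): a single Bayesian update
# feeding a precautionary decision rule. All numbers are hypothetical.

def bayes_update(prior, likelihood_given_h, likelihood_given_not_h):
    """Return posterior P(H | evidence) via Bayes' rule."""
    numerator = likelihood_given_h * prior
    evidence = numerator + likelihood_given_not_h * (1.0 - prior)
    return numerator / evidence

# Hypothetical prior belief that an observed object is a lawful objective,
# and hypothetical sensor likelihoods under each hypothesis.
posterior = bayes_update(prior=0.5,
                         likelihood_given_h=0.9,
                         likelihood_given_not_h=0.2)

# Precautionary rule: act only if belief clears a high evidential threshold.
THRESHOLD = 0.95
justified = posterior >= THRESHOLD
print(round(posterior, 3), justified)  # prints: 0.818 False
```

Even strongly confirming evidence leaves the posterior (≈0.82) below the threshold, so the rule withholds action; this is the structural point, namely that justification is a function of both belief strength and the stakes encoded in the threshold.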

