Meaningful Human Control of Lethal Autonomous Weapon Systems: The CCW-Debate and Its Implications for VSD

2020, Vol 39 (4), pp. 36-51
Author(s):  
Thea Riebe ◽  
Stefka Schmid ◽  
Christian Reuter
Author(s):  
Ilse Verdiesen

Autonomous Weapon Systems (AWS) can be defined as weapon systems equipped with Artificial Intelligence (AI). They are an emerging technology and are increasingly deployed on the battlefield. In the societal debate on Autonomous Weapon Systems, the concept of Meaningful Human Control (MHC) is often mentioned as a requirement, but MHC alone will not suffice to minimize the unintended consequences of Autonomous Weapon Systems, because the definition of 'control' implies the power to influence or direct the course of events, or the ability to manage a machine. The characteristic properties of AI in Autonomous Weapon Systems (autonomy, interactivity, and adaptability) inherently imply that control in the strict sense is not possible. Therefore, a different approach is needed to minimize the unintended consequences of AWS. Several scholars describe the concept of Human Oversight in Autonomous Weapon Systems and in AI more generally. Recently, Taddeo and Floridi (2018) argued that human oversight procedures are necessary to minimize unintended consequences and to compensate for unfair impacts of AI. In my PhD project, I will analyse the concepts needed to define, model, evaluate, and ensure human oversight in Autonomous Weapons, and design a technical architecture to implement it.


Author(s):  
Steven Umbrello

Abstract: The international debate on the ethics and legality of autonomous weapon systems (AWS), along with the call for a ban, primarily focuses on the nebulous concept of fully autonomous AWS: systems capable of target selection and engagement absent human supervision or control. This paper argues that such a conception of autonomy is divorced from both military planning and decision-making operations; it also ignores the design requirements that govern AWS engineering and the subsequent tracking and tracing of moral responsibility. To show how military operations can be coupled with design ethics, this paper marries two different kinds of meaningful human control (MHC), termed levels of abstraction. Under this two-tiered understanding of MHC, the contentious notion of 'full' autonomy becomes unproblematic.


Author(s):  
Naresh Kshetri

The study of computer ethics has reached a point where ethics for Artificial Intelligence, robots, fuzzy systems, autonomous vehicles, and Autonomous Weapon Systems are implemented so that machines can operate without human intervention and without harming others. This survey presents prior work in computer ethics with respect to artificial intelligence, robot weaponry, fuzzy systems, and autonomous vehicles. The paper discusses the different ethical frameworks and scenarios up through current technological advancements and summarizes the advantages and disadvantages of each, along with the need for morality. It is observed that all of these frameworks are equally important today, but human control and responsibility matter most. The most recent technology can be implemented or improved through careful observation and the involvement of organizations such as the United Nations, the International Committee for Robot Arms Control, the Geneva Conventions, and so on.


Information, 2021, Vol 12 (12), p. 527
Author(s):  
Austin Wyatt ◽  
Jai Galliott

The removal of direct human involvement from the decision to apply lethal force is at the core of the controversy surrounding autonomous weapon systems, as well as broader applications of artificial intelligence and related technologies to warfare. Far from being purely a technical question of whether it is possible to remove soldiers from the 'pointy end' of combat, the emergence of autonomous weapon systems raises a range of serious ethical, legal, and practical challenges that remain largely unresolved by the international community. In response, the international community has seized on the concept of 'meaningful human control'. Meeting this standard will require doctrinal and operational responses, as well as technical responses at the design stage. This paper focuses on the latter, considering how value sensitive design could help ensure that autonomous systems remain under the meaningful control of humans. However, this article also challenges the tendency to assume a universalist perspective when discussing value sensitive design. Drawing on previously unpublished quantitative data, this paper critically examines how perspectives on key ethical considerations, including conceptions of meaningful human control, differ among policymakers and scholars in the Asia Pacific. Based on this analysis, the paper calls for the development of a more culturally inclusive form of value sensitive design and puts forward the basis of an empirically grounded normative framework for guiding designers of autonomous systems.

