Legal evaluation of the attacks caused by artificial intelligence-based lethal weapon systems within the context of the Rome Statute

2021 ◽  
Vol 42 ◽  
pp. 105564
Author(s):  
Onur Sari ◽  
Sener Celik


Author(s):  
Ilse Verdiesen

Autonomous Weapon Systems (AWS) can be defined as weapon systems equipped with Artificial Intelligence (AI). They are an emerging technology and are increasingly deployed on the battlefield. In the societal debate on Autonomous Weapon Systems, the concept of Meaningful Human Control (MHC) is often mentioned as a requirement, but MHC will not suffice as a requirement to minimize the unintended consequences of Autonomous Weapon Systems, because the definition of 'control' implies that one has the power to influence or direct the course of events, or the ability to manage a machine. The autonomy, interactivity and adaptability that characterize AI in Autonomous Weapon Systems inherently imply that control in the strict sense is not possible. Therefore, a different approach is needed to minimize the unintended consequences of AWS. Several scholars have described the concept of Human Oversight in Autonomous Weapon Systems and in AI in general. Recently, Taddeo and Floridi (2018) argued that human oversight procedures are necessary to minimize unintended consequences and to compensate for unfair impacts of AI. In my PhD project, I will analyse the concepts needed to define, model, evaluate and ensure human oversight in Autonomous Weapons, and design a technical architecture to implement this.


1986 ◽  
Vol 30 (6) ◽  
pp. 595-598
Author(s):  
J. Peter Kincaid

This symposium is a follow-up to a symposium held at last year's HFS meeting, "Training Technology in the 1990s: Development, Application and Research Issues." Representatives from the three military services discussed how many facets of training technology would affect current and future design applications and research issues relevant to military training systems. Two topics from that session (artificial intelligence and embedded training) and one other topic (computer-based authoring of technical information) have been selected for in-depth discussion. Each technology is computer-based and has been exploited to only a limited degree. The object of this symposium is to provide a focus for describing how the three technologies are important for emerging and future training systems. For example, nearly all technical information (TI) for maintaining and operating weapon systems in the field is currently paper-based, but the Department of Defense is committed to transitioning to electronic delivery of TI within the next decade. Many R&D issues must be resolved in the interim. Similarly, the technologies of embedded training and artificial intelligence have considerable potential for future training systems once a number of R&D issues are successfully addressed. All three services have on-going research and development programs for the technologies covered in this symposium. Each topic is presented by representatives from at least two military behavioral laboratories: for computer-based authoring, Naval Training Systems Center (NTSC), Naval Personnel Research and Development Center (NPRDC) and Army Research Institute (ARI); for artificial intelligence, Air Force Human Resources Laboratory (AFHRL) and NTSC; and for embedded training, NTSC and ARI. The goals of the symposium are: (1) to make clearer the most pressing R&D issues associated with these technologies, and (2) to discuss how future training systems might incorporate them.


2019 ◽  
Vol 33 (02) ◽  
pp. 169-179 ◽  
Author(s):  
Amandeep Singh Gill

How will emerging autonomous and intelligent systems affect the international landscape of power and coercion two decades from now? Will the world see a new set of artificial intelligence (AI) hegemons just as it saw a handful of nuclear powers for most of the twentieth century? Will autonomous weapon systems make conflict more likely or will states find ways to control proliferation and build deterrence, as they have done (fitfully) with nuclear weapons? And importantly, will multilateral forums find ways to engage the technology holders, states as well as industry, in norm setting and other forms of controlling the competition? The answers to these questions lie not only in the scope and spread of military applications of AI technologies but also in how pervasive their civilian applications will be. Just as civil nuclear energy and peaceful uses of outer space have cut into and often shaped discussions on nuclear weapons and missiles, the burgeoning uses of AI in consumer products and services, health, education, and public infrastructure will shape views on norm setting and arms control. New mechanisms for trust and confidence-building measures might be needed not only between China and the United States—the top competitors in comprehensive national strength today—but also among a larger group of AI players, including Canada, France, Germany, India, Israel, Japan, Russia, South Korea, and the United Kingdom.


Author(s):  
Naresh Kshetri

The study of computer ethics has reached a point where ethics for Artificial Intelligence, Robots, Fuzzy Systems, Autonomous Vehicles and Autonomous Weapon Systems are implemented so that machines can operate without human intervention and without harming others. This survey presents many previous works in the field of computer ethics with respect to artificial intelligence, robot weaponry, fuzzy systems and autonomous vehicles. The paper discusses the different ethics and scenarios up through current technological advancements and summarizes the advantages and disadvantages of the different ethics and the need for morality. It is observed that all ethics are equally important today, but human control and responsibility matter. The most recent technology can be implemented or improved through careful observation and the involvement of organizations such as the United Nations, the International Committee for Robot Arms Control, the Geneva Conventions and so on.


Author(s):  
Dmitrii V. Bakhteev

The matter under research is the legal patterns of interaction between society, individuals and artificial intelligence technologies. Its elements are the technological grounds for the functioning of artificial intelligence systems; the potential risks and negative consequences of using this technology, illustrated by the intelligent processing of personal data and by autonomous vehicles and weapon systems; and the ethical and legal approaches to its regulation. Bakhteev analyzes approaches to describing the position of artificial intelligence systems and whether such systems have personhood and thus certain rights. The research is based on the method of modelling, which is used to describe the stages of ethical-legal research into artificial intelligence technology. The author also describes the different ways in which society responds to the development of this technology. The main conclusions of the research are a description of the stages of artificial intelligence studies, in particular the analysis of the technology itself, of the associated risks and societal responses, and of the creation of ethical and then legal grounds for regulating this technology. The author presents the results of an analysis of possible ethical-legal models of the subjectivity of artificial intelligence systems, from the point of view of the need and the possibility of granting them certain rights. These models include the instrumental, tolerant, xenophobic and empathetic models. The author also states the main provisions of a code of ethics for developers and users of artificial intelligence systems.


Author(s):  
MOJCA PEŠEC

The development of artificial intelligence will have a significant impact on international security and on the use of the military instrument of power. One of the most important tasks for national security professionals and decision-makers is therefore to prepare for the repercussions of artificial intelligence development. In the development of military capabilities, artificial intelligence is integrated into intelligence, observation, control and reconnaissance applications, as well as into logistics, cyber operations, information operations, command and control systems, semi-autonomous and autonomous vehicles, and lethal autonomous weapon systems. The artificial intelligence revolution is not going to happen tomorrow. Pre-prepared policies and the knowledge shared by policy- and decision-makers can therefore help us manage the unknowns ahead. Keywords: Artificial intelligence, national security, military instrument of power, military capabilities, decision-makers


2020 ◽  
Vol 38 (1) ◽  
pp. 36-42
Author(s):  
Jürgen Altmann

New military technologies are being developed at a high pace, with the USA in the lead. Intended application areas are space weapons and ballistic missile defence, hypersonic missiles, autonomous weapon systems, and cyber war. Generic technologies include artificial intelligence, additive manufacturing, synthetic biology and gene editing, and soldier enhancement. Problems for international security and peace, namely arms races and destabilisation, will likely result from properties shared by several technologies: wider availability, easier access and smaller systems; shorter times for attack, warning and decisions; and conventional-nuclear entanglement. Preventive arms control is urgently needed.


2020 ◽  
pp. 141-177
Author(s):  
Stanislav Abaimov ◽  
Maurizio Martellini

2020 ◽  
pp. 67-86
Author(s):  
Maria Saraiva

This article examines the more obscure dimensions of the use of Artificial Intelligence systems in Defense, with a particular focus on lethal autonomous weapon systems. Based on the need to regulate these disruptive technologies in military applications, this paper defends the preventive prohibition of these armaments and makes proposals for a global regulation of the use of Artificial Intelligence in military strategy. The article argues that autonomous systems aggravate the difficulties in managing the instruments of armed violence, which may undermine the foundations of strategy. It also defends the need to promote a global arms control architecture, taking into account that today it is already possible to use Artificial Intelligence applications in all military operational domains and that these are increasingly interrelated.

