The Regulation of the Use of Artificial Intelligence (AI) in Warfare: Between International Humanitarian Law (IHL) and Meaningful Human Control

2021
Vol 23 (129)
pp. 67
Author(s):
Mateus De Oliveira Fornasier


Author(s):
Simon McKenzie
Eve Massingham

The obligations of international humanitarian law are not limited to the attacker; the defender is also required to take steps to protect civilians from harm. The requirement to take precautions against the effects of attack obliges the defender to minimize the risk that civilians and civilian objects will be harmed by enemy military operations. At its most basic, it requires defenders to locate military installations away from civilians. Furthermore, where appropriate, the status of objects should be clearly marked. It is – somewhat counterintuitively – about making it easier for the attacker to select lawful targets by making visible the distinction between civilian objects and military objectives. The increasing importance of digital infrastructure to modern life may make complying with these precautionary obligations more complicated. Maintaining separation between military and civilian networks is challenging because both operate using at least some of the same infrastructure, relying on the same cables, systems, and electromagnetic spectrum. In addition, the speed at which operations against digital infrastructure can occur increases the difficulty of complying with the obligation – particularly if such operations involve a degree of automation or the use of artificial intelligence (AI). This paper sets out the source and extent of the obligation to take precautions against hostile military operations and considers how it might apply to digital infrastructure. As well as clarifying the extent of the obligation, the paper applies it to digital infrastructure, giving examples of where system designers are already taking these obligations into account, and others where they must.


2015
Vol 6 (2)
pp. 247-283
Author(s):  
Jeroen van den Boogaard

Given the swift pace of technological development, the first truly autonomous weapon systems may be expected to become available in the near future. Once deployed, these weapons will use artificial intelligence to select and attack targets without further human intervention. Autonomous weapon systems raise the question of whether they can comply with international humanitarian law. The principle of proportionality is sometimes cited as an important obstacle to the lawful use of autonomous weapon systems. This article assesses whether the rule on proportionality in attacks would preclude the legal use of autonomous weapons. It analyses aspects of the proportionality rule that would militate against the use of autonomous weapon systems, and aspects that would appear to benefit the protection of the civilian population if such systems were used. The article concludes that autonomous weapons are unable to make proportionality assessments at the operational or strategic level on their own, and that humans should not be expected to be completely absent from the battlefield in the near future.


2021
pp. 1-24
Author(s):  
Jonathan Kwik
Tom Van Engers

Under international law, weapon capabilities and their use are regulated by the requirements of International Humanitarian Law (IHL). Currently, there are strong military incentives to equip capabilities with increasingly advanced artificial intelligence (AI), including opaque (less transparent) models. As opaque models sacrifice transparency for performance, it is necessary to examine whether their use remains in conformity with IHL obligations. First, we demonstrate that the incentives for automation drive AI toward complex task areas and dynamic, unstructured environments, which in turn necessitates resort to more opaque solutions. We subsequently discuss the ramifications of opaque models for foreseeability and explainability. Then, we analyse their impact on IHL requirements from a development, pre-deployment, and post-deployment perspective. We find that while IHL does not regulate opaque AI directly, the lack of foreseeability and explainability frustrates the fulfilment of key IHL requirements to the extent that the use of fully opaque AI could violate international law. States are urged to implement interpretability during development and to consider seriously the challenge of determining the appropriate balance between transparency and performance in their capabilities.
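To make the transparency-performance trade-off concrete, the sketch below shows one common interpretability technique that the kind of recommendation made in this abstract could draw on: training a simple, human-readable surrogate model to approximate the decisions of an opaque one. This is an illustrative assumption, not the authors' method; the models, features, and data here are entirely hypothetical.

```python
# Illustrative sketch: post-hoc interpretability via a surrogate model.
# Assumption: an opaque classifier already exists; we approximate its
# behaviour with a shallow decision tree whose rules a reviewer can audit.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))                # hypothetical sensor features
y = (X[:, 0] + X[:, 1] ** 2 > 1).astype(int)  # hypothetical ground truth

opaque = RandomForestClassifier(n_estimators=200).fit(X, y)

# Train the surrogate on the opaque model's *predictions*, not the labels,
# so the tree explains what the opaque model actually does.
surrogate = DecisionTreeClassifier(max_depth=3).fit(X, opaque.predict(X))

fidelity = (surrogate.predict(X) == opaque.predict(X)).mean()
print(f"surrogate fidelity to opaque model: {fidelity:.2%}")
print(export_text(surrogate, feature_names=[f"f{i}" for i in range(4)]))
```

A deeper tree would track the opaque model more faithfully but be harder to audit, which is precisely the transparency-performance balance the abstract asks states to weigh.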


2021
Vol 29
Author(s):  
Fatima Roumate

The ethics of artificial intelligence responds to a new dilemma: international society must provide a legal answer to the many ethical challenges that artificial intelligence creates. COVID-19 has accelerated the use of AI in all countries and all fields. The pandemic is hastening the transition to a society increasingly based on the use of, and reliance on, AI, which enhances existing threats and creates new risks to human rights. Artificial intelligence influences human rights and international humanitarian law. This paper addresses international mechanisms and ethics as new rules that can ensure the protection of human rights in the age of AI. Two arguments are discussed in this study. Given the ubiquitous and global reach of AI, the challenges it imposes require international legal oversight, a requirement that highlights the importance of ethical frameworks. In conclusion, the paper emphasizes that decisive action is needed to protect human rights in the age of AI: rethinking international law and human rights and strengthening ethical frameworks have become an obligation rather than a choice.


Author(s):  
Tim McFarland
Jai Galliott

The physical and temporal removal of the human from the decision to use lethal force underpins many of the arguments against the development of autonomous weapon systems. In response to these concerns, Meaningful Human Control has risen to prominence as a framing concept in the ongoing international debate. This chapter demonstrates that, in addition to lacking a universally accepted and precise definition, Meaningful Human Control is conceptually flawed as a basis for regulation. Overall, the chapter analyzes, problematizes, and explores this nebulous concept, and in doing so shows that it rests on the mistaken premise that the development of autonomous capabilities in weapon systems entails a loss of human control that somehow presents an insurmountable challenge to existing International Humanitarian Law.


Author(s):  
Jason Scholz
Jai Galliott

For the use of force to be lawful and morally just, future autonomous systems must not commit humanitarian errors or acts of fratricide. To achieve this, we distinguish a novel preventative form of minimally-just autonomy using artificial intelligence (MinAI) that averts attacks on protected symbols, protected sites, and signals of surrender. MinAI compares favorably with the maximally-just forms proposed to date. We examine how fears of speculative artificial general intelligence have distracted resources from making current weapons more compliant with international humanitarian law, particularly Additional Protocol I to the Geneva Conventions and its Article 36. Critics of our approach may argue that machine learning can be fooled, that combatants can commit perfidy to protect themselves, and so on. We confront these issues, including recent research on the subversion of AI, and conclude that the moral imperative for MinAI in weapons remains undiminished.
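As a concrete illustration of the preventative idea, the sketch below implements a minimal veto gate of the kind the abstract describes: engagement is blocked whenever a perception model reports any protected class. Everything here (the class names, the Detection type, the confidence threshold) is a hypothetical reading of MinAI, not the authors' implementation.

```python
# Hypothetical sketch of a MinAI-style veto gate: the system may never
# release a weapon while any protected entity is plausibly detected.
from dataclasses import dataclass

# Classes a perception model might report; the protected set reflects
# IHL-protected symbols, sites, and signals of surrender (illustrative).
PROTECTED = {"red_cross", "red_crescent", "hospital", "cultural_site", "white_flag"}

@dataclass
class Detection:
    label: str         # class reported by the perception model
    confidence: float  # model confidence in [0, 1]

def engagement_permitted(detections: list[Detection],
                         veto_threshold: float = 0.1) -> bool:
    """Return False if any protected class is detected above the threshold.

    The threshold is deliberately low: a false veto (holding fire) is far
    cheaper than a false release, so doubt is resolved against engagement.
    """
    return not any(
        d.label in PROTECTED and d.confidence >= veto_threshold
        for d in detections
    )

# Example: even a faint white-flag detection blocks the attack.
scene = [Detection("vehicle", 0.93), Detection("white_flag", 0.22)]
assert engagement_permitted(scene) is False
```

The asymmetric threshold is the design point: a minimally-just system only needs to recognize when it must not fire, a far narrower and more tractable perception task than positively identifying lawful targets.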


2021
Vol 7 (Extra-C)
pp. 259-272
Author(s):  
Ksenia Michailovna Belikova
Maryam Abdurakhmanovna Akhmadova

This article outlines the approaches of the international community, the BRICS countries, and other countries that have achieved noticeable results in the development and military use of artificial intelligence, in order to answer whether, and to what extent, existing international humanitarian law applies to a military conflict involving lethal autonomous weapon systems. Based on analytical reflection on information drawn from the referenced sources, the authors analyze national and international approaches, legislative instruments, and documents that shape the development of lethal autonomous weapon systems and the potential for their use. The results are presented as a set of national legal approaches and doctrinal provisions found in current law in this field, offered as a contribution to the further refinement of the concept of lethal autonomous weapon systems.


2020
Author(s):  
Emily Crawford
Alison Pert
