military robots
Recently Published Documents


TOTAL DOCUMENTS

58
(FIVE YEARS 8)

H-INDEX

8
(FIVE YEARS 0)

Author(s):  
Racquel D. Brown-Gaston ◽  
Anshu Saxena Arora

The United States Department of Defense (DoD) designs, constructs, and deploys social and autonomous robots and robotic weapons systems. Military robots are designed to follow the rules and conduct of the professions or roles they emulate, and ethical principles are expected to be applied in alignment with those roles. The application of these principles appears paramount during the COVID-19 global pandemic, in which substitute technologies have become crucial for carrying out duties while humans are constrained by safety restrictions. This article examines the ethical implications of the use of military robots. The research assesses the ethical challenges faced by the United States DoD regarding the use of social and autonomous robots in the military. The authors provide a summary of the current status of these lethal autonomous and social military robots, the ethical and moral issues related to their design and deployment, and a discussion of policies, and they call for an international discourse on the appropriate governance of such systems.


2021 ◽  
Vol 4 ◽  
pp. 3-7
Author(s):  
Ildar R. Begishev

The article deals with the criminal-law aspects of countering the illegal trafficking of autonomous military robots. Given the pace at which digital technologies are being introduced into the military sphere, autonomous military robots may well become a reality in the near future. This, in turn, makes it necessary, in order to maintain peace and preserve the security of humanity, to establish within a fairly short time the legal regulation of relations in the military sphere involving autonomous military robots, at both the international and domestic levels. The most important step in this area may be the creation and adoption of a Convention on the Prohibition of the Development, Production, and Use of Autonomous Military Robots and Their Components (Modules).


2020 ◽  
Vol 248 (3312) ◽  
pp. 14
Author(s):  
David Hambling

2019 ◽  
pp. 170-177
Author(s):  
Pol Trias ◽  
Alex Bordanova

On March 23, 2018, the exhibition "Design Does* for better and for worse" opened at the Museu del Disseny de Barcelona (Design Museum). The exhibition was steered by questions that provoked discussion about the roles and responsibilities of design in our society. It featured open-ended questions with neither a right nor a wrong answer, such as: should everything be automated? With this question, Design Does* presented the piece Death Inc., a reflection on design applied to the arms industry: an exploration of a new style of killing that arises from incorporating image-recognition technologies into military robots, and of the death of a human by a machine's hands and decision.


Author(s):  
Alexis R. Neigel ◽  
Gabriella M. Hancock

The chapter discusses the ergonomic and human factors issues surrounding life and death in terms of 21st-century design. In this chapter, the authors describe how current limitations in technologies that are specifically designed to be lethal afford greater pain and suffering than necessary. As human factors is a science dedicated to improving the quality of life, it is necessary to critically examine the end-of-life domain, an area of research that has been largely neglected by ergonomic practitioners. By providing an overview of current research in several areas, including euthanasia, remotely executed lethal operations, and fully autonomous military robots, the authors demonstrate the need to consider morality and ethics in the design process.


2019 ◽  
pp. 394-411
Author(s):  
Lambèr Royakkers ◽  
Peter Olsthoorn

Although most unmanned systems that militaries use today are still unarmed and predominantly used for surveillance, it is above all the proliferation of armed military robots that raises serious ethical questions. One of the most pressing is the question of moral responsibility when a military robot uses violence in a way that would normally qualify as a war crime. In this chapter, the authors critically assess the chain of responsibility with respect to the deployment of both semi-autonomous and (learning) autonomous lethal military robots. They start with military commanders, because they are the ones with whom responsibility normally lies; the authors argue that this typically remains the case when lethal robots kill wrongly, even when those robots act autonomously. They then examine the possible moral responsibility of the actors at the beginning and the end of the causal chain: those who design and manufacture armed military robots, and those who, far from the battlefield, remotely control them.


2019 ◽  
Vol 29 (3) ◽  
pp. 231-246

The article is a contribution to the ethical discussion of autonomous lethal weapons. The emergence of military robots acting independently on the battlefield is seen as an inevitable stage in the development of modern warfare, because they will provide a critical advantage to an army. Even though some social movements are already calling for a ban on "killer robots," there are also ethical arguments in favor of developing these technologies. In particular, the utilitarian tradition may find military robots ethically permissible if "non-human combat" would minimize the number of human victims. A deontological analysis, for its part, might suggest that ethics is impossible without an ethical subject. Immanuel Kant's ethical philosophy accommodates the intuition that there is a significant difference between a situation in which a person decides to kill another person and one in which a machine makes such a decision. Like animals, robots become borderline agents on the edges of "moral communities," and the discussion of animal rights shows how Kant's ethics operates with non-human agents. The key problem in the use of autonomous weapons is the transformation of war and the unpredictable risks associated with blurring the distinction between war and police work. The hypothesis of the article is that robots would not need to kill anyone to defeat the enemy; yet if no one dies in a war, there is no reason not to extend its operations to non-combatants, and no reason to sue for peace. The utilitarian analysis overlooks the possibility of such consequences. The main problem with autonomous lethal weapons is their autonomy, not their potential to be lethal.


2018 ◽  
pp. 339-349 ◽  
Author(s):  
Ulrike Esther Franke
