Military Robots and the Question of Responsibility

2014 ◽  
Vol 5 (1) ◽  
pp. 1-14 ◽  
Author(s):  
Lambèr Royakkers ◽  
Peter Olsthoorn

Most unmanned systems used in operations today are unarmed and mainly used for reconnaissance and mine clearing, yet the increase in the number of armed military robots is undeniable. The use of these robots raises some serious ethical questions. For instance: who can reasonably be held morally responsible when a military robot is involved in an act of violence that would normally be described as a war crime? In this article, the authors critically assess the attribution of responsibility with respect to the deployment of both non-autonomous and non-learning autonomous lethal military robots. They start by looking at the role of those with whom responsibility normally lies, the commanders, and argue that this is no different in the case of the above-mentioned robots. After that, they turn to those at the beginning and the end of the causal chain: the manufacturers and designers, and the human operators who remotely control armed military robots from behind a computer screen.

2015 ◽  
pp. 2068-2082
Author(s):  
Lambèr Royakkers ◽  
Peter Olsthoorn



2019 ◽  
pp. 394-411
Author(s):  
Lambèr Royakkers ◽  
Peter Olsthoorn

Although most unmanned systems that militaries use today are still unarmed and predominantly used for surveillance, it is especially the proliferation of armed military robots that raises some serious ethical questions. One of the most pressing concerns the moral responsibility in case a military robot uses violence in a way that would normally qualify as a war crime. In this chapter, the authors critically assess the chain of responsibility with respect to the deployment of both semi-autonomous and (learning) autonomous lethal military robots. They start by looking at military commanders because they are the ones with whom responsibility normally lies. The authors argue that this is typically still the case when lethal robots kill wrongly – even if these robots act autonomously. Nonetheless, they next look into the possible moral responsibility of the actors at the beginning and the end of the causal chain: those who design and manufacture armed military robots, and those who, far from the battlefield, remotely control them.




Author(s):  
Mouad Bounouar ◽  
Richard Bearee ◽  
Ali Siadat ◽  
Tahar-Hakim Benchekroun

2017 ◽  
Vol 2 ◽  
pp. 51
Author(s):  
Iris Chuoying Ouyang ◽  
Sasha Spala ◽  
Elsi Kaiser

A production experiment was conducted to investigate the role of perspective-taking in the prosodic marking of information structure. Participants played an interactive game in which they produced verbal instructions that directed an addressee to place objects in locations on the computer screen. We manipulated (i) the participants’ assumptions about the addressee’s familiarity with the objects and (ii) the addressee’s accuracy in identifying the objects. F0 measurements of the participants’ utterances were analyzed with Smoothing-spline ANOVA models. We find that speakers’ expectations about the addressee’s knowledge state influence the prosodic realization of both new and given information, and that speakers rapidly update their expectations based on the addressee’s behavior during the conversation.


Risk Analysis ◽  
2017 ◽  
Vol 37 (12) ◽  
pp. 2334-2349 ◽  
Author(s):  
Laura N. Rickard ◽  
Z. Janet Yang ◽  
Jonathon P. Schuldt ◽  
Gina M. Eosco ◽  
Clifford W. Scherer ◽  
...  

2019 ◽  
pp. 1482-1499 ◽  
Author(s):  
Leanne Hirshfield ◽  
Philip Bobko ◽  
Alex J. Barelka ◽  
Mark R. Costa ◽  
Gregory J. Funke ◽  
...  

Despite the prominence of human error in recent reports on the cyber domain, cyber warfare research to date has largely focused on the effects of cyber attacks on the target computer system. In contrast, there is little empirical work on the role of human operators during cyber breaches; more specifically, there is a need to understand the human-level factors at play when attacks occur. This paper views cyber attacks through the lens of suspicion, a construct that has been used in other contexts but inadequately defined in prior research. After defining the construct of suspicion, the authors demonstrate the role that suspicion plays as the conduit between computer operators' normal working behaviors and their ability to alter that behavior to detect and react to cyber attacks. With a focus on the user, rather than the target computer, the authors empirically develop a latent structure for a variety of types of cyber attacks, link that structure to levels of operator suspicion, link suspicion to users' cognitive and emotional states, and develop initial implications for cyber training.

