Human Performance Consequences of Automated Decision Aids

2012 ◽  
Vol 6 (1) ◽  
pp. 57-87 ◽  
Author(s):  
Dietrich Manzey ◽  
Juliane Reichenbach ◽  
Linda Onnasch

Two experiments are reported that investigate to what extent performance consequences of automated aids depend on the distribution of functions between human and automation and on the experience an operator has with an aid. In the first experiment, performance consequences of three automated aids for the support of a supervisory control task were compared. The aids differed in degree of automation (DOA). Compared with a manual control condition, primary and secondary task performance improved and subjective workload decreased with automation support, with effects dependent on DOA. Performance costs included return-to-manual performance issues that emerged for the most highly automated aid, as well as effects of complacency and automation bias, which emerged independent of DOA. The second experiment specifically addresses how automation bias develops over time and how this development is affected by prior experience with the system. Results show that automation failures entail stronger effects than positive experience (a reliably working aid). Furthermore, results suggest that commission errors in interaction with automated aids can result from three sorts of automation bias effects: (a) withdrawal of attention in terms of incomplete cross-checking of information, (b) active discounting of contradictory system information, and (c) inattentive processing of contradictory information analogous to a “looking-but-not-seeing” effect.

Author(s):  
Tobias Rieger ◽  
Dietrich Manzey

Objective The study addresses the impact of time pressure on human interactions with automated decision support systems (DSSs) and the related performance consequences. Background When humans interact with DSSs, this often results in worse performance than could be expected from the automation alone. Previous research has suggested that time pressure might make a difference by leading humans to rely more on a DSS. Method In two laboratory experiments, participants performed a luggage screening task either manually, supported by a highly reliable DSS, or supported by a DSS of low reliability. The time provided for inspecting the X-rays (4.5 s vs. 9 s) was varied within subjects as the time-pressure manipulation. Participants in the automation conditions were shown the automation’s advice either prior to (Experiment 1) or following (Experiment 2) their own inspection, before they made their final decision. Results In Experiment 1, time pressure compromised performance independent of whether the task was performed manually or with automation support. In Experiment 2, the negative impact of time pressure was found only in the manual condition but not in the two automation conditions. However, neither experiment revealed any positive impact of time pressure on overall performance, and the joint performance of human and automation was mostly worse than the performance of the automation alone. Conclusion Time pressure compromises the quality of decision making. Providing a DSS can reduce this effect, but only if the automation’s advice follows the assessment of the human. Application The study provides suggestions for the effective implementation of DSSs and supports concerns that highly reliable DSSs are not used optimally by human operators.
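A small Monte Carlo sketch can make the last point concrete: when a human only partially complies with advice from a more accurate aid, joint human-automation accuracy falls below that of the automation alone. All parameter values below (automation accuracy, unaided human accuracy, compliance rate) are illustrative assumptions, not values from the experiments.

import random

def simulate_joint_accuracy(trials=100_000, automation_acc=0.90,
                            human_acc=0.70, compliance=0.80):
    # Hypothetical parameters (not taken from the study):
    #   automation_acc -- probability the DSS advice is correct
    #   human_acc      -- probability the unaided human judgment is correct
    #   compliance     -- probability the human follows the DSS when the two disagree
    correct = 0
    for _ in range(trials):
        target_present = random.random() < 0.5
        advice = target_present if random.random() < automation_acc else not target_present
        own = target_present if random.random() < human_acc else not target_present
        if advice == own:
            decision = advice
        else:
            decision = advice if random.random() < compliance else own
        correct += (decision == target_present)
    return correct / trials

if __name__ == "__main__":
    print(f"joint human+DSS accuracy: {simulate_joint_accuracy():.3f}")
    print("automation alone        : 0.900")

With these assumed values the joint accuracy comes out near .86, below the .90 of the automation acting alone, mirroring the pattern reported above.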


2019 ◽  
Vol 13 (4) ◽  
pp. 295-309 ◽  
Author(s):  
Mary Cummings ◽  
Lixiao Huang ◽  
Haibei Zhu ◽  
Daniel Finkelstein ◽  
Ran Wei

A common assumption across many industries is that inserting advanced autonomy can often replace humans for low-level tasks, with cost reduction benefits. However, humans are often only partially replaced and moved into a supervisory capacity with reduced training. It is not clear how this shift from human to automation control and subsequent training reduction influences human performance, errors, and a tendency toward automation bias. To this end, a study was conducted to determine whether adding autonomy and skipping skill-based training could influence performance in a supervisory control task. In the human-in-the-loop experiment, operators performed unmanned aerial vehicle (UAV) search tasks with varying degrees of autonomy and training. At the lowest level of autonomy, operators searched images and, at the highest level, an automated target recognition algorithm presented its best estimate of a possible target, occasionally incorrectly. Results were mixed, with search time not affected by skill-based training. However, novices with skill-based training and automated target search misclassified more targets, suggesting a propensity toward automation bias. More experienced operators had significantly fewer misclassifications when the autonomy erred. A descriptive machine learning model in the form of a hidden Markov model also provided new insights for improved training protocols and interventional technologies.
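The abstract names a hidden Markov model as the descriptive analysis tool but gives no detail. The sketch below is a generic, hand-parameterized illustration of how hidden operator states might be decoded from observed action sequences with the Viterbi algorithm; the state labels, action coding, and all probabilities are hypothetical and are not taken from the study.

import numpy as np

# Hypothetical two-state model: operators are either "verifying" the autonomy's
# suggestion or "accepting" it without cross-checking. Observed actions are coded
# 0 = open raw imagery, 1 = confirm suggested target, 2 = reject suggested target.
states = ["verifying", "accepting"]
start = np.array([0.6, 0.4])
trans = np.array([[0.8, 0.2],
                  [0.3, 0.7]])
emit = np.array([[0.60, 0.20, 0.20],    # verifying: often inspects imagery
                 [0.05, 0.85, 0.10]])   # accepting: mostly confirms the advice

def viterbi(obs):
    """Most likely hidden-state sequence for an observed action sequence."""
    n, m = len(obs), len(states)
    logp = np.full((n, m), -np.inf)
    back = np.zeros((n, m), dtype=int)
    logp[0] = np.log(start) + np.log(emit[:, obs[0]])
    for t in range(1, n):
        for j in range(m):
            scores = logp[t - 1] + np.log(trans[:, j])
            back[t, j] = int(np.argmax(scores))
            logp[t, j] = scores[back[t, j]] + np.log(emit[j, obs[t]])
    path = [int(np.argmax(logp[-1]))]
    for t in range(n - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return [states[s] for s in reversed(path)]

print(viterbi([1, 1, 1, 0, 2, 0, 1]))

In the study itself, such parameters would be estimated from logged operator actions rather than specified by hand, but the decoding step is the same in spirit.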


Author(s):  
Ian McCandliss ◽  
Kevin Zish ◽  
J. Malcolm McCurry ◽  
J. Gregory Trafton

This study examines the impact of prior experience on the adoption of automation in a supervisory control task. Automation is typically implemented as a means of reducing a person’s effort or involvement in a task. When automation is first introduced in a new product, users’ experience with the yet-to-be-automated task varies: some users have experience with the task prior to the automation, while others have little to no prior experience. Automation adoption across levels of experience was investigated in a mixed-design study. One group was trained to use a manual version of a task before learning of an automated version. A second group was only trained to use the automated version of the task. The results of this study indicate that both training and experience are needed before users can make robust predictions about future automation adoption.


Author(s):  
Kelly Satterfield ◽  
Vincent F. Mancuso ◽  
Adam Strang ◽  
Eric Greenlee ◽  
Brent Miller ◽  
...  

Increases in cyber incidents have required substantial investments in cyber defense for national security. However, adversaries have begun moving away from traditional cyber tactics in order to escape detection by network defenders. The aim of some of these new types of attacks is not to steal information, but rather to create subtle inefficiencies that, when aggregated across a whole system, result in decreased system effectiveness. Such attacks are designed to evade detection for long durations, allowing them to cause as much harm as possible, and are therefore sometimes referred to as “low and slow” (e.g., Mancuso et al., 2013).

It is unknown how effective operators are likely to be at detecting and correctly diagnosing the symptoms of low and slow cyber attacks. Recent research by Hirshfield and colleagues (2015) suggests that the symptoms of an attack may need to be extreme in order to gain operator recognition, which calls into question the utility of relying on operators for detection altogether. Therefore, one goal for this research was to provide an initial exploration of the effects of attack deception and magnitude on operator behavior, performance, and detection of the attack.

Operators in these systems are not passive observers, however, but active agents attempting to further their task goals. As a result, operators may alter their behavior in response to degraded system capabilities. This suggests that changes in the pattern and frequency of operator behavior following the inception of a cyber attack could potentially be used to detect its onset, even without the operator being fully aware of those changes (Mancuso et al., 2014). Similarly, since low and slow attacks are designed to degrade overall system effectiveness, performance measures of system efficiency, such as the frequency and duration of completed tasks, may provide additional means to detect an ongoing cyber attack. As such, a second goal for the present research was to determine whether changes in operator behavior and system efficiency metrics could act as indicators of an active low and slow cyber attack.

Participants in this experiment performed a supervisory control task involving multiple unmanned aerial vehicles (UAVs). During the task, participant control over their UAVs was disrupted by a simulated cyber attack that caused affected UAVs to stop flying toward participant-selected destinations and enter an idle state. Aside from halting along their designated flight path, idled UAVs displayed no other indication of the cyber attack. The frequency of cyber attacks increased with time on task: attacks were relatively infrequent at the beginning of the task, occurring once in every five destination assignments, and were ubiquitous by the end, occurring after each destination assignment. Attack deception was manipulated with regard to participants’ approximate screen gaze location at the time of a cyber attack. In the overt condition, UAVs entered the idle state near the participant’s current focal area (indexed by the location of operator mouse interactions with the simulation), thereby providing some opportunity for operators to directly observe the effects of the cyber attack. In the covert condition, the attack occurred outside the operator’s current focal area, forcing them to rely on memory to detect the cyber attack. In the control condition, no cyber attacks occurred during the experiment.
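As a minimal illustration of the escalating attack schedule described above, the following sketch draws, for each destination assignment, whether an attack occurs, with the attack probability ramping from roughly one in five assignments at the start to every assignment by the end. The linear ramp and all values are assumptions for illustration, not the study’s actual schedule.

import random

def attack_schedule(n_assignments: int, start_rate: float = 0.2, end_rate: float = 1.0):
    # Hypothetical escalation: probability that an assignment triggers an
    # idle-state attack ramps linearly from start_rate to end_rate.
    schedule = []
    for i in range(n_assignments):
        p = start_rate + (end_rate - start_rate) * i / max(n_assignments - 1, 1)
        schedule.append(random.random() < p)
    return schedule

if __name__ == "__main__":
    attacks = attack_schedule(40)
    print(f"{sum(attacks)} of {len(attacks)} assignments attacked")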
Following the UAV supervisory control task, participants were asked a series of debriefing questions to determine whether they had noticed the UAV manipulation during the task. Most participants (approximately 64%) reported noticing the manipulation, but only after a series of questions prompting them to think of any problems they had encountered during the task. The remaining participants reported noticing no errors during the task.

Results regarding measures of performance and system efficiency indicated that performance decreased as the magnitude of the cyber attack increased. Measures of efficiency were calculated using fan-out (Olsen & Goodrich, 2003), which provided information regarding how many UAVs operators were able to control and how long UAVs were in an idle state during the trial. Operators controlled fewer vehicles, and vehicles sat idle for longer durations, as the magnitude of the cyber attack increased. However, these differences in efficiency did not reach statistical significance until relatively late in the trial.

Overall, operators seemed insensitive to the presence of the cyber attack, only disclosing the problem after being prompted several times through guided questions by the experimenter. However, significant changes in operator behavior and system efficiency were observed as the magnitude of the cyber attack increased. These results demonstrate that subtle cyber attacks designed to slowly degrade human performance were measurable, but the changes were not apparent until late in the experiment, when the attack was at its midpoint in magnitude. This experiment suggests that even though measurable changes in operator behavior may not occur until late in an attack, such metrics are more effective than reliance on operator detection.
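For reference, the fan-out metric mentioned above (Olsen & Goodrich, 2003) is commonly computed from how long a vehicle can be safely neglected relative to how long each interaction with it takes. A minimal sketch, with illustrative timing values rather than data from this experiment:

def fan_out(neglect_time: float, interaction_time: float) -> float:
    # One common formulation of fan-out: the number of vehicles a single
    # operator can service, estimated as
    # (neglect time + interaction time) / interaction time.
    return (neglect_time + interaction_time) / interaction_time

# Example with assumed values: a UAV can be ignored for 90 s after each 15 s interaction.
print(fan_out(neglect_time=90.0, interaction_time=15.0))  # -> 7.0

Lower effective fan-out and longer idle durations are the efficiency signatures the study tracked as attack magnitude grew.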


Ergonomics ◽  
2009 ◽  
Vol 52 (5) ◽  
pp. 512-523 ◽  
Author(s):  
Stefan Röttger ◽  
Krisztina Bali ◽  
Dietrich Manzey
