Allocation of Function, Adaptive Automation, and Fault Management

Author(s):  
Neville Moray
Toshiyuki Inagaki
Makoto Itoh

Sheridan's “levels of automation” were explored in an experiment on fault management of a continuous process control task that included situation-adaptive automation. Levels of automation differing in autonomy and in the amount of advice given to the operator were compared, together with automatic diagnosis of varying reliability. The efficiency of process control and of fault management was examined under both human and automatic fault management, and the aspects of the task for which the human or the automation was more efficient were identified. The results are related to earlier work on trust and self-confidence in allocation of function by Lee, Moray, and Muir.

Author(s):  
David B. Kaber
Mica R. Endsley

Human out-of-the-loop (OOTL) performance problems in overseeing automated systems have motivated interest in intermediate levels of automation and adaptive automation (AA) as methods for improving operator efficiency when working with such systems. In this paper, we discuss the current state of research into level of automation (LOA) and AA in complex, dynamic control systems. Different levels of automation and taxonomies of LOA, as well as strategies for AA, are identified. Empirical studies independently demonstrating the effectiveness of LOA and AA for combating the negative consequences of OOTL performance and improving human functioning are reviewed. On the basis of these studies, the need to assess the combined effectiveness of LOA and AA in a dynamic control task is established. An experiment that addresses this need is presented. Thirty university students performed the control task and a secondary monitoring task, with various levels of automation (varying degrees of computer assistance) and AA (varying durations of assistance) applied to the former. Testing involved five levels of automation allocated over three different cycle times. Results suggest that LOA and AA are not additive in their effect on automated system functioning; rather, each affects performance in very different ways. Level of automation had a significant effect on performance of the dynamic control task, while AA significantly affected performance on the secondary monitoring task, indicating an impact on operator workload. The results of this study are directly applicable to automated system design decisions regarding which system functions should be performed by a human operator or a computer controller and for how long such performance should occur.
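The allocation scheme studied here, levels of automation combined with fixed adaptive-automation cycle times, can be pictured with a small sketch. The Python snippet below is only an illustration under assumed names and values (the level labels, cycle duration, and allocation rule are not taken from the study); it shows one way an adaptive-automation controller might hand a function back and forth between human and computer on a fixed schedule.

```python
from dataclasses import dataclass
from enum import IntEnum

class LOA(IntEnum):
    """Illustrative intermediate levels of automation (labels assumed, not the study's taxonomy)."""
    MANUAL = 1
    ACTION_SUPPORT = 2
    BATCH_PROCESSING = 3
    SHARED_CONTROL = 4
    SUPERVISORY = 5

@dataclass
class AllocationCycle:
    """Cycles task allocation between human and computer over a fixed period."""
    loa: LOA                   # level applied while the computer holds the task
    cycle_s: float             # total cycle duration in seconds (assumed value)
    automated_fraction: float  # share of each cycle under computer control

    def controller_at(self, t: float) -> str:
        """Return who holds the dynamic control task at time t (seconds)."""
        phase = (t % self.cycle_s) / self.cycle_s
        return "computer" if phase < self.automated_fraction else "human"

# Example: LOA 4 applied for 40 s of every 60 s cycle (hypothetical values).
cycle = AllocationCycle(loa=LOA.SHARED_CONTROL, cycle_s=60.0, automated_fraction=2 / 3)
print(cycle.controller_at(15.0))  # -> "computer"
print(cycle.controller_at(50.0))  # -> "human"
```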


Author(s):  
Daniela Miele
James Ferraro
Mustapha Mouloua

The goal of this study was to empirically examine the relationship between individuals' reported trust in automated driving features and their level of self-confidence when driving. The study used a series of vignettes to depict three different levels of automation in accordance with the SAE International Levels of Automation. The three vignettes portrayed low- (Level 1), moderate- (Level 3), and high-functioning (Level 5) autonomous driving features. A driving self-efficacy scale and a trust in automation scale were used to collect data about individuals' attitudes toward the automation. It was hypothesized that self-confidence and level of automation would be significantly related to operators' trust, and further that the level of automation would significantly affect the amount of trust placed in the autonomous features. Results indicated significant relationships between self-confidence and trust, as well as between level of automation and trust.


2011
Vol 5 (2)
pp. 209-231
Author(s):  
Ewart de Visser
Raja Parasuraman

In many emerging civilian and military operations, human operators are increasingly being tasked to supervise multiple robotic uninhabited vehicles (UVs) with the support of automation. As 100% automation reliability cannot be assured, it is important to understand the effects of automation imperfection on performance. In addition, adaptive aiding may help counter any adverse effects of static (fixed) automation. Using a high-fidelity multi-UV simulation involving both air and ground vehicles, two experiments examined the effects of automation reliability and adaptive automation on human-system performance under different levels of task load. In Experiment 1, participants performed a reconnaissance mission while assisted by an automatic target recognition (ATR) system whose reliability was low, medium, or high. Overall human-robot team performance was higher than the performance of either the human or the ATR alone. In Experiment 2, participants performed a similar reconnaissance mission with no ATR, with static automation, or with adaptive automation keyed to task load. Participant trust and self-confidence were higher and workload was lower for adaptive automation compared with the other conditions. The results show that human-robot teams can benefit from imperfect static automation even in high task load conditions and that adaptive automation can provide additional benefits in trust and workload.
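As one way to picture “adaptive automation keyed to task load,” the sketch below engages an automatic target recognition aid only when the operator's load exceeds a threshold. The function name, inputs, and threshold values are hypothetical illustrations, not the rule used in the experiment.

```python
def atr_engaged(num_active_uvs: int, pending_targets: int,
                uv_threshold: int = 4, target_threshold: int = 3) -> bool:
    """Hypothetical adaptive-aiding rule: engage the ATR aid when task load is high.

    Thresholds are assumed for illustration only; the study keyed adaptation to
    task load but did not necessarily use this rule.
    """
    return num_active_uvs >= uv_threshold or pending_targets >= target_threshold

# Low load: the operator works unaided; high load: the aid switches on.
print(atr_engaged(num_active_uvs=2, pending_targets=1))  # False
print(atr_engaged(num_active_uvs=6, pending_targets=1))  # True
```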


2021
Vol 11 (1)
Author(s):  
Irene La Fratta
Sara Franceschelli
Lorenza Speranza
Antonia Patruno
Carlo Michetti
...  

It is well known that soccer has the potential to produce high levels of stress and anxiety, and that these are linked to cortisol (C) variations. To date, much research has been devoted to understanding how oxytocin (OT) can affect anxiety in response to a challenge. The aim of this study was to investigate, in 56 young male soccer players, the psychophysiological stress response 96 and 24 h before a tournament soccer match, in order to establish whether athletes who won or lost showed different levels of C and OT or different expressions of the subcomponents of competitive state anxiety. We found that winners had significantly lower cognitive anxiety and higher self-confidence scores than losers. Significant differences between winners and losers in C and OT concentrations were also observed, with higher OT levels in those who won and higher C levels in those who lost. Our results show interesting associations between OT, C, feelings of anxiety, and the outcome of competition.


Author(s):  
Wyatt McManus
Jing Chen

Modern surface transportation vehicles often include different levels of automation, and higher automation levels have the potential to affect surface transportation in unforeseen ways. For example, connected vehicles with higher levels of automation are at greater risk of hacking attempts, because automated driving assistance systems often rely on onboard sensors and internet connectivity (Amoozadeh et al., 2015). As the automation level of vehicle control rises, it is necessary to examine the effect that different levels of automation have on driver-vehicle interactions. While research into the effect of automation level on driver-vehicle interactions is growing, research into how automation level affects drivers' responses to vehicle hacking attempts is very limited. In addition, auditory warnings have been shown to effectively attract a driver's attention during a driving task, which is often visually demanding (Baldwin, 2011; Petermeijer, Doubek, & de Winter, 2017). An auditory warning can be either speech-based, containing semantic information (e.g., “car in blind spot”), or non-semantic (e.g., a tone or an earcon), and these can influence driver behavior differently (Sabic, Mishler, Chen, & Hu, 2017). The purpose of the current study was to examine the effect of level of automation and warning type on driver responses to novel critical events, using vehicle hacking attempts as a concrete example, in a driving simulator.

The study compared how level of automation (manual vs. automated) and warning type (non-semantic vs. semantic) affected drivers' responses to a vehicle hacking attempt using time to collision (TTC) values, maximum steering wheel angle, number of successful responses, and other measures of response. A full factorial between-subjects design with the two factors yielded four conditions (Manual Semantic, Manual Non-Semantic, Automated Semantic, and Automated Non-Semantic). Seventy-two participants recruited through SONA (odupsychology.sona-systems.com) completed two simulated drives to school in a driving simulator. The first drive ended with the participant safely arriving at school. A two-second warning was presented three quarters of the way through the second drive and was immediately followed by a simulated vehicle hacking attempt. The warning either stated “Danger, hacking attempt incoming” in the semantic conditions or was a 500 Hz sine tone in the non-semantic conditions. The hacking attempt lasted five seconds before simulating a crash into another vehicle and ending the simulation if the driver did not intervene.

Our results revealed no significant effect of level of automation or warning type on TTC or successful response rate. However, there was a significant effect of level of automation on maximum steering wheel angle, a measure of response quality (Shen & Neyens, 2017): manual drivers had safer responses to the hacking attempt, with smaller maximum steering wheel angles. In addition, an effect of warning type that approached significance was found for maximum steering wheel angle, such that participants who received a semantic warning had more severe and dangerous responses to the hacking attempt. The TTC and successful-response results from the current experiment do not match those in the previous literature; these null results were potentially due to the warning implementation time and the complexity of the vehicle hacking attempt. In contrast, the maximum steering wheel angle results indicate that level of automation and warning type affected the safety and severity of participants' responses to the vehicle hacking attempt, suggesting that both factors may influence responses to hacking attempts in some capacity. Further research will be required to determine whether level of automation and warning type affect participants' ability to safely respond to vehicle hacking attempts.

Acknowledgments. We are grateful to Scott Mishler for his assistance with STISIM programming and to Faye Wakefield, Hannah Smith, and Pettie Perkins for their assistance in data collection.
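Time to collision, one of the response measures above, is conventionally computed as the remaining gap to the lead vehicle divided by the closing speed. The minimal sketch below uses this standard definition; the variable names are assumed and this is not the study's analysis code.

```python
def time_to_collision(gap_m: float, own_speed_mps: float, lead_speed_mps: float) -> float:
    """Conventional TTC: remaining gap divided by closing speed.

    Returns infinity when the following vehicle is not closing on the lead vehicle.
    """
    closing_speed = own_speed_mps - lead_speed_mps
    if closing_speed <= 0:
        return float("inf")
    return gap_m / closing_speed

# Example: a 30 m gap, ego vehicle at 25 m/s, lead vehicle at 15 m/s -> TTC = 3.0 s.
print(time_to_collision(30.0, 25.0, 15.0))
```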

