Initiators and Triggering Conditions for Adaptive Automation in Advanced Small Modular Reactors

Author(s):  
Katya L. Le Blanc ◽  
Johanna H. Oxstrand

It is anticipated that Advanced Small Modular Reactors (AdvSMRs) will employ high degrees of automation. High levels of automation can enhance system performance, but often at the cost of reduced human performance. Automation can lead to human out-of-the-loop issues, unbalanced workload, complacency, and other problems if it is not designed properly. Researchers have proposed adaptive automation (defined as dynamic or flexible allocation of functions) as a way to obtain the benefits of higher levels of automation without the human performance costs. Adaptive automation has the potential to balance operator workload and enhance operator situation awareness by allocating functions to operators in a way that is sensitive to overall workload and capabilities at the time of operation. However, there are still a number of questions regarding how to design adaptive automation effectively enough to achieve that potential. One of those questions concerns how to initiate (or trigger) a shift in automation in order to provide maximal sensitivity to operator needs without introducing undesirable consequences (such as unpredictable mode changes). Several triggering mechanisms for shifts in adaptive automation have been proposed, including operator-initiated, critical-event, performance-based, physiological-measurement, model-based, and hybrid methods. As part of a larger project to develop design guidance for human-automation collaboration in AdvSMRs, researchers at Idaho National Laboratory have investigated the effectiveness and applicability of each of these triggering mechanisms in the context of AdvSMRs. Researchers reviewed the empirical literature on adaptive automation and assessed each triggering mechanism based on the human-system performance consequences of employing that mechanism. Researchers also assessed the practicality and feasibility of using each mechanism in the context of an AdvSMR control room. Results indicate that there are tradeoffs associated with each mechanism, but that some are more applicable to the AdvSMR domain than others. The two mechanisms that consistently improve performance in laboratory studies are operator-initiated adaptive automation based on hierarchical task delegation and an electroencephalogram (EEG)-based measure of engagement. Current EEG methods are intrusive and require intensive analysis; therefore, they are not recommended for AdvSMR control rooms at this time. Researchers also discuss limitations in the existing empirical literature and make recommendations for further research.
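To make the physiological triggering mechanism concrete, the sketch below toggles task allocation using the EEG engagement index (beta power divided by the sum of alpha and theta power) that underlies the engagement-based laboratory studies this literature draws on. It is a minimal illustration, not the authors' method: the band-power inputs, window size, thresholds, and allocation interface are all assumptions introduced here.

# Sketch of a physiological (EEG-based) trigger for adaptive automation.
# Uses the engagement index beta / (alpha + theta) from the adaptive
# automation literature; band powers, the window size, and the
# allocation interface are illustrative assumptions.

from collections import deque
from statistics import mean

class EngagementTrigger:
    def __init__(self, window: int = 40):
        # Rolling window of recent engagement samples; the running mean
        # serves as the operator's own baseline.
        self.history = deque(maxlen=window)

    @staticmethod
    def engagement_index(alpha: float, beta: float, theta: float) -> float:
        """EEG engagement index: beta / (alpha + theta)."""
        return beta / (alpha + theta)

    def update(self, alpha: float, beta: float, theta: float) -> str:
        """Return the suggested allocation for the next task cycle."""
        idx = self.engagement_index(alpha, beta, theta)
        self.history.append(idx)
        baseline = mean(self.history)
        # Negative-feedback logic as in the laboratory paradigm:
        # low engagement -> hand the task back to the operator;
        # high engagement -> shift the task to automation.
        if idx < baseline:
            return "manual"      # re-engage the operator
        return "automatic"       # offload the operator

In the laboratory paradigm this decision is re-evaluated on a fixed cycle (on the order of seconds); a control-room implementation would also need the predictability safeguards against abrupt mode changes that the abstract calls for.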

Author(s):  
Mica R. Endsley

As autonomous and semiautonomous systems are developed for automotive, aviation, cyber, robotics, and other applications, the ability of human operators to effectively oversee and interact with them when needed poses a significant challenge. An automation conundrum exists: as more autonomy is added to a system, and as its reliability and robustness increase, the situation awareness of human operators becomes lower and they become less likely to be able to take over manual control when needed. The human–autonomy systems oversight model integrates several decades of relevant autonomy research on operator situation awareness, out-of-the-loop performance problems, monitoring, and trust, which are all major challenges underlying the automation conundrum. Key design interventions for improving human performance in interacting with autonomous systems are integrated into the model, including human–automation interface features and central automation interaction paradigms comprising levels of automation, adaptive automation, and granularity of control approaches. Recommendations for the design of human–autonomy interfaces are presented and directions for future research are discussed.


2017 ◽  
Vol 12 (1) ◽  
pp. 29-34 ◽  
Author(s):  
Mica R. Endsley

The concept of different levels of automation (LOAs) has been pervasive in the automation literature since its introduction by Sheridan and Verplank. LOA taxonomies have been very useful in guiding understanding of how automation affects human cognition and performance, with several practical and theoretical benefits. Over the past several decades, a wide body of research has been conducted on the impact of various LOAs on human performance, workload, and situation awareness (SA). LOA has a significant effect on operator SA and level of engagement, which helps to ameliorate out-of-the-loop performance problems. Together with other aspects of system design, including adaptive automation, granularity of control, and automation interface design, LOA is a fundamental design characteristic that determines the ability of operators to provide effective oversight of and interaction with system autonomy. LOA research provides a solid foundation for guiding the creation of effective human–automation interaction, which is critical for the wide range of autonomous and semiautonomous systems currently being developed across many industries.
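For reference, the sketch below encodes a common paraphrase of the ten Sheridan-Verplank levels as a Python enum; the level wordings are paraphrased from the literature rather than quoted, and the helper predicate is purely illustrative.

from enum import IntEnum

class LOA(IntEnum):
    """Paraphrase of the ten Sheridan-Verplank (1978) levels of
    automation; wording is paraphrased, not quoted."""
    MANUAL = 1               # human does everything
    SUGGEST_SET = 2          # computer offers a full set of action alternatives
    NARROW_SET = 3           # computer narrows the alternatives down
    SUGGEST_ONE = 4          # computer suggests a single action
    EXECUTE_IF_APPROVED = 5  # executes the suggestion if the human approves
    VETO_WINDOW = 6          # executes after a limited time for human veto
    EXECUTE_THEN_INFORM = 7  # executes, then necessarily informs the human
    INFORM_IF_ASKED = 8      # executes, informs the human only if asked
    INFORM_IF_IT_DECIDES = 9 # executes, informs only if it decides to
    FULL_AUTONOMY = 10       # decides and acts, ignoring the human

def requires_human_consent(level: LOA) -> bool:
    # Levels 1-5 keep the human in the decision loop before execution;
    # at level 6 and above, authority shifts to the automation.
    return level <= LOA.EXECUTE_IF_APPROVED

Framing the taxonomy as an ordered type makes explicit why intermediate levels matter for design: a system's LOA can be compared, raised, or lowered, which is exactly the lever that adaptive automation manipulates.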


2017 ◽  
Vol 12 (1) ◽  
pp. 7-24 ◽  
Author(s):  
David B. Kaber

The current cognitive engineering literature includes a broad range of models of human–automation interaction (HAI) in complex systems. Some of these models characterize types and levels of automation (LOAs) and relate different LOAs to implications for human performance, workload, and situation awareness as bases for systems design. However, some have suggested that the LOAs approach has overlooked key issues that need to be considered during the design process. Others are simply unsatisfied with the current state of the art in modeling HAI. In this paper, I argue that abandoning an existing framework with some utility for design makes little sense unless the cognitive engineering community can provide the broader design community with other sound alternatives. On this basis, I summarize issues with existing definitions of LOAs, including (a) presumptions of human behavior with automation and (b) imprecision in defining behavioral constructs for assessment of automation. I propose steps for advances in LOA frameworks. I provide evidence of the need for precision in defining behavior in use of automation as well as a need for descriptive models of human performance with LOAs. I also provide a survey of other classes of HAI models, offering insights into ways to achieve descriptive formulations of taxonomies of LOAs to support conceptual and detailed systems design. The ultimate objective of this line of research is reliable models for predicting human and system performance to serve as a basis for design.


Author(s):  
Lawrence J. Hettinger ◽  
Bart J. Brickman ◽  
James McKinney

A user-centered design philosophy attaches primary importance to human-machine system performance as the key criterion in assessing the operational utility of complex systems. When the system under consideration is uniquely novel and relies on relatively immature technologies, system validation must occur at a number of points in the design process. Particularly in these situations, human-system performance testing must inform engineering development throughout the entire design cycle, not just at its conclusion. In this paper we describe an empirical effort designed to validate novel technical approaches to the design of a naval command center intended to support high levels of tactical performance with severely reduced personnel. Using a human-in-the-loop simulation, we assessed the initial validity of our design concepts by measuring individual and team performance in realistic simulated tasks. By analyzing metrics associated with operational system performance, operator workload, and situation awareness, we were able to identify the functional aspects of the design, as well as those that needed further user-centered development.


2003 ◽  
Author(s):  
Nathan R. Bailey ◽  
Mark W. Scerbo ◽  
Frederick G. Freeman ◽  
Peter J. Mikulka ◽  
Lorissa A. Scott

Author(s):  
Neville Moray ◽  
Toshiyuki Inagaki ◽  
Makoto Itoh

Sheridan's "Levels of Automation" were explored in an experiment on fault management of a continuous process control task that included situation-adaptive automation. Levels of automation granting the automation more or less autonomy, and offering different levels of advice to the operator, were compared under automatic diagnosis of varying reliability. The efficiency of process control and of fault management was examined under both human and automated fault management, and the aspects of the task for which the human or the automation was the more efficient were identified. The results are related to earlier work by Lee, Moray, and Muir on trust and self-confidence in the allocation of function.
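The cited Lee and Moray line of work found that operators tend to allocate a function to automation when their trust in the automation exceeds their self-confidence in manual control, with some inertia against switching modes. The sketch below is a minimal illustration of that decision rule under those assumptions; the inertia parameter and all names are introduced here for illustration.

# Illustrative sketch of the trust vs. self-confidence allocation rule
# associated with Lee and Moray's studies: rely on automation when
# trust exceeds self-confidence, take manual control otherwise. The
# inertia term is an assumption capturing the observed reluctance
# to switch modes.

def allocate(trust: float, self_confidence: float,
             current_mode: str, inertia: float = 0.1) -> str:
    """Return 'automatic' or 'manual' for the next control period.

    trust and self_confidence are subjective ratings on a common
    scale; inertia biases the decision toward the current mode.
    """
    margin = trust - self_confidence
    if current_mode == "automatic":
        margin += inertia   # stay automatic unless confidence clearly wins
    else:
        margin -= inertia   # stay manual unless trust clearly wins
    return "automatic" if margin > 0 else "manual"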


1986 ◽  
Vol 30 (9) ◽  
pp. 905-907
Author(s):  
Robert M. Elton

The MANPRINT (Manpower and Personnel Integration) Program is a comprehensive program designed to enhance human performance and reliability during weapon system development, with the overall goal of optimizing total system performance. Total system performance is a function of equipment performance and human performance as they are affected by varying environmental conditions, which include physical, social, and operational conditions. The challenge the U.S. Army faces today is to ensure these issues are addressed early in, and continuously throughout, the design process.

