Levels of Automation in Human Factors Models for Automation Design: Why We Might Consider Throwing the Baby Out With the Bathwater

2017
Vol 12 (1)
pp. 42-49
Author(s):
Greg A. Jamieson
Gyrd Skraaning

This paper responds to Kaber’s reflections on the empirical grounding and design utility of the levels-of-automation (LOA) framework. We discuss the suitability of the existing human performance data for supporting design decisions in complex work environments. We question why human factors design guidance seems wedded to a model of questionable predictive value. We challenge the belief that LOA frameworks offer useful input to the design and operation of highly automated systems. Finally, we seek to expand the design space for human–automation interaction beyond the familiar human factors constructs. Taken together, our positions paint LOA frameworks as abstractions suffering a crisis of confidence that Kaber’s remedies cannot restore.

1986
Vol 30 (8)
pp. 771-775
Author(s):
Gary A. Klein
Christopher P. Brezovic

This study examined the types of human perception and performance information that training device designers need when making design decisions. A total of 50 experienced designers were studied. For a subset of 39 of these designers, the interviews focused on critical design decisions in which human perception and performance data were needed. The utility of the information sources used in these decisions was assessed; the existing technical literature base proved to be of little value in the designers' problem solving. The data indicated that systematic decision-making strategies were used in only a minority of cases. Instead, designers relied heavily on informal experiments and analogous cases for guidance in resolving design questions. The implication is that human factors specialists can have a stronger influence on design by identifying analogous cases and participating in prototype studies than by pointing to basic research findings.


Author(s):  
Kristopher Korbelak
Jeffrey Dressel
David Band
Jennifer Blanchard

Automated systems are not only commonplace but often a necessity for completing highly specialized tasks across many operational environments. The Transportation Security Administration (TSA) aims to enhance human performance and increase safety through the acquisition and implementation of various types of automated systems. The Human Performance Branch (HPB) at TSA supports this aim through research on the human factors that influence interactions with automation. Knowledge gained from HPB efforts informs TSA of the automated systems that will best suit worker needs, how to integrate these systems into the general workflow, and the relevant human factors that will support proper system use and, ultimately, enhance human performance. This discussion panel reviews a theoretical framework that TSA can use to assess multiple drivers of human performance in a consistent and standardized fashion. It also reviews several TSA projects investigating three categories of human factors known to influence performance with automation: human (i.e., individual differences, cognitive constraints), context (e.g., organizational influence, environment), and system characteristics (e.g., type of automation), and how those factors can be accounted for in the operational environment.


Author(s):  
Alan D. Swain

Quantification of human performance in man–machine systems is receiving more and more attention in human factors work. Obstacles to such quantification include: (1) complexity and subjectivity of available quantification methods, (2) grossness of assumptions behind these methods, and (3) resistance of some psychologists. Research is needed (1) to develop an improved human performance data bank, (2) to develop improved models and methods, and (3) to validate quantification data, models and methods. Some research is being done in these areas.


Author(s):  
Michael E. Watson
Christina F. Rusnock
Michael E. Miller
John M. Colombi

Humans perform critical functions in nearly every system, making them vital to consider during system development. Human Systems Integration (HSI) would ideally permit the human's impact on system performance to be accounted for during the systems engineering (SE) process, but effective processes are often not applied, especially in the early design phases. Failure to properly account for human capabilities and limitations during system design may lead to unreasonable expectations of the human. The result is a system design that makes unrealistic assumptions about the human, leading to an overestimation of the human's performance and thus the system's performance. This research proposes a method of integrating HSI with SE that allows human factors engineers to apply the Systems Modeling Language (SysML) and human performance simulation to describe and communicate human and system performance. Using these models, systems engineers can more fully understand the system's performance and make design decisions that account for the human.

A scenario is applied to illustrate the method, in which a system developer seeks to redesign an example system, Vigilant Spirit, by incorporating system automation to improve overall system performance. The example begins with a task analysis based on physical observation and analysis of data from 12 participants employing Vigilant Spirit; this analysis is depicted in SysML Activity and Sequence Diagrams. A human-in-the-loop experiment is used to study the performance and workload effects of humans applying Vigilant Spirit to conduct simulated remotely piloted aircraft surveillance and tracking missions. The results of the task analysis and the human performance data gathered from the experiment are used to build a human performance model in the Improved Performance Research Integration Tool (IMPRINT). IMPRINT allows the analyst to represent a mission in terms of functions and tasks performed by the system and the human, and then run a discrete event simulation of the system and human accomplishing the mission to observe the effects of defined variables on performance and workload. The model was validated against performance data from the human-subjects experiment.

In the scenario, six different scan algorithms, which varied in scan accuracy and speed, were simulated. These algorithms represented different potential system trades, as factors such as technology choices and hardware architectures could influence algorithm accuracy and speed. These automation trades were incorporated into the system's block definition diagram (BDD), requirements diagram, and parametric SysML diagrams. These diagrams were modeled from a systems engineer's perspective; therefore, they originally placed less emphasis on the human. The BDD portrayed the structural aspect of Vigilant Spirit, including the operator, the automation, and the system software. The requirements diagram levied a minimum system-level performance requirement. The parametric diagram further defined the performance and specification requirements, along with the automation's scan settings, through the use of constraints. It was unclear from studying the SysML diagrams which automation setting would produce the best results, or whether any could meet the performance requirement. Existing system models were insufficient by themselves to evaluate these trades; thus, IMPRINT was used to perform a trade study to determine the effects of each of the automation options on overall system performance.

The results of the trade study revealed that all six automation conditions significantly improved performance scores over the baseline, but only two significantly improved workload. Once the trade study identified the preferred alternative, the results were integrated into the existing system diagrams. Originally system focused, the SysML diagrams were updated to reflect the results of the trade analysis. The result is a set of integrated diagrams that accounts for both the system and the human, which may then be used to better inform system design. By using human performance and workload modeling tools such as IMPRINT to perform tradeoff analyses, human factors engineers can obtain data about the human subsystem early in system design. These data may then be integrated into the existing SysML diagrams applied by systems engineers. In so doing, additional insights into the whole system can be gained that would not be possible if human factors and systems engineers worked independently. Thus, the human is incorporated into the system's design and the total system performance may be predicted, achieving a successful HSI process.
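The kind of trade study described above can be sketched outside a dedicated tool such as IMPRINT. The Python example below is illustrative only: the scan settings, detection probabilities, timing values, and scoring rule are invented assumptions for demonstration, not the parameters, algorithms, or models used in the study. It simply shows how competing automation options that trade accuracy against speed can be compared with a simple Monte Carlo mission simulation.

```python
import random

# Hypothetical scan-automation settings trading accuracy against speed.
# Values are invented for illustration and do not come from the study.
SCAN_SETTINGS = {
    "baseline_manual": {"detect_prob": 0.70, "scan_time_s": 12.0},
    "algo_fast":       {"detect_prob": 0.75, "scan_time_s": 4.0},
    "algo_balanced":   {"detect_prob": 0.85, "scan_time_s": 6.0},
    "algo_accurate":   {"detect_prob": 0.93, "scan_time_s": 9.0},
}

def simulate_mission(setting, n_targets=40, mission_time_s=300.0, seed=None):
    """Simulate one surveillance mission.

    Targets are scanned until mission time runs out; the performance score
    is the fraction of all targets correctly detected.
    """
    rng = random.Random(seed)
    elapsed, detected = 0.0, 0
    for _ in range(n_targets):
        if elapsed + setting["scan_time_s"] > mission_time_s:
            break  # out of mission time before every target was scanned
        elapsed += setting["scan_time_s"]
        if rng.random() < setting["detect_prob"]:
            detected += 1
    return detected / n_targets

def trade_study(n_runs=1000):
    """Average mission performance for each automation setting."""
    return {
        name: sum(simulate_mission(s, seed=i) for i in range(n_runs)) / n_runs
        for name, s in SCAN_SETTINGS.items()
    }

if __name__ == "__main__":
    for name, score in trade_study().items():
        print(f"{name:16s} mean performance = {score:.3f}")
```

In this toy version, the fast but less accurate setting can outperform the slow, accurate one simply because it scans more targets within the mission window, which is the essence of the accuracy/speed trade the abstract describes.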


Author(s):  
Fjollë Novakazi
Mikael Johansson
Helena Strömberg
MariAnne Karlsson

Extant levels-of-automation (LoA) taxonomies describe variations in function allocation between the driver and the driving automation system (DAS) from a technical perspective. However, these taxonomies miss important human factors issues, and when design decisions are based on them, the resulting interaction design leaves users confused. The aim of this paper is therefore to describe how users perceive different DASs, drawing on insights from an empirical driving study that used a Wizard-of-Oz approach, in which 20 participants were interviewed after experiencing systems at two different LoAs under real driving conditions. The findings show that participants described the DAS in terms of relationships and dependencies among three elements: the context (traffic conditions, road types), the vehicle (abilities, limitations, vehicle operations), and the driver (control, attentional demand, interaction with displays and controls, operation of the vehicle), each with associated aspects that indicate what users find relevant when describing a vehicle with automated systems. Based on these findings, a conceptual model is proposed by which designers can differentiate LoAs from a human-centric perspective and that can aid in the development of design guidelines for driving automation.


Author(s):  
Shane T. Mueller
Lamia Alam
Gregory J. Funke
Anne Linja
Tauseef Ibne Mamun
...  

In many human performance tasks, researchers assess performance by measuring both accuracy and response time. A number of theoretical and practical approaches have been proposed to obtain a single performance value that combines these measures, with varying degrees of success. In this report, we examine data from a common paradigm used in applied human factors assessment: a go/no-go vigilance task (Smith et al., 2019). We examined whether 12 different performance measures were sensitive to the vigilance decrement induced by the design, and how the measures were correlated. Results suggest that most combined measures were slight improvements over accuracy or response time alone, with the most sensitive and representative result coming from the Linear Ballistic Accumulator model. Practical lessons for applying these measures are discussed.
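Two widely used combined measures of this kind are the inverse efficiency score (mean correct response time divided by proportion correct) and the rate-correct score (correct responses per second of total response time). The Python sketch below computes both from invented trial data; it is not drawn from the paper's dataset, and the 12 measures the authors compared may or may not include these two.

```python
from statistics import mean

def inverse_efficiency_score(rts_correct, accuracy):
    """Inverse efficiency score (IES): mean correct RT / proportion correct.
    Lower values indicate better combined performance."""
    return mean(rts_correct) / accuracy

def rate_correct_score(n_correct, all_rts):
    """Rate-correct score (RCS): correct responses per second of total RT.
    Higher values indicate better combined performance."""
    return n_correct / sum(all_rts)

# Hypothetical go/no-go trial data for two observers (RTs in seconds).
observers = {
    "obs_1": {"rts": [0.42, 0.51, 0.47, 0.55, 0.60], "correct": [1, 1, 1, 0, 1]},
    "obs_2": {"rts": [0.38, 0.40, 0.44, 0.39, 0.41], "correct": [1, 0, 1, 0, 1]},
}

for name, d in observers.items():
    acc = sum(d["correct"]) / len(d["correct"])
    rts_correct = [rt for rt, c in zip(d["rts"], d["correct"]) if c]
    ies = inverse_efficiency_score(rts_correct, acc)
    rcs = rate_correct_score(sum(d["correct"]), d["rts"])
    print(f"{name}: accuracy={acc:.2f}, IES={ies:.3f} s, RCS={rcs:.2f} correct/s")
```

Scores like these illustrate the core issue the paper raises: each way of collapsing accuracy and response time into one number weights speed and errors differently, so the choice of measure can change which condition or observer appears to perform best.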


Data in Brief
2017
Vol 15
pp. 213-215
Author(s):
Mashrura Musharraf
Jennifer Smith
Faisal Khan
Brian Veitch
Scott MacKinnon
