Modeling of Situation Awareness Supported by Advanced Flight Deck Displays

Author(s):  
Christopher D. Wickens,
Angelia Sebok,
Timothy Bagnall,
Jill Kamienski

A two-module computational model of situation awareness (SA) is presented. One module, characterizing stage 1 (noticing) SA, is based on the SEEV model of selective attention in complex environments and consists of components of Salience (capturing attention), Effort (inhibiting attention movement), Expectancy (for events along a channel), and Value (of attending to those events). These components are combined additively and accurately predict visual scanning on the flight deck and in driving. The second module, characterizing stage 2 (understanding) SA, results from the integration of noticed information and its decay if unattended. We briefly describe the application and validation of the attention module to pilot scanning of the synthetic vision system display suite in aviation and, in more detail, its application to predicting differences in situation awareness supported by three formats of a wake vortex display designed to alert aircraft pilots to dangers in the flight path ahead.
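As a concrete illustration of the attention module, the sketch below computes a predicted distribution of visual dwell time across display channels from additive SEEV scores. This is a minimal Python sketch, not the authors' implementation: the channel names and the 0-3 ratings are hypothetical, and all term weights are set to 1, whereas the full model allows each component to be weighted.

```python
# Hypothetical channels with 0-3 ratings for Salience (S), Effort (EF),
# Expectancy (EX), and Value (V); the values are illustrative only.
channels = {
    "primary_flight_display": {"S": 1, "EF": 0, "EX": 3, "V": 3},
    "navigation_display":     {"S": 1, "EF": 1, "EX": 2, "V": 2},
    "wake_vortex_display":    {"S": 2, "EF": 2, "EX": 1, "V": 3},
    "out_the_window":         {"S": 0, "EF": 1, "EX": 2, "V": 2},
}

def seev_scan_distribution(channels):
    """Predict the proportion of visual dwell time on each channel.

    Each channel's score adds Salience, Expectancy, and Value and subtracts
    Effort (which inhibits attention movement); scores are clamped at zero
    and normalized to sum to 1, like observed dwell-time percentages.
    """
    scores = {name: max(p["S"] - p["EF"] + p["EX"] + p["V"], 0.0)
              for name, p in channels.items()}
    total = sum(scores.values()) or 1.0
    return {name: score / total for name, score in scores.items()}

for name, share in seev_scan_distribution(channels).items():
    print(f"{name}: {share:.0%}")
```

Predicted shares from such a run can then be compared against observed scanning percentages, which is how the attention module is validated against eye-movement data.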

2005
Author(s):
Gloria L. Calhoun,
Mark H. Draper,
Michael F. Abernathy,
Michael Patzek,
Francisco Delgado

2004
Author(s):
Michael D. Byrne,
Alex Kirlik,
Michael D. Fleetwood,
David G. Huss,
Alex Kosorukoff,
...  

Author(s):  
Miguel Lozano,
Rafael Lucia,
Fernando Barber,
Fran Grimaldo,
Antonio Lucas,
...  

Author(s):  
Christopher D. Wickens,
Linda Onnasch,
Angelia Sebok,
Dietrich Manzey

Objective The aim was to evaluate the relevance of the critique offered by Jamieson and Skraaning (2019) regarding the applicability of the lumberjack effect of human–automation interaction to complex real-world settings. Background The lumberjack effect, based upon a meta-analysis, identifies the consequences of a higher degree of automation: performance is improved and workload reduced when the automation functions as intended, but performance is degraded more, mediated by a loss of situation awareness (SA), when the automation fails. Jamieson and Skraaning provide data from a process control scenario that they assert contradicts the effect. Approach We analyzed key aspects of their simulation, measures, and results which, we argue, limit the strength of their conclusion that the lumberjack effect is not applicable to complex real-world systems. Results Our analysis revealed several limitations of their study: an inappropriate choice of automation, the lack of a routine performance measure, subjective measures from the operators that actually supported the lumberjack effect, an inappropriate assessment of SA, and a possible limitation of statistical power. Conclusion We regard these limitations as reasons to temper the authors' strong conclusion that the lumberjack effect does not apply to complex environments. Their findings should instead serve as an impetus for further research on human–automation interaction in these domains. Applications The collective findings of Jamieson and Skraaning and of our study are applicable to system designers and users in deciding upon the appropriate level of automation to deploy.


2002
Author(s):
Norah K. Link,
Ronald V. Kruk,
David McKay,
Sion A. Jennings,
Greg Craig

1999
Author(s):
Andrew K. Barrows,
Keith W. Alter,
Chad W. Jennings,
J. D. Powell

2011
Author(s):
Hiroka Tsuda,
Kohei Funabiki,
Tomoko Iijima,
Kazuho Tawada,
Takashi Yoshida

Sensors
2019
Vol. 19 (17)
pp. 3802
Author(s):
Ahmed F. Fadhil,
Raghuveer Kanneganti,
Lalit Gupta,
Henry Eberle,
Ravi Vaidyanathan

Networked operation of unmanned air vehicles (UAVs) demands fusion of information from disparate sources for accurate flight control. In this investigation, a novel sensor fusion architecture is introduced for detecting aircraft runways and horizons and for enhancing awareness of the surrounding terrain, based on the fusion of enhanced vision system (EVS) and synthetic vision system (SVS) images. EVS and SVS image fusion has yet to be implemented in real-world situations because of signal misalignment; we address this with a registration step that aligns the EVS and SVS images. Four fusion rules combining discrete wavelet transform (DWT) sub-bands are formulated, implemented, and evaluated. The resulting procedure is tested on real EVS-SVS image pairs and on pairs containing simulated turbulence. Evaluations reveal that runways and horizons can be detected accurately even in poor visibility. Furthermore, it is demonstrated that different aspects of the EVS and SVS images can be emphasized by using different DWT fusion rules. The procedure is autonomous throughout landing, irrespective of weather. The fusion architecture developed in this study holds promise for incorporation into manned head-up displays (HUDs) and UAV remote displays to assist pilots in landing aircraft in poor lighting and varying weather. The algorithm also provides a basis for rule selection in other signal fusion applications.
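The sketch below illustrates one generic DWT fusion rule of the kind the abstract describes: it decomposes two pre-registered EVS and SVS frames, averages the approximation sub-band, and keeps the larger-magnitude detail coefficients. It is an assumed example using the PyWavelets library, not one of the paper's four rules; the wavelet choice, decomposition level, and the fuse_dwt name are hypothetical.

```python
import numpy as np
import pywt  # PyWavelets

def fuse_dwt(evs_img, svs_img, wavelet="db2", level=2):
    """Fuse two registered, same-sized grayscale images via DWT sub-bands.

    Approximation coefficients are averaged to preserve overall luminance;
    for each detail sub-band, the coefficient with the larger magnitude is
    kept, preserving the strongest edges (e.g., runway outlines, the horizon).
    """
    ce = pywt.wavedec2(np.asarray(evs_img, dtype=float), wavelet, level=level)
    cs = pywt.wavedec2(np.asarray(svs_img, dtype=float), wavelet, level=level)

    fused = [(ce[0] + cs[0]) / 2.0]            # averaged approximation band
    for bands_e, bands_s in zip(ce[1:], cs[1:]):
        fused.append(tuple(                    # (horizontal, vertical, diagonal)
            np.where(np.abs(a) >= np.abs(b), a, b)
            for a, b in zip(bands_e, bands_s)
        ))
    return pywt.waverec2(fused, wavelet)
```

In a full pipeline, the registration step would run before this call so that the two frames are pixel-aligned; swapping the per-band rules (e.g., weighted averaging of detail coefficients) is how different aspects of the EVS or SVS source can be emphasized.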

