Flight deck information automation: A human-in-the-loop in-trail procedure simulation study

Author(s): Emmanuel Letsu-Dake, William Rogers, Stephen D. Whitlow, Erik Nelson, Michael Dillard, ...

Author(s): Timothy J. Etherington, Lynda J. Kramer, Laura Smith-Velazquez, Maarten Uijt de Haag

Author(s): Kellie D. Kennedy, Chad L. Stephens, Ralph A. Williams, Paul C. Schutte

The study reported herein is a subset of a larger investigation on the role of automation in the context of the flight deck and used a fixed-base, human-in-the-loop simulator. This portion explored the relationship between automation and inattentional blindness (IB) occurrences under repeated induction, using two types of runway incursions directly relevant to primary task performance. Sixty non-pilot participants performed the final five minutes of a landing scenario twice in one of three automation conditions: full automation (FA), partial automation (PA), and no automation (NA). The first induction resulted in a 70% detection failure rate and the second in a 50% detection failure rate. Detection improved in all conditions. By IB group membership (IB vs. Detection), the FA condition showed the most improvement and rated the Mental Demand and Effort subscales of the NASA-TLX significantly higher at Time 2 compared to Time 1. Participants in the FA condition used the experience of IB exposure to reallocate attentional resources and improve task performance. These findings support the role of engagement in attention decrement and the consideration of attentional failure causation when selecting appropriate mitigation strategies.


2022, Vol 100, pp. 103670
Author(s): Richard J. Simonson, Joseph R. Keebler, Elizabeth L. Blickensderfer, Ron Besuijen

2006, Vol 11(1), pp. 12-24
Author(s): Alexander von Eye

At the level of manifest categorical variables, a large number of coefficients and models for examining rater agreement have been proposed and used. The most popular of these is Cohen's κ. In this article, a new coefficient, κ_s, is proposed as an alternative measure of rater agreement. Both κ and κ_s allow researchers to determine whether agreement in groups of two or more raters is significantly beyond chance. Stouffer's z is used to test the null hypothesis that κ_s = 0. In addition to evaluating rater agreement in a fashion parallel to κ, the coefficient κ_s allows one to (1) examine subsets of cells in agreement tables, (2) examine cells that indicate disagreement, (3) consider alternative chance models, (4) take covariates into account, and (5) compare independent samples. Results from a simulation study are reported, which suggest that (a) the four measures of rater agreement, Cohen's κ, Brennan and Prediger's κ_n, raw agreement, and κ_s, are sensitive to the same data characteristics when evaluating rater agreement and (b) both the z-statistic for Cohen's κ and Stouffer's z for κ_s are unimodally and symmetrically distributed, but slightly heavy-tailed. Examples use data from verbal processing and applicant selection.
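The abstract above builds on two standard quantities: Cohen's κ, the chance-corrected agreement between two raters, and Stouffer's z, a way to combine independent z-statistics. The sketch below shows only these two standard constructions in plain Python; it is not the article's proposed κ_s, and the function names are illustrative.

```python
# Minimal sketch: Cohen's kappa for two raters over categorical labels,
# and Stouffer's method for combining independent z-statistics.
from collections import Counter
from math import sqrt

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters on the same items."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed proportion of items on which the raters agree.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # Expected agreement under independence (the usual chance model).
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

def stouffer_z(z_scores):
    """Combine k independent z-statistics into one overall z."""
    return sum(z_scores) / sqrt(len(z_scores))
```

Perfect agreement gives κ = 1, and agreement at exactly the chance rate gives κ = 0; Stouffer's combination of k identical z-scores equals z·√k, which is how the article pools evidence across tests of κ_s = 0.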

