Characterizing missed identifications and errors in latent fingerprint comparisons using eye-tracking data

PLoS ONE ◽  
2021 ◽  
Vol 16 (5) ◽  
pp. e0251674
Author(s):  
Thomas A. Busey ◽  
Nicholas Heise ◽  
R. Austin Hicklin ◽  
Bradford T. Ulery ◽  
JoAnn Buscaglia

Latent fingerprint examiners sometimes come to different conclusions when comparing fingerprints, and eye-gaze behavior may help explain these outcomes. Missed identifications (missed IDs) are inconclusive, exclusion, or No Value determinations reached when the consensus of other examiners is an identification. To determine the relation between examiner behavior and missed IDs, we collected eye-gaze data from 121 latent print examiners as they completed a total of 1444 difficult (latent-exemplar) comparisons. We extracted metrics from the gaze data that serve as proxies for underlying perceptual and cognitive capacities. We used these metrics to characterize potential mechanisms of missed IDs: Cursory Comparison and Mislocalization. We find that missed IDs are associated with shorter comparison times, fewer regions visited, and fewer attempted correspondences between the compared images. Latent print comparisons resulting in erroneous exclusions (a subset of missed IDs) are also more likely to have fixations in different regions and less accurate correspondence attempts than comparisons resulting in identifications. We also use our derived metrics to describe one atypical examiner who made six erroneous identifications, four of which were on comparisons intended to be straightforward exclusions. The present work helps identify the degree to which missed IDs can be explained using eye-gaze behavior, and the extent to which missed IDs depend on cognitive and decision-making factors outside the domain of eye-tracking methodologies.
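The abstract does not define its gaze metrics precisely; the following is a minimal sketch of how such fixation-derived proxies (total comparison time, number of distinct regions visited) might be computed from raw fixation records. The `Fixation` structure and the region-labeling scheme are illustrative assumptions, not the authors' actual pipeline.

```python
from dataclasses import dataclass

@dataclass
class Fixation:
    start_ms: float   # fixation onset
    end_ms: float     # fixation offset
    region: str       # hypothetical grid-cell label, e.g. "latent:C4"

def comparison_time_ms(fixations):
    """Elapsed time between the first and last fixation of one comparison
    (assumes fixations are ordered chronologically)."""
    return fixations[-1].end_ms - fixations[0].start_ms

def regions_visited(fixations):
    """Number of distinct image regions that received at least one fixation."""
    return len({f.region for f in fixations})

# Illustrative use: short comparison times and few regions visited would be
# flagged under the "Cursory Comparison" mechanism described in the abstract.
fixs = [Fixation(0, 250, "latent:A1"),
        Fixation(300, 600, "exemplar:A1"),
        Fixation(650, 900, "latent:B2")]
print(comparison_time_ms(fixs), regions_visited(fixs))
```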

2021 ◽  
Vol 15 ◽  
Author(s):  
Lisa-Marie Vortmann ◽  
Jannes Knychalla ◽  
Sonja Annerer-Walcher ◽  
Mathias Benedek ◽  
Felix Putze

Several previous studies have shown that conclusions about the human mental state can be drawn from eye-gaze behavior. For this reason, eye-tracking recordings are suitable as input data for attentional state classifiers. In current state-of-the-art studies, the extracted eye-tracking feature set usually consists of descriptive statistics about specific eye-movement characteristics (i.e., fixations, saccades, blinks, vergence, and pupil dilation). We suggest an Imaging Time Series approach for eye-tracking data, followed by classification using a convolutional neural network, to improve the classification accuracy. We compared multiple algorithms that used the one-dimensional statistical summary feature set as input with two different implementations of the newly suggested method for three different data sets that target different aspects of attention. The results show that our two-dimensional image features with the convolutional neural network outperform the classical classifiers for most analyses, especially regarding generalization over participants and tasks. We conclude that current attentional state classifiers based on eye tracking can be optimized by adjusting the feature set while requiring less feature engineering. Our future work will focus on a more detailed and better-suited investigation of this approach for other scenarios and data sets.
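The abstract does not specify which imaging transform is used; as one common option, the sketch below computes a Gramian Angular Summation Field with plain NumPy, turning a one-dimensional gaze signal into a two-dimensional image suitable as a CNN input channel. The signal, its length, and the scaling are illustrative assumptions, not the paper's exact choice.

```python
import numpy as np

def gramian_angular_field(x):
    """Gramian Angular Summation Field of a 1-D signal (one standard
    'imaging time series' transform; the paper's exact transform may differ)."""
    x = np.asarray(x, dtype=float)
    # Rescale into [-1, 1] so the angular encoding via arccos is defined.
    x = 2 * (x - x.min()) / (x.max() - x.min()) - 1
    phi = np.arccos(np.clip(x, -1.0, 1.0))        # angular encoding
    return np.cos(phi[:, None] + phi[None, :])    # N x N GASF image

# Example: a hypothetical horizontal gaze-position trace of 64 samples
trace = np.sin(np.linspace(0, 4 * np.pi, 64)) + 0.1 * np.random.randn(64)
image = gramian_angular_field(trace)
print(image.shape)  # (64, 64), ready to stack as a CNN input channel
```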


Author(s):  
Peter Bickmann ◽  
Konstantin Wechsler ◽  
Kevin Rudolf ◽  
Chuck Tholl ◽  
Ingo Froböse ◽  
...  

In traditional sports like soccer or tennis, experts benefit from better anticipation abilities than novices, supported by more efficient gaze behavior. For electronic sports (eSports), this area is largely unexplored, although quick decision making, which is linked to gaze behavior, is considered fundamental in eSports. In this study, the gaze behavior of professional and non-professional eSports players (n = 21, 23.4 ± 3.3 years) was recorded via eye tracking in the sports simulation FIFA 19. Number, duration, and location of fixations were compared over an entire match and in offensive play situations. Except for fixation location, no significant differences were found. In terms of fixation number and duration, both groups mainly fixated the same objects, but professionals had significantly more fixations on the in-game radar and fixated off-ball teammates for significantly shorter durations. Given these limited differences, gaze behavior does not appear to be a decisive factor for excellent performance in FIFA 19.
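As a concrete illustration of the kind of comparison described above, this sketch aggregates fixation count and mean duration per area of interest (AOI) for each player; the AOI labels and data layout are assumptions for exposition, not the study's actual coding scheme.

```python
from collections import defaultdict

def aoi_stats(fixations):
    """fixations: list of (aoi_label, duration_ms) tuples for one player.
    Returns {aoi: (fixation_count, mean_duration_ms)}."""
    by_aoi = defaultdict(list)
    for aoi, dur in fixations:
        by_aoi[aoi].append(dur)
    return {aoi: (len(d), sum(d) / len(d)) for aoi, d in by_aoi.items()}

# Hypothetical data for one professional and one non-professional player
pro = [("radar", 180), ("radar", 200), ("ball_carrier", 350)]
amateur = [("ball_carrier", 400), ("off_ball_teammate", 450)]
print(aoi_stats(pro))
print(aoi_stats(amateur))
```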


Author(s):  
Abner Cardoso da Silva ◽  
Cesar A. Sierra-Franco ◽  
Greis Francy M. Silva-Calpa ◽  
Felipe Carvalho ◽  
Alberto Barbosa Raposo

2011 ◽  
Vol 40 (594) ◽  
Author(s):  
Susanne Bødker

Dual eye-tracking (DUET) is a promising methodology to study and support collaborative work. The method consists of simultaneously recording the gaze of two collaborators working on a common task. The main themes addressed in the workshop are eye-tracking methodology (how to translate gaze measures into descriptions of joint action, how to measure and model gaze alignment between collaborators, how to address task specificity inherent to eye-tracking data) and, more generally, future applications of dual eye-tracking in CSCW. The DUET workshop will bring together scholars who currently develop the approach as well as a larger audience interested in applications of eye-tracking in collaborative situations. The workshop format will combine paper presentations and discussions. The papers are available online as PDF documents at http://www.dualeyetracking.org/DUET2011/.
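One common way to "translate gaze measures into descriptions of joint action", as the workshop themes put it, is lagged cross-recurrence: the proportion of time the two collaborators' gaze points fall within a small distance of each other at a given temporal offset. The sketch below illustrates that general idea; it is not a method prescribed by the workshop, and the radius and lag values are assumptions.

```python
import numpy as np

def cross_recurrence(gaze_a, gaze_b, lag=0, radius_px=80):
    """Fraction of synchronized samples at which the two gaze points lie
    within `radius_px` of each other, with gaze_b shifted by `lag` samples.
    gaze_a, gaze_b: arrays of shape (N, 2), screen coordinates in pixels."""
    if lag > 0:
        a, b = gaze_a[lag:], gaze_b[:-lag]
    elif lag < 0:
        a, b = gaze_a[:lag], gaze_b[-lag:]
    else:
        a, b = gaze_a, gaze_b
    dist = np.linalg.norm(a - b, axis=1)
    return float(np.mean(dist < radius_px))

# Hypothetical dyad: 1000 gaze samples on a shared 1920x1080 screen
rng = np.random.default_rng(0)
a = rng.uniform([0, 0], [1920, 1080], size=(1000, 2))
b = a + rng.normal(0, 50, size=(1000, 2))   # partner roughly follows
print(cross_recurrence(a, b, lag=10))
```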


2020 ◽  
Vol 11 ◽  
Author(s):  
Hikari Koyasu ◽  
Takefumi Kikusui ◽  
Saho Takagi ◽  
Miho Nagasawa

Dogs (Canis familiaris) and cats (Felis silvestris catus) have been domesticated through different processes. Dogs were the first domesticated animals, cooperating with humans in hunting and guarding. In contrast, cats were domesticated as predators of rodents and lived near human habitations when humans began to settle and farm. Although the domestication of dogs followed a different path from that of cats, and the two have ancestors of a different nature, both have been broadly integrated into—and profoundly impacted—human society. The coexistence between dogs/cats and humans is based on non-verbal communication. This review focuses on “gaze,” an important signal for humans, and describes the communicative function of dogs’ and cats’ eye-gaze behavior with humans. We discuss how the function of the gaze goes beyond communication to mutual emotional connection, namely “bond” formation. Finally, we present a research approach to multimodal interactions between dogs/cats and humans that participate in communication and bond formation.


2018 ◽  
Vol 38 (6) ◽  
pp. 658-672 ◽  
Author(s):  
Caroline Vass ◽  
Dan Rigby ◽  
Kelly Tate ◽  
Andrew Stewart ◽  
Katherine Payne

Background. Discrete choice experiments (DCEs) are increasingly used to elicit preferences for benefit-risk tradeoffs. The primary aim of this study was to explore how eye-tracking methods can be used to understand DCE respondents’ decision-making strategies. A secondary aim was to explore whether the presentation and communication of risk affected respondents’ choices. Method. Two versions of a DCE were designed to understand the preferences of female members of the public for breast screening; the versions varied in how risk attributes were presented. Risk was communicated as either 1) percentages or 2) icon arrays and percentages. Eye-tracking equipment recorded eye movements 1000 times a second. A debriefing survey collected sociodemographics and self-reported attribute nonattendance (ANA) data. A heteroskedastic conditional logit model analyzed DCE data. Eye-tracking data on pupil size, direction of motion, and total visual attention (dwell time) to predefined areas of interest were analyzed using ordinary least squares regressions. Results. Forty women completed the DCE with eye-tracking. There was no statistically significant difference in attention (fixations) to attributes between the risk communication formats. Respondents completing either version of the DCE with the alternatives presented in columns made more horizontal (left-right) saccades than vertical (up-down) saccades. Eye-tracking data confirmed self-reported ANA to the risk attributes, with a 40% reduction in mean dwell time to the “probability of detecting a cancer” (P = 0.001) and a 25% reduction to the “risk of unnecessary follow-up” (P = 0.008). Conclusion. This study is one of the first to show how eye-tracking can be used to understand responses to a health care DCE, and it highlighted the potential impact of risk communication on respondents’ decision-making strategies. The results suggested that self-reported ANA to cost attributes may not be reliable.
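A minimal sketch of the kind of dwell-time analysis named above, assuming a hypothetical table with one row per respondent for a single risk attribute: dwell time is regressed on a self-reported non-attendance indicator with ordinary least squares. The column names, effect sizes, and simulated data are assumptions, not the authors' specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical per-respondent data: dwell time (ms) on a risk attribute and
# whether the respondent self-reported ignoring that attribute (ANA).
rng = np.random.default_rng(1)
n = 40
ana = rng.integers(0, 2, n)                        # 1 = self-reported non-attendance
dwell = 2000 - 800 * ana + rng.normal(0, 300, n)   # ANA respondents dwell less
df = pd.DataFrame({"dwell_ms": dwell, "ana": ana})

# OLS of dwell time on the ANA indicator; a negative, significant coefficient
# would corroborate the self-report, as in the pattern the abstract describes.
model = smf.ols("dwell_ms ~ ana", data=df).fit()
print(model.params)
print(model.pvalues)
```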


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Jessica Dawson ◽  
Alan Kingstone ◽  
Tom Foulsham

People are drawn to social, animate things more than inanimate objects. Previous research has also shown gaze following in humans, a process that has been linked to theory of mind (ToM). In three experiments, we investigated whether animacy and ToM are involved when making judgements about the location of a cursor in a scene. In Experiment 1, participants were told that this cursor represented the gaze of an observer and were asked to decide whether the observer was looking at a target object. This task is similar to that carried out by researchers manually coding eye-tracking data. The results showed that participants were biased to perceive the gaze cursor as directed towards animate objects (faces) compared to inanimate objects. In Experiments 2 and 3 we tested the role of ToM, by presenting the same scenes to new participants but now with the statement that the cursor was generated by a ‘random’ computer system or by a computer system designed to seek targets. The bias to report that the cursor was directed toward faces was abolished in Experiment 2, and minimised in Experiment 3. Together, the results indicate that people attach minds to the mere representation of an individual's gaze, and this attribution of mind influences what people believe an individual is looking at.


2020 ◽  
Vol 10 (13) ◽  
pp. 4508 ◽  
Author(s):  
Armel Quentin Tchanou ◽  
Pierre-Majorique Léger ◽  
Jared Boasen ◽  
Sylvain Senecal ◽  
Jad Adam Taher ◽  
...  

Gaze convergence of multiuser eye movements during simultaneous collaborative use of a shared system interface has been proposed as an important albeit sparsely explored construct in human-computer interaction literature. Here, we propose a novel index for measuring the gaze convergence of user dyads and address its validity through two consecutive eye-tracking studies. Eye-tracking data of user dyads were synchronously recorded while they simultaneously performed tasks on shared system interfaces. Results indicate the validity of the proposed gaze convergence index for measuring the gaze convergence of dyads. Moreover, as expected, our gaze convergence index was positively associated with dyad task performance and negatively associated with dyad cognitive load. These results suggest the utility of (theoretical or practical) applications such as synchronized gaze convergence displays in diverse settings. Further research perspectives, particularly into the construct’s nomological network, are warranted.
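The abstract does not define the proposed index. As an illustrative stand-in only, the sketch below computes a simple convergence score from synchronized dyad gaze streams by mapping the mean inter-gaze distance onto [0, 1]; the formula, screen size, and data are assumptions and do not reproduce the authors' index.

```python
import numpy as np

def convergence_index(gaze_a, gaze_b, screen=(1920, 1080)):
    """Toy gaze convergence score in [0, 1]: 1 when both users look at the
    same point at every sample, approaching 0 as their gaze points diverge.
    gaze_a, gaze_b: arrays of shape (N, 2), synchronized pixel coordinates."""
    diag = np.hypot(*screen)   # screen diagonal, used to normalize distances
    dist = np.linalg.norm(np.asarray(gaze_a) - np.asarray(gaze_b), axis=1)
    return float(1.0 - np.mean(dist) / diag)

# Hypothetical dyad whose gaze points mostly coincide
a = np.array([[400, 300], [410, 310], [900, 500]])
b = np.array([[405, 298], [430, 330], [880, 520]])
print(convergence_index(a, b))
```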


Sensors ◽  
2021 ◽  
Vol 21 (22) ◽  
pp. 7668
Author(s):  
Niharika Kumari ◽  
Verena Ruf ◽  
Sergey Mukhametov ◽  
Albrecht Schmidt ◽  
Jochen Kuhn ◽  
...  

Remote eye tracking has become an important tool for the online analysis of learning processes. Mobile eye trackers can even extend the range of opportunities (in comparison to stationary eye trackers) to real settings, such as classrooms or experimental lab courses. However, the complex and sometimes manual analysis of mobile eye-tracking data often hinders the realization of extensive studies, as this is a very time-consuming process and usually not feasible for real-world situations in which participants move or manipulate objects. In this work, we explore the opportunities to use object recognition models to assign mobile eye-tracking data to real objects during an authentic students’ lab course. In a comparison of three different Convolutional Neural Networks (CNNs), a Faster Region-Based CNN, You Only Look Once (YOLO) v3, and YOLO v4, we found that YOLO v4, together with an optical flow estimation, provides the fastest results with the highest accuracy for object detection in this setting. The automatic assignment of the gaze data to real objects simplifies the time-consuming analysis of mobile eye-tracking data and offers an opportunity for real-time system responses to the user’s gaze. Additionally, we identify and discuss several problems in using object detection for mobile eye-tracking data that need to be considered.
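Once a detector has produced bounding boxes for a video frame, the assignment step described above reduces to a point-in-box test. The sketch below illustrates that assignment under an assumed generic detection format (label, confidence, box corners); it does not reproduce the authors' YOLO v4 pipeline or their optical-flow step.

```python
from typing import List, Optional, Tuple

# A detection as (label, confidence, x_min, y_min, x_max, y_max) in frame pixels.
Detection = Tuple[str, float, float, float, float, float]

def assign_gaze(gaze_x: float, gaze_y: float,
                detections: List[Detection]) -> Optional[str]:
    """Return the label of the most confident detected object whose bounding
    box contains the gaze point, or None if the gaze hits no object."""
    hits = [d for d in detections
            if d[2] <= gaze_x <= d[4] and d[3] <= gaze_y <= d[5]]
    if not hits:
        return None
    return max(hits, key=lambda d: d[1])[0]

# Hypothetical lab-course frame: two objects detected, gaze lands on one
frame_detections = [("multimeter", 0.93, 100, 200, 300, 400),
                    ("oscilloscope", 0.88, 500, 150, 900, 600)]
print(assign_gaze(180.0, 320.0, frame_detections))  # -> "multimeter"
```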


2021 ◽  
Vol 11 (19) ◽  
pp. 8794
Author(s):  
Yen-Nan Lin ◽  
Jun Wang ◽  
Yu Su ◽  
I-Lin Wang

Background: The purpose of this study was to use eye-movement information to explore the gaze behavior of tennis players of different skill levels when receiving serves. Methods: Players were divided by skill level into group A (experts, with more than 10 years of playing experience) and group B (novices, with less than 2 years of playing experience). We compared the gaze behavior of groups A and B at seven positions (head–shoulders, trunk–hips, arm–hand, leg–foot, racket, ball, and racket–ball contact area) using the Eye-gaze Response Interface Computer Aid (ERICA) device. Data were analyzed using two-way ANOVA. Results: Compared with the novices, the experts had more gaze time on the head–shoulders, racket, and ball when receiving forehand serves (p < 0.01). The experts also had more gaze time on the head–shoulders, trunk–hips, racket, ball, and racket–ball contact area when receiving backhand serves (p < 0.05). Conclusions: Expert athletes gaze longer at specific positions that mainly determine the direction of the ball. Tennis coaches can train players to increase gaze time at these four positions and thereby improve their ability to predict the direction of the ball.
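A minimal sketch of the two-way ANOVA named above, using statsmodels on a hypothetical long-format table of gaze times by skill group and gaze position; the column names, factor levels, and simulated values are illustrative assumptions, not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per player-by-position observation
rng = np.random.default_rng(2)
groups = np.repeat(["expert", "novice"], 70)
positions = np.tile(["head_shoulders", "trunk_hips", "arm_hand", "leg_foot",
                     "racket", "ball", "contact_area"], 20)
gaze_time = rng.normal(500, 80, 140) + np.where(groups == "expert", 60, 0)
df = pd.DataFrame({"group": groups, "position": positions,
                   "gaze_time_ms": gaze_time})

# Two-way ANOVA: main effects of group and position plus their interaction
model = smf.ols("gaze_time_ms ~ C(group) * C(position)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```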

