Using eye tracking to compare how adults and children learn to use an unfamiliar computer game

Author(s):  
Marco Pretorius ◽  
Helene Gelderblom ◽  
Bester Chimbo
2010 ◽  
Vol 38 (1) ◽  
pp. 222-234 ◽  
Author(s):  
Evan Kidd ◽  
Andrew J. Stewart ◽  
Ludovica Serratrice

In this paper we report on a visual world eye-tracking experiment that investigated the differing abilities of adults and children to use referential scene information during reanalysis to overcome lexical biases during sentence processing. The results showed that adults incorporated aspects of the referential scene into their parse as soon as it became apparent that a test sentence was syntactically ambiguous, suggesting they considered the two alternative analyses in parallel. In contrast, the children appeared not to reanalyze their initial analysis, even over shorter distances than have been investigated in prior research. We argue that this reflects the children's over-reliance on bottom-up, lexical cues to interpretation. The implications for the development of parsing routines are discussed.


2017 ◽  
Vol 167 ◽  
pp. 13-27 ◽  
Author(s):  
A.R. Weighall ◽  
L.M. Henderson ◽  
D.J. Barr ◽  
S.A. Cairney ◽  
M.G. Gaskell

Author(s):  
Tamara Wygnanski-Jaffe ◽  
Abraham Spierer ◽  
Michael Belkin ◽  
Dan Oz ◽  
Oren Yehezkel

Sensors ◽  
2020 ◽  
Vol 20 (7) ◽  
pp. 2101 ◽  
Author(s):  
Roberto Pierdicca ◽  
Marina Paolanti ◽  
Ramona Quattrini ◽  
Marco Mameli ◽  
Emanuele Frontoni

In the Cultural Heritage (CH) context, art galleries and museums employ technology devices to enhance and personalise the museum visit experience. However, the most challenging aspect is determining what the visitor is interested in. In this work, a novel Visual Attentive Model (VAM) is proposed that is learned from eye-tracking data. In particular, eye-tracking data of adults and children observing five paintings with similar characteristics were collected. The images were selected by CH experts: the three “Ideal Cities” (Urbino, Baltimore and Berlin), the Inlaid chest in the National Gallery of Marche, and the wooden panel with Marche view in the “Studiolo del Duca”. These pictures have been recognised by experts as having analogous features, thus providing coherent visual stimuli. The proposed method combines a new coordinate representation of eye sequences, obtained using Geometric Algebra, with a deep learning model that recognises people by the attention focus of their distinctive eye-movement patterns. Experiments comparing five Deep Convolutional Neural Networks (DCNNs) yielded high accuracy (more than 80%), demonstrating the effectiveness and suitability of the proposed approach in identifying adults and children among museum visitors.
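The pipeline described above (gaze sequence → coordinate representation → learned adult/child classifier) can be sketched in miniature. This is an illustrative sketch only: the paper uses a Geometric Algebra representation and five DCNNs, whereas the hand-crafted scanpath features, synthetic data, and logistic-regression stand-in below are assumptions introduced purely to show the shape of such a pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def scanpath_features(fixations):
    """fixations: (n, 3) array of (x, y, duration_ms).
    Hypothetical features: mean fixation duration, total scanpath
    length, and spatial spread of the fixations."""
    xy = fixations[:, :2]
    dur = fixations[:, 2]
    path_len = np.sum(np.linalg.norm(np.diff(xy, axis=0), axis=1))
    return np.array([dur.mean(), path_len, xy.std()])

def make_viewer(adult):
    """Synthetic stand-in data: adults get longer fixations and
    tighter scanpaths than children (an assumption, not a finding)."""
    n = 20
    spread = 30 if adult else 80
    dur_mu = 300 if adult else 180
    xy = rng.normal(200, spread, size=(n, 2))
    dur = rng.normal(dur_mu, 20, size=(n, 1))
    return np.hstack([xy, dur])

X = np.array([scanpath_features(make_viewer(a)) for a in [1] * 50 + [0] * 50])
y = np.array([1] * 50 + [0] * 50)

# Standardize features, then fit a logistic-regression classifier
# by plain gradient descent (stand-in for the paper's DCNNs).
X = (X - X.mean(0)) / X.std(0)
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    grad = p - y
    w -= 0.1 * X.T @ grad / len(y)
    b -= 0.1 * grad.mean()

acc = ((1 / (1 + np.exp(-(X @ w + b))) > 0.5) == y).mean()
```

On separable synthetic classes like these, even this stand-in classifies the two groups reliably; the paper's point is that real eye-movement patterns carry enough signal for deep models to do the same on genuine museum data.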


2004 ◽  
Vol 42 (1) ◽  
pp. 91-108 ◽  
Author(s):  
Chern-Sheng Lin ◽  
Chia-Chin Huan ◽  
Chao-Ning Chan ◽  
Mau-Shiun Yeh ◽  
Chuang-Chien Chiu

2020 ◽  
Vol 63 (7) ◽  
pp. 2245-2254 ◽  
Author(s):  
Jianrong Wang ◽  
Yumeng Zhu ◽  
Yu Chen ◽  
Abdilbar Mamat ◽  
Mei Yu ◽  
...  

Purpose The primary purpose of this study was to explore the audiovisual speech perception strategies adopted by normal-hearing and deaf people in processing familiar and unfamiliar languages. Our primary hypothesis was that they would adopt different perception strategies due to different sensory experiences at an early age, limitations of the physical device, the developmental gap in language, and other factors. Method Thirty normal-hearing adults and 33 prelingually deaf adults participated in the study. They were asked to perform judgment and listening tasks while watching videos of a Uygur–Mandarin bilingual speaker in a familiar language (Standard Chinese) or an unfamiliar language (Modern Uygur) while their eye movements were recorded by eye-tracking technology. Results Task had a slight influence on the distribution of selective attention, whereas subject and language had significant influences. To be specific, the normal-hearing and the deaf participants mainly gazed at the speaker's eyes and mouth, respectively, in the experiment; moreover, while the normal-hearing participants had to stare longer at the speaker's mouth when confronted with the unfamiliar language Modern Uygur, the deaf participants did not change their attention allocation pattern when perceiving the two languages. Conclusions Normal-hearing and deaf adults adopt different audiovisual speech perception strategies: Normal-hearing adults mainly look at the eyes, and deaf adults mainly look at the mouth. Additionally, language and task can also modulate the speech perception strategy.
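The gaze-allocation analysis described above (how much looking time falls on the speaker's eyes versus mouth) is typically computed as dwell-time proportions over areas of interest (AOIs). A minimal sketch, assuming hypothetical AOI rectangles and fixation data (none of these coordinates come from the study):

```python
# Hypothetical AOI rectangles on the video frame: (x_min, x_max, y_min, y_max).
AOIS = {
    "eyes":  (100, 300, 80, 140),
    "mouth": (140, 260, 220, 280),
}

def aoi_of(x, y):
    """Return the name of the AOI containing point (x, y), or None."""
    for name, (x0, x1, y0, y1) in AOIS.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None

def dwell_proportions(fixations):
    """fixations: list of (x, y, duration_ms) tuples.
    Returns each AOI's share of total looking time."""
    totals = {name: 0.0 for name in AOIS}
    grand = 0.0
    for x, y, dur in fixations:
        grand += dur
        region = aoi_of(x, y)
        if region is not None:
            totals[region] += dur
    return {name: (t / grand if grand else 0.0) for name, t in totals.items()}

# Example: a viewer who mostly fixates the mouth region (illustrative data).
fixations = [(200, 250, 400), (150, 240, 300), (120, 100, 100), (500, 500, 200)]
props = dwell_proportions(fixations)
```

Comparing these proportions between groups (normal-hearing vs deaf) and conditions (familiar vs unfamiliar language) is what supports conclusions like "deaf adults mainly look at the mouth".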

