Leveraging eye tracking to understand children’s attention during game-based, tangible robotics activities

Author(s):  
Jennifer K. Olsen ◽  
Arzu Guneysu Ozgur ◽  
Kshitij Sharma ◽  
Wafa Johal
2019 ◽  
Vol 40 (1) ◽  
pp. 21-40
Author(s):  
Jane B. Childers ◽  
Blaire Porter ◽  
Megan Dolan ◽  
Clare B. Whitehead ◽  
Kevin P. McIntyre

To learn a verb, children must attend to objects and relations, often within a dynamic scene. Several studies show that comparing varied events linked to a verb helps children learn verbs, although there is also controversy in this area. This study asks whether children benefit from seeing variation across events as they learn a new verb, and uses an eye tracker to test whether children adjust their visual attention to specific objects, in order to better understand how they may be comparing events to each other. Children saw events in which the tool varied, the affected object varied, or there was no variation (control). No prior verb study has tested children’s visual attention to specific objects under different variability conditions. We found that 2½- and 3½-year-olds could extend verbs, with success increasing with age. Analyses of looking patterns during the learning phase show that children’s attention to specific objects in events varied by condition, and that reduced looking to the tool was linked to less success at test. Eye tracking can provide a more detailed view of what children attend to while learning a new verb, which should help us better understand how children learn from variation across examples.
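The looking-pattern analysis described in this abstract amounts to comparing how much of children’s looking time falls on each area of interest (AOI), such as the tool or the affected object, under each variability condition. As a rough illustration only (not the authors’ actual pipeline), the Python sketch below computes the share of total looking time per AOI and condition from fixation records; the condition names, AOI labels, and record format are assumptions made for the example.

```python
# Illustrative sketch: proportion of looking time per AOI and condition.
from collections import defaultdict

# Each fixation: (participant_id, condition, aoi, duration_ms).
# All labels and values below are placeholders, not study data.
fixations = [
    ("p01", "tool-varies", "tool", 340),
    ("p01", "tool-varies", "affected_object", 220),
    ("p02", "no-variation", "tool", 150),
    ("p02", "no-variation", "background", 410),
]

def aoi_proportions(fixations):
    """Return, for each condition, the share of total looking time per AOI."""
    totals = defaultdict(float)    # total looking time per condition
    per_aoi = defaultdict(float)   # looking time per (condition, AOI)
    for _, condition, aoi, duration in fixations:
        totals[condition] += duration
        per_aoi[(condition, aoi)] += duration
    return {
        (condition, aoi): duration / totals[condition]
        for (condition, aoi), duration in per_aoi.items()
    }

if __name__ == "__main__":
    for key, share in sorted(aoi_proportions(fixations).items()):
        print(key, round(share, 3))
```

Condition-level proportions of this kind can then be related to test performance, for example by correlating looking to the tool with verb-extension accuracy.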


Appetite ◽  
2018 ◽  
Vol 125 ◽  
pp. 63-71 ◽  
Author(s):  
Ines Spielvogel ◽  
Jörg Matthes ◽  
Brigitte Naderer ◽  
Kathrin Karsay

Author(s):  
Kun Zhang ◽  
Lei Gao ◽  
Jingying Chen ◽  
Xiaodi Liu ◽  
Guangshuai Wang ◽  
...  

2020 ◽  
Vol 63 (7) ◽  
pp. 2245-2254 ◽  
Author(s):  
Jianrong Wang ◽  
Yumeng Zhu ◽  
Yu Chen ◽  
Abdilbar Mamat ◽  
Mei Yu ◽  
...  

Purpose The primary purpose of this study was to explore the audiovisual speech perception strategies adopted by normal-hearing and deaf people in processing familiar and unfamiliar languages. Our primary hypothesis was that they would adopt different perception strategies due to different sensory experiences at an early age, limitations of physical devices, and gaps in language development, among other factors. Method Thirty normal-hearing adults and 33 prelingually deaf adults participated in the study. They were asked to perform judgment and listening tasks while watching videos of a Uygur–Mandarin bilingual speaker in a familiar language (Standard Chinese) or an unfamiliar language (Modern Uygur) while their eye movements were recorded by eye-tracking technology. Results Task had a slight influence on the distribution of selective attention, whereas participant group and language had significant influences. Specifically, the normal-hearing and the deaf participants mainly gazed at the speaker’s eyes and mouth, respectively; moreover, while the normal-hearing participants stared longer at the speaker’s mouth when confronted with the unfamiliar language Modern Uygur, the deaf participants did not change their attention allocation pattern when perceiving the two languages. Conclusions Normal-hearing and deaf adults adopt different audiovisual speech perception strategies: Normal-hearing adults mainly look at the eyes, and deaf adults mainly look at the mouth. Additionally, language and task can also modulate the speech perception strategy.
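The reported pattern rests on comparing dwell time on the speaker’s eyes versus mouth across participant groups and languages. The following Python snippet is a minimal sketch of that kind of summary, assuming a simple per-trial record format; the group, language, and AOI labels are placeholders and do not reflect the study’s actual data structure or analysis code.

```python
# Minimal sketch: mean dwell time per (group, language, AOI) cell.
import statistics
from collections import defaultdict

# Each trial record: (group, language, aoi, dwell_ms); values are illustrative.
trials = [
    ("normal-hearing", "familiar", "eyes", 1200),
    ("normal-hearing", "unfamiliar", "mouth", 900),
    ("deaf", "familiar", "mouth", 1500),
    ("deaf", "unfamiliar", "mouth", 1480),
]

def mean_dwell(trials):
    """Average dwell time for every group x language x AOI combination."""
    cells = defaultdict(list)
    for group, language, aoi, dwell in trials:
        cells[(group, language, aoi)].append(dwell)
    return {cell: statistics.mean(values) for cell, values in cells.items()}

for cell, value in sorted(mean_dwell(trials).items()):
    print(cell, value)
```

Cell means computed this way would typically feed into a mixed-design analysis contrasting group, language, and task effects on attention allocation.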


Author(s):  
Pirita Pyykkönen ◽  
Juhani Järvikivi

A visual world eye-tracking study investigated the activation and persistence of implicit causality information in spoken language comprehension. We showed that people infer the implicit causality of verbs as soon as they encounter such verbs in discourse, as is predicted by proponents of the immediate focusing account (Greene & McKoon, 1995; Koornneef & Van Berkum, 2006; Van Berkum, Koornneef, Otten, & Nieuwland, 2007). Interestingly, we observed activation of implicit causality information even before people encountered the causal conjunction. However, while implicit causality information was persistent as the discourse unfolded, it did not have a privileged role as a focusing cue immediately at the ambiguous pronoun when people were resolving its antecedent. Instead, our study indicated that implicit causality does not affect all referents to the same extent; rather, it interacts with other cues in the discourse, especially when one of the referents is already prominently in focus.
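Visual world analyses of this kind typically align gaze samples to the onset of the critical word (here the ambiguous pronoun) and track the proportion of looks to each referent over time. The sketch below illustrates that binning step under assumed sample and referent labels; it is not the authors’ analysis code.

```python
# Rough sketch: proportion of looks to each referent per 50 ms time bin,
# aligned to the onset of the critical word.
from collections import defaultdict

BIN_MS = 50

# Each sample: (time_from_pronoun_onset_ms, looked_at), where looked_at is
# "np1", "np2", or "other"; all labels and times are placeholders.
samples = [(-100, "np1"), (-50, "np2"), (0, "np1"), (50, "np1"), (100, "other")]

def fixation_proportions(samples):
    """Proportion of samples on each referent within each time bin."""
    counts = defaultdict(lambda: defaultdict(int))
    for t, target in samples:
        counts[t // BIN_MS * BIN_MS][target] += 1
    return {
        bin_start: {ref: n / sum(refs.values()) for ref, n in refs.items()}
        for bin_start, refs in counts.items()
    }

for bin_start, props in sorted(fixation_proportions(samples).items()):
    print(bin_start, props)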

