Eye Tracking While Answering Questions in Electronic Multimedia Environments

2002 ◽  
Author(s):  
Arthur C. Graesser


2018 ◽  
pp. 348-372
Author(s):  
Duygu Mutlu-Bayraktar ◽  
Servet Bayram

In this chapter, situations that can cause split attention in multimedia learning environments were identified using eye tracking. Learners' fixation counts, heat maps, and areas of interest were analyzed. Based on these analyses, design recommendations were derived to help multimedia environments focus attention on content without a split-attention effect: visual and auditory resources should be presented simultaneously; visual information should be supported with auditory narration rather than on-screen text; videos, pictures, and texts should not be presented on the same screen; text accompanying a picture or video should be integrated with it rather than presented separately; and important points on images should be marked to direct attention.
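To illustrate the kind of analysis described above, the following is a minimal, hypothetical sketch of counting fixations per area of interest (AOI). The data format, AOI names, and rectangle coordinates are illustrative assumptions, not the authors' actual pipeline.

```python
from collections import Counter

def fixations_per_aoi(fixations, aois):
    """Count fixations falling inside each rectangular area of interest.

    fixations: list of (x, y) gaze coordinates in screen pixels
    aois: dict mapping AOI name -> (x_min, y_min, x_max, y_max)
    """
    counts = Counter()
    for x, y in fixations:
        for name, (x0, y0, x1, y1) in aois.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                counts[name] += 1
    return counts

# Hypothetical screen layout: text on the left, picture on the right
aois = {"text": (0, 0, 400, 600), "picture": (400, 0, 800, 600)}
fixations = [(120, 300), (450, 200), (600, 350), (90, 100)]
print(fixations_per_aoi(fixations, aois))  # Counter({'text': 2, 'picture': 2})
```

Comparing such per-AOI counts across layouts (e.g., text separate from vs. integrated with the picture) is one way the split-attention effect described above can be quantified.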


2020 ◽  
Vol 8 (6) ◽  
pp. 59-76
Author(s):  
Fang Zhao ◽  
Robert Gaschler ◽  
Wolfgang Schnotz ◽  
Inga Wagner

Regulation of distance to the screen (i.e., head-to-screen distance and its fluctuation) has been shown to reflect the cognitive engagement of the reader. However, it is still unclear (a) whether regulation of distance to the screen can serve as a parameter for inferring high cognitive load and (b) whether it can predict upcoming answer accuracy. Configuring tablets or other learning devices so that distance to the screen can be analyzed by the learning software is within close reach; the software might use this measure as a person-specific indicator of the need for extra scaffolding. To better gauge this potential, we analyzed eye-tracking data of children (N = 144, Mage = 13 years, SD = 3.2 years) engaging in multimedia learning, since distance to the screen is estimated as a by-product of eye tracking. Children were told to maintain a still seated posture while reading and answering questions at three difficulty levels (easy vs. medium vs. difficult). Results showed that task difficulty influences how well distance to the screen can be regulated, supporting regulation of distance to the screen as a promising measure. A closer head-to-screen distance and larger fluctuation of head-to-screen distance reflect engagement with a challenging task, but only large fluctuation of head-to-screen distance predicted upcoming incorrect answers. The link between distance to the screen and cognitive task processing can unobtrusively reveal a reader's cognitive state during system usage, which can support adaptive learning and testing.
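The two measures the study relies on, mean head-to-screen distance and its fluctuation, can be sketched as follows. This is an illustration only: the sample values, millimeter units, and the choice of population standard deviation as the "fluctuation" statistic are assumptions, not the paper's reported method.

```python
import statistics

def distance_stats(samples_mm):
    """Summarize head-to-screen distance samples (mm) from an eye tracker.

    Returns (mean distance, fluctuation), where fluctuation is taken here
    as the population standard deviation of the samples.
    """
    mean = statistics.fmean(samples_mm)
    fluctuation = statistics.pstdev(samples_mm)
    return mean, fluctuation

# Hypothetical per-trial distance samples (mm)
samples = [620, 615, 600, 580, 610, 590]
mean, fluct = distance_stats(samples)
```

A learning system could compare a trial's fluctuation against a learner's own baseline: per the findings above, unusually large fluctuation would flag trials at risk of an incorrect answer.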


2020 ◽  
Vol 63 (7) ◽  
pp. 2245-2254 ◽  
Author(s):  
Jianrong Wang ◽  
Yumeng Zhu ◽  
Yu Chen ◽  
Abdilbar Mamat ◽  
Mei Yu ◽  
...  

Purpose The primary purpose of this study was to explore the audiovisual speech perception strategies adopted by normal-hearing and deaf people in processing familiar and unfamiliar languages. Our primary hypothesis was that they would adopt different perception strategies due to different sensory experiences at an early age, limitations of hearing devices, developmental gaps in language, and other factors. Method Thirty normal-hearing adults and 33 prelingually deaf adults participated in the study. They performed judgment and listening tasks while watching videos of a Uygur–Mandarin bilingual speaker in a familiar language (Standard Chinese) or an unfamiliar language (Modern Uygur), while their eye movements were recorded with eye-tracking technology. Results Task had only a slight influence on the distribution of selective attention, whereas subject group and language had significant influences. Specifically, the normal-hearing participants mainly gazed at the speaker's eyes and the deaf participants at the speaker's mouth; moreover, while the normal-hearing participants gazed longer at the speaker's mouth when confronted with the unfamiliar language Modern Uygur, the deaf participants did not change their attention allocation pattern across the two languages. Conclusions Normal-hearing and deaf adults adopt different audiovisual speech perception strategies: Normal-hearing adults mainly look at the eyes, and deaf adults mainly look at the mouth. Additionally, language and task can also modulate the speech perception strategy.


Author(s):  
Pirita Pyykkönen ◽  
Juhani Järvikivi

A visual world eye-tracking study investigated the activation and persistence of implicit causality information in spoken language comprehension. We showed that people infer the implicit causality of verbs as soon as they encounter such verbs in discourse, as predicted by proponents of the immediate focusing account (Greene & McKoon, 1995; Koornneef & Van Berkum, 2006; Van Berkum, Koornneef, Otten, & Nieuwland, 2007). Interestingly, we observed activation of implicit causality information even before people encountered the causal conjunction. However, while implicit causality information persisted as the discourse unfolded, it did not have a privileged role as a focusing cue at the ambiguous pronoun when people were resolving its antecedent. Instead, our study indicated that implicit causality does not affect all referents to the same extent; rather, it interacts with other cues in the discourse, especially when one of the referents is already prominently in focus.

