An eye-tracking study of feature-based choice in one-shot games

2015 ◽  
Vol 19 (1) ◽  
pp. 177-201 ◽  
Author(s):  
Giovanna Devetag ◽  
Sibilla Di Guida ◽  
Luca Polonio

2017 ◽  
Vol 34 (1) ◽  
pp. 1-10 ◽  
Author(s):  
Monica D. Hernandez ◽  
Yong Wang ◽  
Hong Sheng ◽  
Morris Kalliny ◽  
Michael Minor

Purpose The authors examine the effect of location-driven logo placement on attention and memory on the web, addressing differences between individuals who read unidirectionally (left-to-right [LTR]) and those who read bidirectionally (both right-to-left and LTR). Design/methodology/approach Using an eye-tracking approach combined with traditional verbal measures, the authors compared attention and memory measures from a sample of bidirectional (Arabic/English) readers and unidirectional readers. Findings The findings reveal that unidirectional and bidirectional readers differ in attention patterns. Based on verbal measures, unidirectional readers pay less attention than bidirectional readers to a logo placed in the bottom right corner of the webpage. The eye-tracking data further distinguish the two groups by total hits and duration. Unidirectional LTR readers demonstrate higher fluency in feature-based attention, whereas bidirectional readers show higher fluency in spatial attention. Originality/value The authors expand the scarce research on the effect of reading-direction bias on location-driven stimulus placement in online settings. They contribute to the understanding of how unidirectional and bidirectional readers differ in their cognitive responses (attention and memory) to the organization of marketing stimuli.


2021 ◽  
Author(s):  
Daniel Bennett ◽  
Angela Radulescu ◽  
Samuel Zorowitz ◽  
Valkyrie Felso ◽  
Yael Niv

Positive and negative affective states are respectively associated with optimistic and pessimistic expectations regarding future reward. One mechanism that might underlie these affect-related expectation biases is attention to positive- versus negative-valence stimulus features (e.g., attending to the positive reviews of a restaurant versus its expensive price). Here we tested the effects of experimentally induced positive and negative affect on feature-based attention in 120 participants completing a compound-generalization task with eye-tracking. We found that participants' reward expectations for novel compound stimuli were modulated by the affect induction in an affect-congruent way: positive affect increased reward expectations for compounds, whereas negative affect decreased reward expectations. Computational modelling and eye-tracking analyses each revealed that these effects were driven by affect-congruent changes in participants' allocation of attention to high- versus low-value features of compound stimuli. These results provide mechanistic insight into a process by which affect produces biases in generalized reward expectations.


Author(s):  
Shoya Ishimaru ◽  
Kai Kunze ◽  
Yuzuko Utsumi ◽  
Masakazu Iwamura ◽  
Koichi Kise

Psihologija ◽  
2009 ◽  
Vol 42 (4) ◽  
pp. 417-436 ◽  
Author(s):  
Vanja Kovic ◽  
Kim Plunkett ◽  
Gert Westermann

Unlike for animate objects, where participants were consistent in their looking patterns, for inanimate objects it was difficult to identify either consistent areas of fixation or a consistent order of fixations. Furthermore, in comparison to animate objects, inanimates received significantly shorter total looking times, shorter longest looks, and fewer fixations overall. However, as with animates, looking patterns did not systematically differ between the naming and non-naming conditions. These results suggest that animacy, but not labelling, impacts looking behavior in this paradigm. In the light of feature-based accounts of semantic memory organization, one could interpret these findings as suggesting that processing of animate objects is based on the saliency/diagnosticity of their visual features (which is then reflected in participants' eye movements towards those features), whereas processing of inanimate objects is based more on functional features (which cannot easily be captured by looking behavior in such a paradigm).


2020 ◽  
Vol 63 (7) ◽  
pp. 2245-2254 ◽  
Author(s):  
Jianrong Wang ◽  
Yumeng Zhu ◽  
Yu Chen ◽  
Abdilbar Mamat ◽  
Mei Yu ◽  
...  

Purpose The primary purpose of this study was to explore the audiovisual speech perception strategies adopted by normal-hearing and deaf people in processing familiar and unfamiliar languages. Our primary hypothesis was that the two groups would adopt different perception strategies due to different sensory experiences at an early age, limitations of the physical device, developmental gaps in language, and other factors. Method Thirty normal-hearing adults and 33 prelingually deaf adults participated in the study. They performed judgment and listening tasks while watching videos of a Uygur–Mandarin bilingual speaker in a familiar language (Standard Chinese) or an unfamiliar language (Modern Uygur), and their eye movements were recorded with eye-tracking technology. Results Task had a slight influence on the distribution of selective attention, whereas subject group and language had significant influences. Specifically, the normal-hearing participants mainly gazed at the speaker's eyes, and the deaf participants mainly gazed at the speaker's mouth. Moreover, while the normal-hearing participants stared longer at the speaker's mouth when confronted with the unfamiliar language (Modern Uygur), the deaf participants did not change their attention allocation pattern when perceiving the two languages. Conclusions Normal-hearing and deaf adults adopt different audiovisual speech perception strategies: normal-hearing adults mainly look at the eyes, and deaf adults mainly look at the mouth. Additionally, language and task can also modulate the speech perception strategy.


Author(s):  
Pirita Pyykkönen ◽  
Juhani Järvikivi

A visual world eye-tracking study investigated the activation and persistence of implicit causality information in spoken language comprehension. We showed that people infer the implicit causality of verbs as soon as they encounter such verbs in discourse, as is predicted by proponents of the immediate focusing account (Greene & McKoon, 1995; Koornneef & Van Berkum, 2006; Van Berkum, Koornneef, Otten, & Nieuwland, 2007). Interestingly, we observed activation of implicit causality information even before people encountered the causal conjunction. However, while implicit causality information was persistent as the discourse unfolded, it did not have a privileged role as a focusing cue immediately at the ambiguous pronoun when people were resolving its antecedent. Instead, our study indicated that implicit causality does not affect all referents to the same extent; rather, it interacts with other cues in the discourse, especially when one of the referents is already prominently in focus.

