Anticipatory Eye Movements
Recently Published Documents


TOTAL DOCUMENTS: 57 (FIVE YEARS: 12)

H-INDEX: 13 (FIVE YEARS: 1)

2021 ◽  
Vol 12 ◽  
Author(s):  
Ernesto Guerra ◽  
Jasmin Bernotat ◽  
Héctor Carvacho ◽  
Gerd Bohner

Immediate contextual information and world knowledge allow comprehenders to anticipate incoming language in real time. The cognitive mechanisms that underlie such behavior are, however, still only partially understood. We examined the novel idea that gender attitudes may influence how people make predictions during sentence processing. To this end, we conducted an eye-tracking experiment in which participants listened to passive-voice sentences expressing gender-stereotypical actions (e.g., “The wood is being painted by the florist”) while observing displays containing both female and male characters representing gender-stereotypical professions (e.g., florists, soldiers). In addition, we assessed participants’ explicit gender-related attitudes to explore whether they might predict potential effects of gender-stereotypical information on anticipatory eye movements. The observed gaze pattern showed that participants used gendered information to predict who was the agent of the action. These effects were larger for female- vs. male-stereotypical contextual information but were not related to participants’ gender-related attitudes. Our results showed that predictive language processing can be moderated by gender stereotypes, and that anticipation is stronger for female (vs. male) depicted characters. Further research should test the direct relation between gender-stereotypical sentence processing and implicit gender attitudes. These findings contribute to both social psychology and psycholinguistics research, as they extend our understanding of stereotype processing in multimodal contexts and of the role of attitudes (on top of world knowledge) in language prediction.
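The gaze measure at issue here, the proportion of anticipatory looks to a depicted character before that character is named, can be sketched in a few lines. The snippet below is a minimal illustration under assumed data, not the authors' analysis pipeline: the file name, column names, and window boundaries are all hypothetical.

```python
# Minimal sketch of an anticipatory-looks analysis for a visual-world
# experiment. The file "fixations.csv" and its columns (trial, time_ms,
# aoi) are hypothetical placeholders, not from the published study.
import pandas as pd

fix = pd.read_csv("fixations.csv")  # one row per eye-tracking sample

# Anticipatory window: samples after the verb but before the agent is
# named (0-1000 ms is an illustrative boundary, not the study's).
window = fix[(fix.time_ms >= 0) & (fix.time_ms < 1000)].copy()

# Proportion of samples on the stereotype-congruent character,
# averaged first within and then across trials.
window["on_agent"] = (window.aoi == "congruent_agent").astype(int)
p_anticipatory = window.groupby("trial").on_agent.mean().mean()
print(f"Mean anticipatory-look proportion: {p_anticipatory:.3f}")
```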


2020 ◽  
Vol 73 (12) ◽  
pp. 2348-2361
Author(s):  
Leigh B Fernandez ◽  
Paul E Engelhardt ◽  
Angela G Patarroyo ◽  
Shanley EM Allen

Research has shown that suprasegmental cues in conjunction with visual context can lead to anticipatory (or predictive) eye movements. However, the impact of speech rate on anticipatory eye movements has received little empirical attention. The purpose of the current study was twofold. From a methodological perspective, we tested the impact of speech rate on anticipatory eye movements by systematically varying speech rate (3.5, 4.5, 5.5, and 6.0 syllables per second) in the processing of filler-gap dependencies. From a theoretical perspective, we examined two groups thought to show fewer anticipatory eye movements, and thus likely to be more affected by speech rate. Experiment 1 compared anticipatory eye movements across the lifespan with younger (18–24 years old) and older adults (40–75 years old). Experiment 2 compared L1 speakers of English and L2 speakers of English whose L1 was German. Results showed that all groups made anticipatory eye movements. However, L2 speakers did so only at 3.5 syllables per second, older adults at 3.5 and 4.5 syllables per second, and younger adults at speech rates up to 5.5 syllables per second. At the fastest speech rate, all groups showed a marked decrease in anticipatory eye movements. This work highlights (1) the impact of speech rate on anticipatory eye movements and (2) group-level performance differences in filler-gap prediction.
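Since the four speech rates translate directly into per-syllable processing time, a quick calculation makes the manipulation concrete. The sketch below is purely illustrative; the 12-syllable sentence length is an assumption, not a figure from the paper.

```python
# Back-of-the-envelope timing for the four speech rates tested.
RATES = [3.5, 4.5, 5.5, 6.0]  # syllables per second (from the abstract)
N_SYLLABLES = 12              # hypothetical sentence length

for rate in RATES:
    per_syllable_ms = 1000 / rate    # time available per syllable
    sentence_s = N_SYLLABLES / rate  # total sentence duration
    print(f"{rate} syll/s -> {per_syllable_ms:.0f} ms/syllable, "
          f"{sentence_s:.2f} s per sentence")
```

At 6.0 syllables per second, listeners get roughly 167 ms per syllable, about 60% of the time available at 3.5 syllables per second, which is consistent with the drop in anticipation at the fastest rate.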


2020 ◽  
Vol 16 (4) ◽  
pp. e1007438 ◽  
Author(s):  
Chloé Pasturel ◽  
Anna Montagnini ◽  
Laurent Udo Perrinet

2019 ◽  
Vol 73 (3) ◽  
pp. 458-467
Author(s):  
Florian Hintz ◽  
Antje S Meyer ◽  
Falk Huettig

Contemporary accounts of anticipatory language processing assume that individuals predict upcoming information at multiple levels of representation. Research investigating language-mediated anticipatory eye gaze typically assumes that linguistic input restricts the domain of subsequent reference (visual target objects). Here, we explored the converse case: Can visual input restrict the dynamics of anticipatory language processing? To this end, we recorded participants’ eye movements as they listened to sentences in which an object was predictable based on the verb’s selectional restrictions (“The man peels a banana”). While listening, participants looked at different types of displays: the target object (banana) was either present or absent. On target-absent trials, the displays featured objects whose visual shape was similar to that of the target object (canoe) or objects that were semantically related to the concepts invoked by the target (monkey). Each trial was presented in a long preview version, in which participants saw the display for approximately 1.78 s before the verb was heard (pre-verb condition), and in a short preview version, in which participants saw the display approximately 1 s after the verb had been heard (post-verb condition), 750 ms before spoken target onset. Participants anticipated the target objects in both conditions. Importantly, robust evidence for predictive looks to objects related to the (absent) target objects in visual shape and semantics was found in the post-verb but not in the pre-verb condition. These results suggest that visual information can restrict language-mediated anticipatory gaze and delineate theoretical accounts of predictive processing in the visual world.
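The two preview conditions differ only in when the display appears relative to the spoken verb, so the design can be summarized as a small timing function. The values below are taken from the abstract; the condition names are ad hoc labels, not the authors'.

```python
# Display onset relative to verb onset (t = 0 ms), per the abstract.
def display_onset_ms(condition: str) -> int:
    """Return display onset in ms relative to verb onset."""
    if condition == "long_preview":   # display ~1.78 s before the verb
        return -1780
    if condition == "short_preview":  # display ~1 s after the verb,
        return 1000                   # i.e., 750 ms before target onset
    raise ValueError(f"unknown condition: {condition!r}")

for cond in ("long_preview", "short_preview"):
    print(f"{cond}: display at {display_onset_ms(cond):+d} ms")
```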


2019 ◽  
Vol 10 ◽  
Author(s):  
Roberto G. de Almeida ◽  
Julia Di Nardo ◽  
Caitlyn Antal ◽  
Michael W. von Grünau

2019 ◽  
Author(s):  
Chloé Pasturel ◽  
Anna Montagnini ◽  
Laurent Udo Perrinet

Abstract

Animal behavior constantly adapts to changes, for example when the statistical properties of the environment change unexpectedly. For an agent that interacts with such a volatile setting, it is important to react accurately and as quickly as possible. It has already been shown that when a random sequence of motion ramps of a visual target is biased to one direction (e.g., right or left), human observers adapt their eye movements to accurately anticipate the target’s expected direction. Here, we show that this ability extends to a volatile environment in which the probability bias can change at random switching times. In addition, we recorded observers’ explicit predictions of the next outcome, reported on a rating scale. Both measures were compared to the estimates of a probabilistic agent that is optimal with respect to the assumed generative model. We found a better match between our probabilistic agent and the behavioral responses than for the classical leaky-integrator model, both for the anticipatory eye movements and for the explicit task. Furthermore, by controlling the level of preference between exploitation and exploration in the model, we were able to fit, for each individual’s experimental dataset, the most likely level of volatility and to analyze inter-individual variability across participants. These results show that in such an unstable environment, human observers can still maintain an internal belief about the environmental contingencies and use this representation both for sensory-motor control and for explicit judgments. This work offers an innovative approach to more generically test the diversity of human cognitive abilities in uncertain and dynamic environments.

Author summary

Understanding how humans adapt to changing environments to make judgments or plan motor responses based on time-varying sensory information is crucial for psychology, neuroscience, and artificial intelligence. Current theories of how we deal with the environment’s uncertainty, that is, with unexpected changes in its statistical regularities, mostly rely on behavior at equilibrium, long after a change has occurred. Here, we show that in the more ecological case where the context switches at random times throughout the experiment, adaptation to this volatility can be performed online. In particular, we show in two behavioral experiments that humans can adapt to such volatility at the early sensorimotor level, through their anticipatory eye movements, and also at a higher cognitive level, through explicit ratings. Our results suggest that humans (and future artificial systems) can use much richer adaptive strategies than previously assumed.
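The baseline model named in this abstract, the classical leaky integrator, reduces to a one-line recursive update of the estimated direction bias. Below is a minimal sketch of that baseline running on a simulated volatile sequence; the switch probability and leak constant are illustrative assumptions, not the paper's fitted values.

```python
import random

# Simulate a volatile Bernoulli sequence: the direction bias p switches
# at random times (an illustrative generative model).
random.seed(0)
p, seq = 0.75, []
for _ in range(400):
    if random.random() < 0.01:                   # occasional bias switch
        p = 1.0 - p
    seq.append(1 if random.random() < p else 0)  # 1 = rightward ramp

# Classical leaky integrator: exponentially weighted running estimate
# of P(rightward). The time constant tau is an assumed value.
tau, estimate = 20.0, 0.5
for outcome in seq:
    estimate += (outcome - estimate) / tau

print(f"Final estimated P(rightward): {estimate:.2f}")
```

A probabilistic agent of the kind the authors describe would additionally maintain a belief about when the last switch occurred, letting it reset its estimate quickly after a change instead of leaking at a fixed rate.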


2019 ◽  
Vol 7 (3) ◽  
pp. 219-242 ◽  
Author(s):  
Kyle J. Comishen ◽  
Scott A. Adler

The capacity to process and incorporate temporal information into behavioural decisions is an integral component of functioning in our environment. Whereas previous research has extended adults’ temporal processing capacity down the developmental timeline to infants, little research has examined infants’ capacity to use that temporal information to guide their future behaviours, or whether this capacity can detect event-timing differences on the order of milliseconds. The present study examined 3- and 6-month-old infants’ ability to process temporal durations of 700 and 1200 milliseconds by means of the Visual Expectation Cueing Paradigm, in which the duration of a central stimulus predicted whether a target would appear on the left or on the right of a screen. If 3- and 6-month-old infants could discriminate the 500-millisecond difference between the centrally presented temporal cues, then they would correctly make anticipatory eye movements to the proper target location at a rate above chance. Results indicated that 6- but not 3-month-olds successfully discriminated and incorporated events’ temporal information into their visual expectations. Brain maturation and the perceptual capacity to discriminate the relative timing of temporal events may account for these findings. This developmental limitation in processing and discriminating events on the millisecond scale may, consequently, be a previously unexplored limiting factor for attentional and cognitive development.
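"At a rate above chance" invites a simple binomial test: with two possible target locations, chance-level anticipation is 50%. The sketch below illustrates that test on invented counts; the numbers are placeholders, not the study's data.

```python
# One-sided binomial test: do anticipatory looks to the correct side
# exceed the 50% chance level? Counts are invented for illustration.
from scipy.stats import binomtest

correct, total = 34, 50  # hypothetical correct anticipations / trials
result = binomtest(correct, total, p=0.5, alternative="greater")
print(f"{correct}/{total} correct, one-sided p = {result.pvalue:.4f}")
```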

