Visual context constrains language-mediated anticipatory eye movements

2019 · Vol 73 (3) · pp. 458-467
Author(s): Florian Hintz, Antje S Meyer, Falk Huettig

Contemporary accounts of anticipatory language processing assume that individuals predict upcoming information at multiple levels of representation. Research investigating language-mediated anticipatory eye gaze typically assumes that linguistic input restricts the domain of subsequent reference (visual target objects). Here, we explored the converse case: Can visual input restrict the dynamics of anticipatory language processing? To this end, we recorded participants’ eye movements as they listened to sentences in which an object was predictable based on the verb’s selectional restrictions (“The man peels a banana”). While listening, participants looked at different types of displays: the target object (banana) was either present or absent. On target-absent trials, the displays featured objects that had a visual shape similar to that of the target object (canoe) or objects that were semantically related to the concepts invoked by the target (monkey). Each trial was presented in either a long preview version, where participants saw the display for approximately 1.78 s before the verb was heard (pre-verb condition), or a short preview version, where participants saw the display approximately 1 s after the verb had been heard (post-verb condition), 750 ms prior to spoken target onset. Participants anticipated the target objects in both conditions. Importantly, robust evidence for predictive looks to objects related to the (absent) target objects in visual shape and semantics was found in the post-verb but not in the pre-verb condition. These results suggest that visual information can restrict language-mediated anticipatory gaze and delineate theoretical accounts of predictive processing in the visual world.

2012 · Vol 5 (1)
Author(s): Ramesh K. Mishra, Niharika Singh, Aparna Pandey, Falk Huettig

We investigated whether levels of reading ability attained through formal literacy are related to anticipatory language-mediated eye movements. Indian adults with low and high literacy listened to simple spoken sentences containing a target word (e.g., "door") while looking at a visual display of four objects (a target, i.e., the door, and three distractors). The spoken sentences were constructed in such a way that participants could use semantic, associative, and syntactic information from adjectives and particles (preceding the critical noun) to anticipate the visual target objects. High literates started to shift their eye gaze to the target objects well before target word onset. In the low-literacy group, this shift of eye gaze occurred only when the target noun (i.e., "door") was heard, more than a second later. Our findings suggest that formal literacy may be important for the fine-tuning of language-mediated anticipatory mechanisms, abilities that proficient language users can then exploit for other cognitive activities such as spoken language-mediated eye gaze. In the conclusion, we discuss three potential mechanisms by which reading acquisition and practice may contribute to the differences in predictive spoken language processing between low and high literates.


2021
Author(s): Umesh Patil, Sol Lago

We propose a retrieval interference-based explanation of a prediction advantage effect observed in Stone et al. (2021). They reported two dual-task eye-tracking experiments in which participants listened to instructions involving German possessive pronouns, e.g. ‘Click on his blue button’, and were asked to select the correct object from a set of objects displayed on screen. Participants’ eye movements showed predictive processing, such that the target object was fixated before its name was heard. Moreover, when the target and the antecedent of the pronoun matched in gender, predictions arose earlier than when the two genders mismatched — a prediction advantage. We propose that the prediction advantage arises due to similarity-based interference during antecedent retrieval, such that the overlap of gender features between the antecedent and possessum boosts the activation level of the latter and helps predict it faster. We report an ACT-R model supporting this hypothesis. Our model also provides a computational implementation of the idea that prediction can be thought of as memory retrieval. In addition, we provide a preliminary ACT-R model of how linguistic processes could drive changes in visual attention.
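The interference account above rests on ACT-R's standard activation and retrieval-latency equations, in which cues in the retrieval context spread activation to matching memory chunks and more active chunks are retrieved faster. The following is a minimal illustrative sketch of that mechanism only, not the authors' model; the cue names, association strengths, and parameter values are all hypothetical:

```python
import math

def activation(base, sources, strengths, total_W=1.0):
    """ACT-R activation: A_i = B_i + sum_j W_j * S_ji.

    base      -- base-level activation B_i of the chunk
    sources   -- cue names present in the retrieval context
    strengths -- mapping from cue name to association strength S_ji
    Source activation W is split evenly across the active cues.
    """
    W = total_W / len(sources)
    return base + sum(W * strengths.get(cue, 0.0) for cue in sources)

def latency(A, F=1.0):
    """ACT-R retrieval latency: T = F * exp(-A); higher activation is faster."""
    return F * math.exp(-A)

# Hypothetical numbers: when antecedent and possessum share a gender
# feature, the gender cue spreads extra activation to the possessum chunk.
match = activation(0.0, ["gender", "possessive"],
                   {"gender": 1.5, "possessive": 1.0})
mismatch = activation(0.0, ["gender", "possessive"],
                      {"possessive": 1.0})  # no gender overlap

assert latency(match) < latency(mismatch)  # matched case retrieved faster
```

Under these made-up parameter values, feature overlap boosts the possessum's activation, and because latency falls exponentially with activation, the gender-matched case is retrieved (and hence predicted) earlier, mirroring the reported prediction advantage.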


2021 · Vol 12
Author(s): Ernesto Guerra, Jasmin Bernotat, Héctor Carvacho, Gerd Bohner

Immediate contextual information and world knowledge allow comprehenders to anticipate incoming language in real time. The cognitive mechanisms that underlie such behavior are, however, still only partially understood. We examined the novel idea that gender attitudes may influence how people make predictions during sentence processing. To this end, we conducted an eye-tracking experiment in which participants listened to passive-voice sentences expressing gender-stereotypical actions (e.g., “The wood is being painted by the florist”) while observing displays containing both female and male characters representing gender-stereotypical professions (e.g., florists, soldiers). In addition, we assessed participants’ explicit gender-related attitudes to explore whether they might predict potential effects of gender-stereotypical information on anticipatory eye movements. The observed gaze pattern reflected that participants used gendered information to predict who was the agent of the action. These effects were larger for female- vs. male-stereotypical contextual information but were not related to participants’ gender-related attitudes. Our results showed that predictive language processing can be moderated by gender stereotypes, and that anticipation is stronger for female (vs. male) depicted characters. Further research should test the direct relation between gender-stereotypical sentence processing and implicit gender attitudes. These findings contribute to both social psychology and psycholinguistics research, as they extend our understanding of stereotype processing in multimodal contexts and of the role of attitudes (on top of world knowledge) in language prediction.


2020 · Vol 73 (12) · pp. 2348-2361
Author(s): Leigh B Fernandez, Paul E Engelhardt, Angela G Patarroyo, Shanley EM Allen

Research has shown that suprasegmental cues in conjunction with visual context can lead to anticipatory (or predictive) eye movements. However, the impact of speech rate on anticipatory eye movements has received little empirical attention. The purpose of the current study was twofold. From a methodological perspective, we tested the impact of speech rate on anticipatory eye movements by systematically varying speech rate (3.5, 4.5, 5.5, and 6.0 syllables per second) in the processing of filler-gap dependencies. From a theoretical perspective, we examined two groups thought to show fewer anticipatory eye movements, and thus likely to be more impacted by speech rate. Experiment 1 compared anticipatory eye movements across the lifespan with younger (18–24 years old) and older adults (40–75 years old). Experiment 2 compared L1 speakers of English and L2 speakers of English with an L1 of German. Results showed that all groups made anticipatory eye movements. However, L2 speakers only made anticipatory eye movements at 3.5 syllables per second, older adults at 3.5 and 4.5 syllables per second, and younger adults at speech rates up to 5.5 syllables per second. At the fastest speech rate, all groups showed a marked decrease in anticipatory eye movements. This work highlights (1) the importance of speech rate for anticipatory eye movements, and (2) group-level performance differences in filler-gap prediction.


2010 · Vol 38 (3) · pp. 644-661
Author(s): Yi Ting Huang, Jesse Snedeker

Recent work in adult psycholinguistics has demonstrated that activation of semantic representations begins long before phonological processing is complete. This incremental propagation of information across multiple levels of analysis is a hallmark of adult language processing, but how does this ability develop? In two experiments, we elicited measures of incremental activation of semantic representations during word recognition in children. Five-year-olds were instructed to select a target (logs) while their eye movements were measured to a competitor (key) that was semantically related to an absent phonological associate (lock). We found that, like adults, children made increased looks to competitors relative to unrelated control items. However, unlike adults, children continued to look at the competitor even after the target word was uniquely identified and were more likely to incorrectly select this item. Altogether, these results suggest that early lexical processing involves cascading activation but less efficient resolution of competing entries.


2018
Author(s): Kyle Earl MacDonald, Virginia Marchman, Anne Fernald, Michael C. Frank

During grounded language comprehension, listeners must link the incoming linguistic signal to the visual world despite noise in the input. Information gathered through visual fixations can facilitate understanding. But do listeners flexibly seek supportive visual information? Here, we propose that even young children can adapt their gaze and actively gather information that supports their language understanding. We present two case studies of eye movements during real-time language processing where the value of fixating on a social partner varies across different contexts. First, compared to children learning spoken English (n=80), young American Sign Language (ASL) learners (n=30) delayed gaze shifts away from a language source and produced a higher proportion of language-consistent eye movements. This result suggests that ASL learners adapt to dividing attention between language and referents, which both compete for processing via the same channel: vision. Second, English-speaking preschoolers (n=39) and adults (n=31) delayed the timing of gaze shifts away from a speaker’s face while processing language in a noisy auditory environment. This delay resulted in a higher proportion of language-consistent gaze shifts. These results suggest that young listeners can adapt their gaze to seek supportive visual information from social partners during real-time language comprehension.


Author(s): Rebecca A. Hayes, Michael Walsh Dickey, Tessa Warren

Purpose: This study examined the influence of verb–argument information and event-related plausibility on prediction of upcoming event locations in people with aphasia, as well as older and younger neurotypical adults. It investigated how these types of information interact during anticipatory processing and how the ability to take advantage of the different types of information is affected by aphasia.
Method: This study used a modified visual-world task to examine eye movements and offline photo selection. Twelve adults with aphasia (aged 54–82 years) as well as 44 young adults (aged 18–31 years) and 18 older adults (aged 50–71 years) participated.
Results: Neurotypical adults used verb argument status and plausibility information to guide both eye gaze (a measure of anticipatory processing) and image selection (a measure of ultimate interpretation). Argument status did not affect the behavior of people with aphasia in either measure. There was only limited evidence of interaction between these 2 factors in eye gaze data.
Conclusions: Both event-related plausibility and verb-based argument status contributed to anticipatory processing of upcoming event locations among younger and older neurotypical adults. However, event-related likelihood had a much larger role in the performance of people with aphasia than did verb-based knowledge regarding argument structure.


2019
Author(s): Kyle Earl MacDonald, Elizabeth Swanson, Michael C. Frank

Face-to-face communication provides access to visual information that can support language processing. But do listeners automatically seek social information without regard to the language processing task? Here, we present two eye-tracking studies that ask whether listeners’ knowledge of word-object links changes how they actively gather a social cue to reference (eye gaze) during real-time language processing. First, when processing familiar words, children and adults did not delay their gaze shifts to seek a disambiguating gaze cue. When processing novel words, however, children and adults fixated longer on a speaker who provided a gaze cue, which led to an increase in looking to the named object and less looking to the other object in the scene. These results suggest that listeners use their knowledge of object labels when deciding how to allocate visual attention to social partners, which in turn changes the visual input to language processing mechanisms.


2018
Author(s): Evelien Heyselaar, David Peeters, Peter Hagoort

The ability to predict upcoming actions is a characteristic hallmark of cognition and therefore, not surprisingly, a central topic in cognitive science. It remains unclear, however, whether the predictive behaviour commonly observed in strictly controlled lab environments generalizes to rich, everyday settings. In four virtual reality experiments, we tested whether a well-established marker of linguistic prediction (i.e., anticipatory eye movements as observed in the visual world paradigm) replicated when increasing the naturalness of the paradigm by means of i) immersing participants in naturalistic everyday scenes, ii) increasing the number of distractor objects present, iii) manipulating the location of referents in central versus peripheral vision, and iv) modifying the proportion of predictable noun-referents in the experiment. Robust anticipatory eye movements were observed, even in the presence of 10 objects (thereby testing working memory) and when only 25% of all sentences contained a visually present referent (thereby testing error-based learning). The anticipatory effect disappeared, however, when referents were placed in peripheral vision. Together, our findings suggest that working memory may play an important role in predictive processing in everyday communication, but only in contexts where upcoming referents have been explicitly attended to prior to encountering the spoken referential act. Methodologically, our study confirms that ecological validity and experimental control may go hand in hand in future studies of human predictive behaviour.


2016 · Vol 20 (5) · pp. 917-930
Author(s): Aster Dijkgraaf, Robert J. Hartsuiker, Wouter Duyck

Monolingual listeners continuously predict upcoming information. Here, we tested whether predictive language processing occurs to the same extent when bilinguals listen to their native language vs. a non-native language. Additionally, we tested whether bilinguals use prediction to the same extent as monolinguals. Dutch–English bilinguals and English monolinguals listened to constraining and neutral sentences in Dutch (bilinguals only) and in English, and viewed target and distractor pictures on a display while their eye movements were measured. There was a bias of fixations towards the target object in the constraining condition, relative to the neutral condition, before information from the target word could affect fixations. This prediction effect occurred to the same extent in native processing by bilinguals and monolinguals, but also in non-native processing. This indicates that unbalanced, proficient bilinguals can quickly use semantic information during listening to predict upcoming referents to the same extent in both of their languages.

