Language-mediated eye movements in the absence of a visual world: the ‘blank screen paradigm’

Cognition ◽  
2004 ◽  
Vol 93 (2) ◽  
pp. B79-B87 ◽  
Author(s):  
Gerry T.M. Altmann


Author(s):  
Michael K. Tanenhaus

Recently, eye movements have become a widely used response measure for studying spoken language processing in both adults and children, in situations where participants comprehend and generate utterances about a circumscribed “Visual World” while fixation is monitored, typically using a free-view eye-tracker. Psycholinguists now use the Visual World eye-movement method to study both language production and language comprehension, in studies that run the gamut of current topics in language processing. Eye movements are a response measure of choice for addressing many classic questions about spoken language processing in psycholinguistics. This article reviews the burgeoning Visual World literature on language comprehension, highlighting some of the seminal studies and examining how the Visual World approach has contributed new insights to our understanding of spoken word recognition, parsing, reference resolution, and interactive conversation. It considers some of the methodological issues that come to the fore when psycholinguists use eye movements to examine spoken language comprehension.


2011 ◽  
Vol 137 (2) ◽  
pp. 172-180 ◽  
Author(s):  
Anne Pier Salverda ◽  
Meredith Brown ◽  
Michael K. Tanenhaus

Nature ◽  
1958 ◽  
Vol 182 (4644) ◽  
pp. 1214-1216 ◽  
Author(s):  
R. L. Gregory

2017 ◽  
Vol 21 (2) ◽  
pp. 251-264 ◽  
Author(s):  
Aine Ito ◽  
Martin Corley ◽  
Martin J. Pickering

We used the visual world eye-tracking paradigm to investigate the effects of cognitive load on predictive eye movements in L1 (Experiment 1) and L2 (Experiment 2) speakers. Participants listened to sentences whose verb was either predictive or non-predictive of one of four objects they were viewing, and then clicked on a mentioned object. Half the participants additionally performed a working-memory task of remembering words. Both L1 and L2 speakers looked at the target object predictively more often in predictable than in non-predictable sentences when they performed only the listen-and-click task. However, this predictability effect was delayed in those who performed the concurrent memory task. The pattern of results was similar for L1 and L2 speakers. Thus, both L1 and L2 speakers make predictions, but cognitive resources are required for making predictive eye movements. The findings are compatible with the claim that L2 speakers use the same mechanisms as L1 speakers to make predictions.


2019 ◽  
Author(s):  
Michael Armson ◽  
Nicholas Diamond ◽  
Laryssa Levesque ◽  
Jennifer Ryan ◽  
Brian Levine

The precise role of visual mechanisms in the recollection of personal past events is unknown. The present study addresses this question from the oculomotor perspective. Participants freely recalled past episodes while viewing a blank screen under free and fixed viewing conditions. Memory performance was quantified with the Autobiographical Interview, which separates internal (episodic) from external (non-episodic) details. In Study 1, fixation rate predicted the number of internal (but not external) details recalled under both free and fixed viewing. In Study 2, using an experimenter-controlled staged event, we again observed the effect of fixations on free recall of internal (but not external) details, but this effect was modulated by individual differences in autobiographical memory (AM): the coupling between fixations and internal details was greater for those endorsing higher rather than lower episodic AM. These results suggest that eye movements promote richness in autobiographical recall, particularly for those with strong AM.


Author(s):  
Fiona Mulvey

This chapter introduces the basics of eye anatomy, eye movements, and vision. It explains the concepts behind human vision in enough depth for the reader to understand later chapters in the book on human perception and attention, and their relationship to (and potential measurement with) eye movements. We first describe the path of light from the environment through the structures of the eye and on to the brain, as an introduction to the physiology of vision. We then describe the image registered by the eye and the types of movements the eye makes in order to perceive the environment as a coherent whole. The chapter explains how eye movements can be thought of as the interface between the visual world and the brain, and why eye-movement data can be analysed not only in terms of the environment (what is looked at) but also in terms of the brain (subjective cognitive and emotional states). These two aspects broadly define the scope and applicability of eye-movement technology in research and in human–computer interaction, as covered in later sections of the book.


2021 ◽  
pp. 1-6
Author(s):  
Quentin Lenoble ◽  
Mohamad El Haj

There has been a surge of research in social cognition and social neuroscience comparing eye movements in the laboratory with eye movements in real-world settings. Eye movements during the retrieval of autobiographical memories (i.e., personal memories) in laboratory situations are also receiving more attention. We compared eye movements during the retrieval of autobiographical memories in a strict laboratory design versus a design mimicking social interaction. In the first design, eye movements were recorded during autobiographical memory retrieval while participants looked at a blank screen; in the second design, participants wore eye-tracking glasses and communicated autobiographical memories to the experimenter. Compared with the “screen” design, the “glasses” design yielded more fixations (p < .05), shorter fixation durations (p < .001), more saccades (p < .01), and longer saccade durations (p < .001). These findings demonstrate how eye movements during autobiographical memory retrieval differ between a strict laboratory design and face-to-face interaction.


2017 ◽  
Vol 117 (2) ◽  
pp. 808-817 ◽  
Author(s):  
Kyriaki Mikellidou ◽  
Marco Turi ◽  
David C. Burr

Humans maintain a stable representation of the visual world effortlessly, despite constant movements of the eyes, head, and body across multiple planes. Whereas visual stability in the face of saccadic eye movements has been intensely researched, fewer studies have investigated retinal image transformations induced by head movements, especially in the frontal plane. Unlike head rotations in the horizontal and sagittal planes, tilting the head in the frontal plane is only partially counteracted by torsional eye movements and consequently induces a distortion of the retinal image to which we seem to be completely oblivious. One possible mechanism aiding perceptual stability is an active reconstruction of a spatiotopic map of the visual world, anchored in allocentric coordinates. To explore this possibility, we measured the positional motion aftereffect (PMAE; the apparent change in position after adaptation to motion) with head tilts of ∼42° between adaptation and test (to dissociate retinal from allocentric coordinates). The aftereffect was shown to have both a retinotopic and a spatiotopic component. When tested with unpatterned Gaussian blobs rather than sinusoidal grating stimuli, the retinotopic component was greatly reduced, whereas the spatiotopic component remained. The results suggest that perceptual stability may be maintained at least partially through mechanisms involving spatiotopic coding.

NEW & NOTEWORTHY: Given that spatiotopic coding could play a key role in maintaining visual stability, we look for evidence of spatiotopic coding after retinal image transformations caused by head tilt. To this end, we measure the strength of the positional motion aftereffect (PMAE; previously shown to be largely spatiotopic after saccades) after large head tilts. We find that, as with eye movements, the spatial selectivity of the PMAE has a large spatiotopic component after head rotation.

