Discourse-mediation of the mapping between language and the visual world: Eye movements and mental representation

Cognition ◽  
2009 ◽  
Vol 111 (1) ◽  
pp. 55-71 ◽  
Author(s):  
Gerry T.M. Altmann ◽  
Yuki Kamide


Author(s):
Michael K. Tanenhaus

Recently, eye movements have become a widely used response measure for studying spoken language processing in both adults and children, in situations where participants comprehend and generate utterances about a circumscribed “Visual World” while fixation is monitored, typically using a free-view eye-tracker. Psycholinguists now use the Visual World eye-movement method to study both language production and language comprehension, in studies that run the gamut of current topics in language processing. Eye movements are a response measure of choice for addressing many classic questions about spoken language processing in psycholinguistics. This article reviews the burgeoning Visual World literature on language comprehension, highlighting some of the seminal studies and examining how the Visual World approach has contributed new insights to our understanding of spoken word recognition, parsing, reference resolution, and interactive conversation. It considers some of the methodological issues that come to the fore when psycholinguists use eye movements to examine spoken language comprehension.
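As context for the analyses this literature reports, the standard dependent measure in Visual World studies is the proportion of fixations to each display object over time, relative to the onset of a spoken word. The sketch below is purely illustrative: it assumes a hypothetical sample log format (trial, time from word onset in ms, fixated object) and is not any particular study's pipeline.

```python
# Illustrative sketch only: a minimal fixation-proportion analysis of the kind
# used in Visual World studies. The log format (one row per eye-tracker sample:
# trial, time from word onset in ms, fixated object) is an assumed example,
# not any specific lab's format.
from collections import defaultdict

BIN_MS = 50  # aggregate raw samples into 50 ms time bins

def fixation_proportions(samples, objects=("target", "competitor", "distractor")):
    """samples: iterable of (trial_id, time_ms, fixated_object) tuples.
    Returns {object: {bin_start_ms: proportion of samples on that object}}."""
    counts = defaultdict(lambda: defaultdict(int))  # bin -> object -> n samples
    totals = defaultdict(int)                       # bin -> total n samples
    for _trial, t, obj in samples:
        b = (t // BIN_MS) * BIN_MS
        totals[b] += 1
        if obj in objects:
            counts[b][obj] += 1
    return {
        obj: {b: counts[b][obj] / totals[b] for b in sorted(totals)}
        for obj in objects
    }

# Example: three samples from one trial, 20 ms apart, all in the first bin.
demo = [(1, 0, "distractor"), (1, 20, "target"), (1, 40, "target")]
print(fixation_proportions(demo)["target"])  # {0: 0.666...}
```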


2011 ◽  
Vol 137 (2) ◽  
pp. 172-180 ◽  
Author(s):  
Anne Pier Salverda ◽  
Meredith Brown ◽  
Michael K. Tanenhaus

Nature ◽  
1958 ◽  
Vol 182 (4644) ◽  
pp. 1214-1216 ◽  
Author(s):  
R. L. Gregory

2017 ◽  
Vol 21 (2) ◽  
pp. 251-264 ◽  
Author(s):  
Aine Ito ◽  
Martin Corley ◽  
Martin J. Pickering

We used the visual world eye-tracking paradigm to investigate the effects of cognitive load on predictive eye movements in L1 (Experiment 1) and L2 (Experiment 2) speakers. Participants listened to sentences in which the verb was either predictive or non-predictive of one of four objects they were viewing, and then clicked on a mentioned object. Half of the participants additionally performed a concurrent working-memory task (remembering words). Both L1 and L2 speakers looked at the target object predictively more in predictable than in non-predictable sentences when they performed the listen-and-click task alone. However, this predictability effect was delayed in participants who performed the concurrent memory task, and the pattern of results was similar in L1 and L2 speakers. Thus, both L1 and L2 speakers make predictions, but predictive eye movements require cognitive resources. The findings are compatible with the claim that L2 speakers use the same prediction mechanisms as L1 speakers.
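The delay under load reported here amounts to a shift in when the fixation curves for predictable and non-predictable sentences diverge. Below is a minimal, hypothetical sketch of that logic; the function name, the fixed margin, and the toy numbers are all assumptions (published analyses use proper statistics, e.g. cluster-based permutation tests, rather than a raw threshold).

```python
# Hedged sketch (names, margin, and toy data are assumptions): estimate when a
# predictability effect emerges, as the first time bin where target-fixation
# proportions in predictable sentences exceed a comparison curve by a margin.

def effect_onset(pred_curve, nonpred_curve, margin=0.05):
    """pred_curve / nonpred_curve: {bin_start_ms: proportion of target looks}.
    Returns the earliest shared bin where the difference exceeds `margin`,
    or None if the curves never diverge."""
    for b in sorted(set(pred_curve) & set(nonpred_curve)):
        if pred_curve[b] - nonpred_curve[b] > margin:
            return b
    return None

# Toy curves: under concurrent memory load the divergence appears later.
no_load   = {0: 0.25, 200: 0.40, 400: 0.55}
with_load = {0: 0.25, 200: 0.27, 400: 0.45}
baseline  = {0: 0.25, 200: 0.26, 400: 0.28}
print(effect_onset(no_load, baseline))    # 200
print(effect_onset(with_load, baseline))  # 400
```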


2003 ◽  
Vol 62 (2) ◽  
pp. 103-111 ◽  
Author(s):  
Jacqueline Waniek ◽  
Angela Brunstein ◽  
Anja Naumann ◽  
Josef F. Krems

Research on hypertext suggests that building a correct representation of a hypertext's structure enables users to navigate effectively within the text; the comprehension processes involved in hypertext reading therefore merit investigation. In an experimental study, we distinguished the text structure from the dimensions of a postulated coherent situation model in order to compare the two. Three electronic text versions, which varied in navigation facilities and in the visualization of text structure, were compared with respect to orientation, navigation, eye movements, and mental representation of text structure and content (the situation model). The results show that when no visualization of the text structure was available, readers reorganized their representation of the text structure towards their situation model. Navigation within the text particularly affected the mental representation of text structure and content.


Author(s):  
Fiona Mulvey

This chapter introduces the basics of eye anatomy, eye movements, and vision. It explains the concepts behind human vision in sufficient depth for the reader to understand later chapters of the book on human perception and attention, and their relationship to (and potential measurement with) eye movements. We first describe the path of light from the environment through the structures of the eye and on to the brain, as an introduction to the physiology of vision. We then describe the image registered by the eye, and the types of movements the eye makes in order to perceive the environment as a cogent whole. The chapter explains how eye movements can be thought of as the interface between the visual world and the brain, and why eye movement data can be analysed not only in terms of the environment (what is looked at) but also in terms of the brain (subjective cognitive and emotional states). These two aspects broadly define the scope and applicability of eye movement technology in research and in human-computer interaction, as covered in later sections of the book.


2017 ◽  
Vol 117 (2) ◽  
pp. 808-817 ◽  
Author(s):  
Kyriaki Mikellidou ◽  
Marco Turi ◽  
David C. Burr

Humans maintain a stable representation of the visual world effortlessly, despite constant movements of the eyes, head, and body across multiple planes. Whereas visual stability in the face of saccadic eye movements has been intensely researched, fewer studies have investigated the retinal image transformations induced by head movements, especially in the frontal plane. Unlike head rotations in the horizontal and sagittal planes, tilting the head in the frontal plane is only partially counteracted by torsional eye movements and consequently induces a distortion of the retinal image to which we seem to be completely oblivious. One possible mechanism aiding perceptual stability is an active reconstruction of a spatiotopic map of the visual world, anchored in allocentric coordinates. To explore this possibility, we measured the positional motion aftereffect (PMAE; the apparent change in position after adaptation to motion) with head tilts of ∼42° between adaptation and test (to dissociate retinal from allocentric coordinates). The aftereffect was shown to have both a retinotopic and a spatiotopic component. When tested with unpatterned Gaussian blobs rather than sinusoidal grating stimuli, the retinotopic component was greatly reduced, whereas the spatiotopic component remained. The results suggest that perceptual stability may be maintained at least partially through mechanisms involving spatiotopic coding.

NEW & NOTEWORTHY: Given that spatiotopic coding could play a key role in maintaining visual stability, we look for evidence of spatiotopic coding after retinal image transformations caused by head tilt. To this end, we measure the strength of the positional motion aftereffect (PMAE; previously shown to be largely spatiotopic after saccades) after large head tilts. We find that, as with eye movements, the spatial selectivity of the PMAE has a large spatiotopic component after head rotation.
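To make the retinotopic/spatiotopic dissociation concrete: after a head tilt, a test stimulus can be replayed either at the same screen location (spatiotopic) or at a screen location rotated along with the retinal image (retinotopic). The sketch below is an illustrative assumption, not the authors' code; the counter-roll value in particular is a placeholder for the partial torsional compensation the abstract mentions.

```python
# Illustrative sketch, not the authors' code: placing a test stimulus at the
# same retinal vs. the same world (spatiotopic) location after a head tilt.
# The 42 deg tilt matches the abstract; the counter-roll value is an assumed
# placeholder (torsion typically cancels only a small fraction of the tilt).
import math

def rotate(p, deg):
    """Rotate 2D point p = (x, y) by deg degrees about the origin (fixation)."""
    r = math.radians(deg)
    x, y = p
    return (x * math.cos(r) - y * math.sin(r),
            x * math.sin(r) + y * math.cos(r))

head_tilt = 42.0    # head roll between adaptation and test (deg)
counter_roll = 4.0  # assumed partial torsional compensation (deg)
net_image_rotation = head_tilt - counter_roll  # rotation of the retinal image

adapt_world = (10.0, 0.0)  # adaptation location in screen coordinates

# Spatiotopic test: same place in the world, regardless of head posture.
spatiotopic_test = adapt_world

# Retinotopic test: same place on the retina, so the screen location must be
# rotated along with the (net) image rotation caused by the head tilt.
retinotopic_test = rotate(adapt_world, net_image_rotation)

print(spatiotopic_test, retinotopic_test)
```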

