Chapter 3. Where do we look, what do we see, what do we talk about

In Chapter 3 we compare how verbal and non-verbal visual information is processed. The questions we address are: How do readers integrate text and figure information when reading and understanding verbal and non-verbal patterns, namely one and the same text in verbal format and as infographics? How does the way humans perceive visual information determine the way they express it in natural language? How does verbalization affect oculomotor behavior in visual processing? Our results support the assumption of the Cognitive Theory of Multimedia Learning that integrating verbal and pictorial information with each other (a polycode text) helps learners understand and memorize the text and makes comprehension easier. We demonstrate the advantages and disadvantages of infographics (graphical visual representations of complex information) and verbal text. We also discuss the relationship between visual processing of images and their verbalization. On the one hand, the characteristics of eye movements when looking at an image determine its subsequent verbal description: the more fixations are made and the longer the gaze is directed to a certain area of the image, the more words are dedicated to that area in the following description. On the other hand, verbalization of a previously seen image affects the parameters of eye movements when re-viewing the same image, resulting in the appearance of the ambient processing pattern (short fixations and long saccades), while re-viewing without verbalization results in the focal processing pattern (longer fixations and shorter saccades). The results obtained open up prospects for further research on visual perception and can also be used in computer vision models.
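The ambient/focal distinction described above can be made concrete as a simple labeling rule over fixation-saccade pairs. This is a minimal illustrative sketch, not the chapter's method; the 180 ms duration threshold and 5° saccade-amplitude threshold are assumptions chosen for illustration.

```python
# Sketch: label fixation-saccade pairs as "ambient" (short fixations,
# long saccades) or "focal" (long fixations, short saccades), following
# the distinction described in the abstract. Thresholds are illustrative
# assumptions, not values taken from the study.

def classify_pattern(fixation_ms, saccade_deg,
                     dur_thresh_ms=180.0, amp_thresh_deg=5.0):
    """Label one fixation-saccade pair."""
    if fixation_ms < dur_thresh_ms and saccade_deg > amp_thresh_deg:
        return "ambient"
    if fixation_ms >= dur_thresh_ms and saccade_deg <= amp_thresh_deg:
        return "focal"
    return "mixed"

def dominant_pattern(pairs):
    """Majority label over a viewing episode of (fixation_ms, saccade_deg) pairs."""
    labels = [classify_pattern(f, s) for f, s in pairs]
    return max(set(labels), key=labels.count)
```

For example, a viewing episode dominated by 100-150 ms fixations followed by 7-8° saccades would be labeled "ambient" under these assumed thresholds.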

eLife, 2020, Vol 9
Author(s): Angie M Michaiel, Elliott TT Abe, Cristopher M Niell

Many studies of visual processing are conducted in constrained conditions such as head- and gaze-fixation, and therefore less is known about how animals actively acquire visual information in natural contexts. To determine how mice target their gaze during natural behavior, we measured head and bilateral eye movements in mice performing prey capture, an ethological behavior that engages vision. We found that the majority of eye movements are compensatory for head movements, thereby serving to stabilize the visual scene. During movement, however, periods of stabilization are interspersed with non-compensatory saccades that abruptly shift gaze position. Notably, these saccades do not preferentially target the prey location. Rather, orienting movements are driven by the head, with the eyes following in coordination to sequentially stabilize and recenter the gaze. These findings relate eye movements in the mouse to other species, and provide a foundation for studying active vision during ethological behaviors in the mouse.


2020
Author(s): Han Zhang, Nicola C Anderson, Kevin Miller

Recent studies have shown that mind-wandering (MW) is associated with changes in eye movement parameters, but have not explored how MW affects the sequential pattern of eye movements involved in making sense of complex visual information. Eye movements naturally unfold over time, and this process may reveal novel information about cognitive processing during MW. The current study used Recurrence Quantification Analysis (RQA; Anderson, Bischof, Laidlaw, Risko, & Kingstone, 2013) to describe the pattern of refixations (fixations directed to previously inspected regions) during MW. Participants completed a real-world scene encoding task and responded to thought probes assessing intentional and unintentional MW. Both types of MW were associated with worse memory of the scenes. Importantly, RQA showed that scanpaths during unintentional MW were more repetitive than during on-task episodes, as indicated by a higher recurrence rate and more stereotypical fixation sequences. This increased repetitiveness suggests an adaptive response to processing failures through re-examining previous locations. Moreover, it contributed to fixations focusing on a smaller spatial scale of the stimuli. Finally, we were also able to validate several traditional measures: both intentional and unintentional MW were associated with fewer and longer fixations; eye blinking increased numerically during both types of MW, but the difference was only significant for unintentional MW. Overall, the results advance our understanding of how visual processing is affected during MW by highlighting the sequential aspect of eye movements.
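The recurrence rate reported in the abstract above counts how often fixations return close to previously fixated locations. A minimal sketch of that core computation, assuming fixations are given as (x, y) coordinates and using an arbitrary 1-unit recurrence radius (the study's actual parameters and software are not specified here):

```python
import math

def recurrence_rate(fixations, radius=1.0):
    """Fraction of fixation pairs that land within `radius` of each other.

    `fixations` is a sequence of (x, y) positions in order of occurrence.
    A pair (i, j), i < j, is "recurrent" if their distance <= radius.
    """
    n = len(fixations)
    if n < 2:
        return 0.0
    recurrent = 0
    for i in range(n):
        for j in range(i + 1, n):
            xi, yi = fixations[i]
            xj, yj = fixations[j]
            if math.hypot(xi - xj, yi - yj) <= radius:
                recurrent += 1
    return recurrent / (n * (n - 1) / 2)
```

A scanpath that repeatedly revisits the same region yields a higher rate; a scanpath that keeps moving to new locations yields a rate near zero. Full RQA additionally derives measures such as determinism and laminarity from the recurrence matrix, which this sketch omits.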


Author(s): Angie M. Michaiel, Elliott T.T. Abe, Cristopher M. Niell

Many studies of visual processing are conducted in unnatural conditions, such as head- and gaze-fixation. As this radically limits natural exploration of the visual environment, there is much less known about how animals actively use their sensory systems to acquire visual information in natural, goal-directed contexts. Recently, prey capture has emerged as an ethologically relevant behavior that mice perform without training, and that engages vision for accurate orienting and pursuit. However, it is unclear how mice target their gaze during such natural behaviors, particularly since, in contrast to many predatory species, mice have a narrow binocular field and lack foveate vision that would entail fixing their gaze on a specific point in the visual field. Here we measured head and bilateral eye movements in freely moving mice performing prey capture. We find that the majority of eye movements are compensatory for head movements, thereby acting to stabilize the visual scene. During head turns, however, these periods of stabilization are interspersed with non-compensatory saccades that abruptly shift gaze position. Analysis of eye movements relative to the cricket position shows that the saccades do not preferentially select a specific point in the visual scene. Rather, orienting movements are driven by the head, with the eyes following in coordination to sequentially stabilize and recenter the gaze. These findings help relate eye movements in the mouse to other species, and provide a foundation for studying active vision during ethological behaviors in the mouse.


2021, Vol 7 (30), pp. eabf2218
Author(s): Richard Schweitzer, Martin Rolfs

Rapid eye movements (saccades) incessantly shift objects across the retina. To establish object correspondence, the visual system is thought to match surface features of objects across saccades. Here, we show that an object’s intrasaccadic retinal trace—a signal previously considered unavailable to visual processing—facilitates this matchmaking. Human observers made saccades to a cued target in a circular stimulus array. Using high-speed visual projection, we swiftly rotated this array during the eyes’ flight, displaying continuous intrasaccadic target motion. Observers’ saccades landed between the target and a distractor, prompting secondary saccades. Independently of the availability of object features, which we controlled tightly, target motion increased the rate and reduced the latency of gaze-correcting saccades to the initial presaccadic target, in particular when the target’s stimulus features incidentally gave rise to efficient motion streaks. These results suggest that intrasaccadic visual information informs the establishment of object correspondence and jump-starts gaze correction.


1997, Vol 20 (4), pp. 758-763
Author(s): Dana H. Ballard, Mary M. Hayhoe, Polly K. Pook, Rajesh P. N. Rao

The majority of commentators agree that the time to focus on embodiment has arrived and that the disembodied approach that was taken from the birth of artificial intelligence is unlikely to provide a satisfactory account of the special features of human intelligence. In our Response, we begin by addressing the general comments and criticisms directed at the emerging enterprise of deictic and embodied cognition. In subsequent sections we examine the topics that constitute the core of the commentaries: embodiment mechanisms, dorsal and ventral visual processing, eye movements, and learning.


2021, Vol 11 (1)
Author(s): Mohammad R. Saeedpour-Parizi, Shirin E. Hassan, Ariful Azad, Kelly J. Baute, Tayebeh Baniasadi, ...

This study examined how people choose their path to a target, and the visual information they use for path planning. Participants avoided stepping outside an avoidance margin between a stationary obstacle and the edge of a walkway as they walked to a bookcase and picked up a target from different locations on a shelf. We provided an integrated explanation for path selection by combining avoidance margin, deviation angle, and distance to the obstacle. We found that the combination of right and left avoidance margins accounted for 26%, deviation angle accounted for 39%, and distance to the obstacle accounted for 35% of the variability in decisions about the direction taken to circumvent an obstacle on the way to a target. Gaze analysis findings showed that participants directed their gaze to minimize the uncertainty involved in successful task performance and that gaze sequence changed with obstacle location. In some cases, participants chose to circumvent the obstacle on a side for which the gaze time was shorter, and the path was longer than for the opposite side. Our results of a path selection judgment test showed that the threshold for participants abandoning their preferred side for circumventing the obstacle was a target location of 15 cm to the left of the bookcase shelf center.


2021, Vol 125 (5), pp. 1552-1576
Author(s): David Souto, Dirk Kerzel

People’s eyes are directed at objects of interest with the aim of acquiring visual information. However, processing this information is constrained in capacity, requiring task-driven and salience-driven attentional mechanisms to select a few among the many available objects. A wealth of behavioral and neurophysiological evidence has demonstrated that visual selection and the motor selection of saccade targets rely on shared mechanisms. This coupling supports the premotor theory of visual attention put forth more than 30 years ago, postulating visual selection as a necessary stage in motor selection. In this review, we examine to what extent the coupling of visual and motor selection observed with saccades is replicated during ocular tracking. Ocular tracking combines catch-up saccades and smooth pursuit to foveate a moving object. We find evidence that ocular tracking requires visual selection of the speed and direction of the moving target, but the position of the motion signal may not coincide with the position of the pursuit target. Further, visual and motor selection can be spatially decoupled when pursuit is initiated (open-loop pursuit). We propose that a main function of coupled visual and motor selection is to serve the coordination of catch-up saccades and pursuit eye movements. A simple race-to-threshold model is proposed to explain the variable coupling of visual selection during pursuit, catch-up and regular saccades, while generating testable predictions. We discuss pending issues, such as disentangling visual selection from preattentive visual processing and response selection, and the pinpointing of visual selection mechanisms, which have begun to be addressed in the neurophysiological literature.
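A race-to-threshold model of the kind mentioned in the review above can be sketched as two noisy accumulators racing to a decision bound, with the first to cross determining the response and its latency. This is a generic illustration, not the authors' specific model; the drift rates, threshold, and noise level below are arbitrary assumptions.

```python
import random

def race_to_threshold(rate_a, rate_b, threshold=1.0, dt=0.001,
                      noise=0.02, seed=None, max_steps=100_000):
    """Simulate two racing accumulators (e.g., catch-up saccade vs.
    pursuit program); return (winner, decision_time_in_seconds).

    Each accumulator drifts at its rate plus Gaussian noise scaled
    by sqrt(dt). Parameters are illustrative assumptions.
    """
    rng = random.Random(seed)
    a = b = 0.0
    for step in range(1, max_steps + 1):
        a += rate_a * dt + rng.gauss(0.0, noise) * dt ** 0.5
        b += rate_b * dt + rng.gauss(0.0, noise) * dt ** 0.5
        if a >= threshold or b >= threshold:
            return ("A" if a >= b else "B", step * dt)
    return (None, max_steps * dt)
```

Such a model generates testable predictions in the spirit the review describes: a stronger motion signal (higher drift rate) should win the race more often and with shorter, less variable latencies.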


1981, Vol 53 (2), pp. 623-632
Author(s): Cris W. Johnston, Francis J. Pirozzolo

Eye movements were recorded using an infra-red reflection method from two female subjects while they took the Peabody Picture Vocabulary Test. The purpose of the study was to investigate the manner in which oculomotor behavior may characterize an individual's verbal-cognitive ability, and to study processing and evaluating visual information. Correct responses on the test were best associated with relatively high fixation density, i.e., frequency, for the chosen item compared to alternative selections. When the chosen item was an incorrect response, the most predictive measure was that the chosen item received the longest duration of fixation. Less useful measures studied were mean duration of fixation and total time spent looking at each alternative (gaze time). Upon exposure of the test items, the initial fixation was on the left and the initial direction of eye movement was clockwise. Based on a sequential “scan pattern” analysis of location, frequency, and duration of fixation, other evidence of psycho-oculomotor strategies was not observed. It is suggested that a trade-off may exist between the various parameters of oculomotor behavior and that perhaps by some unique combination and analysis of selected measures it would be possible to further elucidate how eye movements reflect cognitive processes.


2010, Vol 3 (5)
Author(s): Mauro Cherubini, Marc-Antoine Nüssli, Pierre Dillenbourg

Little is known about the interplay between deixis and eye movements in remote collaboration. This paper presents quantitative results from an experiment where participant pairs had to collaborate at a distance using chat tools that differed in the way messages could be enriched with spatial information from the map in the shared workspace. We studied how the availability of what we defined as an Explicit Referencing mechanism (ER) affected the coordination of the eye movements of the participants. The manipulation of the availability of ER did not produce any significant difference in gaze coupling. However, we found a primary relation between the pairs' recurrence of eye movements and their task performance. Implications for design are discussed.


2015, Vol 32
Author(s): David Melcher, Maria Concetta Morrone

A basic principle in visual neuroscience is the retinotopic organization of neural receptive fields. Here, we review behavioral, neurophysiological, and neuroimaging evidence for nonretinotopic processing of visual stimuli. A number of behavioral studies have shown perception depending on object or external-space coordinate systems, in addition to retinal coordinates. Both single-cell neurophysiology and neuroimaging have provided evidence for the modulation of neural firing by gaze position and processing of visual information based on craniotopic or spatiotopic coordinates. Transient remapping of the spatial and temporal properties of neurons contingent on saccadic eye movements has been demonstrated in visual cortex, as well as frontal and parietal areas involved in saliency/priority maps, and is a good candidate to mediate some of the spatial invariance demonstrated by perception. Recent studies suggest that spatiotopic selectivity depends on a low spatial resolution system of maps that operates over a longer time frame than retinotopic processing and is strongly modulated by high-level cognitive factors such as attention. The interaction of an initial and rapid retinotopic processing stage, tied to new fixations, and a longer lasting but less precise nonretinotopic level of visual representation could underlie the perception of both a detailed and a stable visual world across saccadic eye movements.

