Eye Guidance in Reading: Fixation Locations within Words

Perception ◽  
1979 ◽  
Vol 8 (1) ◽  
pp. 21-30 ◽  
Author(s):  
Keith Rayner

Three broad categories of models of eye movement guidance in reading are described. According to one category, eye movements in reading are not under stimulus or cognitive control; the other two categories hold that cognitive activities or stimulus characteristics are involved in eye guidance. In this study, a number of descriptive analyses of eye movements in reading were carried out. These analyses dealt with fixation locations on letters within words of various lengths, conditional probabilities that a word will be fixated given that a prior word was or was not fixated, and average saccade length as a function of the length of the word to the right of the fixated word. The results of these analyses supported models suggesting that the decision about where to look next while reading is made on a nonrandom basis.

1981 ◽  
Vol 33 (4) ◽  
pp. 351-373 ◽  
Author(s):  
Keith Rayner ◽  
Alexander Pollatsek

In three experiments, subjects read text as their eye movements were monitored and display changes in the text were made contingent upon the eye movements. In one experiment, a window of text moved in synchrony with the eyes. In one condition, the size of the window was constant from fixation to fixation, while in the other condition the size of the window varied from fixation to fixation. In the other experiments, a visual mask was presented at the end of each saccade which delayed the onset of the text, and the length of the delay was varied. The pattern of eye movements was influenced by both the size of the window and the delay of the onset of the text, even when the window size or text delay varied from fixation to fixation. However, there was also evidence that saccade length was affected by the size of the window on the prior fixation and that certain decisions to move the eyes are programmed either before the fixation begins or during the fixation, but without regard to the text being fixated. The results thus provide support for a mixed control model of eye movements in reading, in which decisions about when and where to move the eyes are based on information from the current fixation, the prior fixations, and possibly other sources as well.


2018 ◽  
Author(s):  
Fatima Maria Felisberti

Visual field asymmetries (VFA) in the encoding of groups of faces, rather than individual faces, have rarely been investigated. Here, eye movements (dwell time (DT) and fixations (Fix)) were recorded during the encoding of three groups of four faces tagged with cheating, cooperative, or neutral behaviours. Faces in each of the three groups were placed in the upper left (UL), upper right (UR), lower left (LL), or lower right (LR) quadrants. Face recognition was equally high in the three groups. In contrast, the proportions of DT and Fix were higher for faces in the left than the right hemifield and in the upper rather than the lower hemifield. The overall time spent looking at the UL quadrant was higher than in the other quadrants. The findings are relevant to the understanding of VFA in face processing, especially of groups of faces, and might be linked to environmental cues and/or reading habits.


2020 ◽  
Author(s):  
Šimon Kucharský ◽  
Daan Roelof van Renswoude ◽  
Maartje Eusebia Josefa Raijmakers ◽  
Ingmar Visser

Describing, analyzing and explaining patterns in eye movement behavior is crucial for understanding visual perception. Further, eye movements are increasingly used to inform cognitive process models. In this article, we start by reviewing basic characteristics of and desiderata for models of eye movements. Specifically, we argue that there is a need for models combining spatial and temporal aspects of eye-tracking data (i.e., fixation durations and fixation locations), that formal models derived from concrete theoretical assumptions are needed to inform our empirical research, and that custom statistical models are useful for detecting specific empirical phenomena that are to be explained by said theory. In this article, we develop a conceptual model of eye movements, or specifically, of fixation durations and fixation locations, and from it derive a formal statistical model, meeting our goal of crafting a model useful in both the theoretical and empirical research cycle. We demonstrate the use of the model on an example of infant natural scene viewing, to show that the model is able to explain different features of the eye movement data, and to showcase how to identify that the model needs to be adapted if it does not agree with the data. We conclude with a discussion of potential future avenues for formal eye movement models.
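To make the idea of a joint spatial-temporal fixation model concrete, here is a minimal generative sketch: fixation durations drawn from a right-skewed gamma distribution (a common choice for duration data) and fixation locations produced by Gaussian saccade steps clipped to the display. This is a generic illustration, not the authors' model; every distribution and parameter here is an assumption.

```python
import numpy as np

def simulate_scanpath(n_fixations=20, frame=(800, 600), seed=0):
    """Generate a toy scanpath: fixation (x, y) locations plus durations.

    Durations come from a gamma distribution (right-skewed, as empirical
    fixation durations tend to be); each saccade displaces gaze by a
    Gaussian step, clipped to the frame. Parameters are illustrative only.
    """
    rng = np.random.default_rng(seed)
    w, h = frame
    # Right-skewed durations in milliseconds (shape/scale are assumptions).
    durations = rng.gamma(shape=2.0, scale=150.0, size=n_fixations)
    # Start near the centre of the frame and take Gaussian saccade steps.
    xs, ys = [w / 2.0], [h / 2.0]
    for _ in range(n_fixations - 1):
        xs.append(float(np.clip(xs[-1] + rng.normal(0, 120.0), 0, w)))
        ys.append(float(np.clip(ys[-1] + rng.normal(0, 90.0), 0, h)))
    return np.column_stack([xs, ys]), durations

locations, durations = simulate_scanpath()
```

Fitting such a model to real data (e.g., by maximum likelihood over the gamma and step parameters) is what turns this kind of generative sketch into the custom statistical model the abstract describes.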


1999 ◽  
Vol 81 (5) ◽  
pp. 2538-2557 ◽  
Author(s):  
Chiju Chen-Huang ◽  
Robert A. McCrea

Effects of viewing distance on the responses of vestibular neurons to combined angular and linear vestibular stimulation

The firing behavior of 59 horizontal canal–related secondary vestibular neurons was studied in alert squirrel monkeys during the combined angular and linear vestibuloocular reflex (CVOR). The CVOR was evoked by positioning the animal’s head 20 cm in front of, or behind, the axis of rotation during whole body rotation (0.7, 1.9, and 4.0 Hz). The effect of viewing distance was studied by having the monkeys fixate small targets that were either near (10 cm) or far (1.3–1.7 m) from the eyes. Most units (50/59) were sensitive to eye movements and were monosynaptically activated after electrical stimulation of the vestibular nerve (51/56 tested). The responses of eye movement–related units were significantly affected by viewing distance. The viewing distance–related change in response gain of many eye-head-velocity and burst-position units was comparable with the change in eye movement gain. On the other hand, position-vestibular-pause units were approximately half as sensitive to changes in viewing distance as were eye movements. The sensitivity of units to the linear vestibuloocular reflex (LVOR) was estimated by subtraction of angular vestibuloocular reflex (AVOR)–related responses recorded with the head in the center of the axis of rotation from CVOR responses. During far target viewing, unit sensitivity to linear translation was small, but during near target viewing the firing rate of many units was strongly modulated. The LVOR responses and viewing distance–related LVOR responses of most units were nearly in phase with linear head velocity. The signals generated by secondary vestibular units during voluntary cancellation of the AVOR and CVOR were comparable. However, unit sensitivity to linear translation and angular rotation were not well correlated during either far or near target viewing. Unit LVOR responses were also not well correlated with their sensitivity to smooth pursuit eye movements or their sensitivity to viewing distance during the AVOR. On the other hand, there was a significant correlation between static eye position sensitivity and sensitivity to viewing distance. We conclude that secondary horizontal canal–related vestibuloocular pathways are an important part of the premotor neural substrate that produces the LVOR. The otolith sensory signals that appear on these pathways have been spatially and temporally transformed to match the angular eye movement commands required to stabilize images at different distances. We suggest that this transformation may be performed by the circuits related to temporal integration of the LVOR.


2012 ◽  
Vol 25 (0) ◽  
pp. 171-172 ◽
Author(s):  
Fumio Mizuno ◽  
Tomoaki Hayasaka ◽  
Takami Yamaguchi

Humans have the capability to flexibly adapt to visual stimulation, such as spatial inversion, in which a person wears glasses that display images upside down for long periods of time (Ewert, 1930; Snyder and Pronko, 1952; Stratton, 1887). To investigate the feasibility of extending vision and the flexible adaptation of the human visual system under binocular rivalry, we developed a system that provides a human user with the artificial oculomotor ability to control each eye independently in arbitrary directions; we named the system Virtual Chameleon, after the chameleon's independently moving eyes (Mizuno et al., 2010, 2011). Users of the system were able to actively control their visual axes by manipulating 3D sensors held in each hand, to watch independent fields of view presented to the left and right eyes, and to look around as chameleons do. Although the independent fields of view provided to the user were thought to be formed by eye movement control corresponding to human pursuit movements, the system had no control mechanisms for performing saccadic or compensatory movements, as numerous animals including humans do. Fluctuations in dominance and suppression under binocular rivalry are irregular, but it is possible to bias these fluctuations by boosting the strength of one rival image over the other (Blake and Logothetis, 2002). It was assumed that visual stimuli induced by various eye movements affect predominance. Therefore, in this research, we focused on the influence of eye movement patterns on visual perception under binocular rivalry, and implemented functions to produce saccadic movements in Virtual Chameleon.


1993 ◽  
Vol 46 (1) ◽  
pp. 51-82 ◽  
Author(s):  
Harold Pashler ◽  
Mark Carrier ◽  
James Hoffman

Five dual-task experiments required a speeded manual choice response to a tone in close temporal proximity to a saccadic eye movement task. In Experiment 1, subjects made a saccade towards a single transient; in Experiment 2, a red and a green colour patch were presented to left and right, and the saccade was made to whichever patch was the pre-specified target colour. There was some slowing of the eye movement, but neither task combination showed typical dual-task interference (the “psychological refractory effect”). However, more interference was observed when the direction of the saccade depended on whether a central colour patch was red or green, or when the saccade was directed towards the numerically higher of two large digits presented to the left and the right. Experiment 5 examined a vocal second task, for comparison. The findings might reflect the fact that eye movements can be directed by two separate brain systems, the superior colliculus and the frontal eye fields; commands from the latter but not the former may be delayed by simultaneous unrelated sensorimotor tasks.


Perception ◽  
1989 ◽  
Vol 18 (2) ◽  
pp. 257-264 ◽  
Author(s):  
Catherine Neary ◽  
Arnold J Wilkins

When a rapid eye movement (saccade) is made across material displayed on cathode ray tube monitors with short-persistence phosphors, various perceptual phenomena occur. The phenomena do not occur when the monitor has a long-persistence phosphor. These phenomena were observed for certain spatial arrays, their possible physiological basis noted, and their effect on the control of eye movements examined. When the display consisted simply of two dots, and a saccade was made from one to the other, a transient ghost image was seen just beyond the destination target. When the display consisted of vertical lines, tilting and displacement of the lines occurred. The phenomena were more intrusive for the latter display and there was a significant increase in the number of corrective saccades. These results are interpreted in terms of the effects of fluctuating illumination (and hence phosphor persistence) on saccadic suppression.


1999 ◽  
Vol 81 (5) ◽  
pp. 2340-2346 ◽  
Author(s):  
Carl R. Olson ◽  
Sonya N. Gettner

Macaque SEF neurons encode object-centered directions of eye movements regardless of the visual attributes of instructional cues

Neurons in the supplementary eye field (SEF) of the macaque monkey exhibit object-centered direction selectivity in the context of a task in which a spot flashed on the right or left end of a sample bar instructs a monkey to make an eye movement to the right or left end of a target bar. To determine whether SEF neurons are selective for the location of the cue, as defined relative to the sample bar, or, alternatively, for the location of the target, as defined relative to the target bar, we carried out recording while monkeys performed a new task. In this task, the color of a cue-spot instructed the monkey to which end of the target bar an eye movement should be made (blue for the left end and yellow for the right end). Object-centered direction selectivity persisted under this condition, indicating that neurons are selective for the location of the target relative to the target bar. However, object-centered signals developed at a longer latency (by ∼200 ms) when the instruction was conveyed by color than when it was conveyed by the location of a spot on a sample bar.


1978 ◽  
Vol 47 (3) ◽  
pp. 767-776 ◽  
Author(s):  
John A. Allen ◽  
Stephen R. Schroeder ◽  
Patricia G. Ball

Two groups of 10 subjects tracked a segment of the Aetna training film, Traffic Strategy, six times by manipulating the controls of an Aetna Drivo-Trainer station. One group was composed of licensed drivers, the other, nonlicensed. No significant differences were found with respect to: (1) use of the accelerator, (2) frequency of eye movements, (3) length of eye movements, (4) fixation errors, (5) driving errors, or (6) the relationship of control actions to driving errors. Differences were noted with respect to: (1) steering and braking, (2) the effects of practice on control actions and driving errors, and (3) the relationship of amplitude of eye movement to control actions and driving errors. The results are discussed in terms of possible differences in search strategy between experienced and inexperienced drivers.


1996 ◽  
Vol 49 (4) ◽  
pp. 940-949 ◽  
Author(s):  
Mary M. Smyth

We have previously argued that rehearsal in spatial working memory is interfered with by spatial attention shifts rather than simply by movements to locations in space (Smyth & Scholey, 1994). It is possible, however, that the stimuli intended to induce attention shifts in our experiments also induced eye movements and interfered either with an overt eye movement rehearsal strategy or with a covert one. In the first experiment reported here, subjects fixated while they maintained a sequence of spatial items in memory before recalling them in order. Fixation did not affect recall, but auditory spatial stimuli presented during the interval did decrease performance, and it was further decreased if the stimuli were categorized as coming from the right or the left. A second experiment investigated the effects of auditory spatial stimuli to which no response was ever required and found that these did not interfere with performance, indicating that it is the spatial salience of targets that leads to interference. This interference from spatial input in the absence of any overt movement of the eyes or limbs is interpreted in terms of shifts of spatial attention or spatial monitoring, which Morris (1989) has suggested affects spatial encoding and which our findings suggest also affects reactivation in rehearsal.

