Attentional Effects on Adaptation of Rotary Motion in the Plane

Perception ◽  
1993 ◽  
Vol 22 (8) ◽  
pp. 947-961 ◽  
Author(s):  
Gordon L Shulman

The effect of attention on the adaptation produced by stimuli rotating in the picture plane was examined in five experiments. In experiment 1, subjects performed a task either on a rotating adapting stimulus or on an irrelevant distractor stimulus. Adaptation of a subsequent ambiguous test stimulus was greater when the adapting stimulus was attended than when the irrelevant stimulus was attended. In experiments 2, 3, and 5, two adapting stimuli rotating in opposite directions were presented and subjects attended to one or the other. The direction of rotation of the ambiguous test stimulus depended on which adapting stimulus was attended. In experiment 4, the influence of eye movements in producing adaptation in ambiguous motion displays was assessed by contrasting the adaptation produced by dual adapting stimuli rotating in the same direction with that produced by stimuli rotating in opposite directions. The adaptation effects were not predicted by eye-movement hypotheses.

Effects of viewing distance on the responses of vestibular neurons to combined angular and linear vestibular stimulation

1999 ◽  
Vol 81 (5) ◽  
pp. 2538-2557 ◽  
Author(s):  
Chiju Chen-Huang ◽  
Robert A. McCrea

The firing behavior of 59 horizontal canal–related secondary vestibular neurons was studied in alert squirrel monkeys during the combined angular and linear vestibuloocular reflex (CVOR). The CVOR was evoked by positioning the animal’s head 20 cm in front of, or behind, the axis of rotation during whole body rotation (0.7, 1.9, and 4.0 Hz). The effect of viewing distance was studied by having the monkeys fixate small targets that were either near (10 cm) or far (1.3–1.7 m) from the eyes. Most units (50/59) were sensitive to eye movements and were monosynaptically activated after electrical stimulation of the vestibular nerve (51/56 tested). The responses of eye movement–related units were significantly affected by viewing distance. The viewing distance–related change in response gain of many eye-head-velocity and burst-position units was comparable with the change in eye movement gain. On the other hand, position-vestibular-pause units were approximately half as sensitive to changes in viewing distance as were eye movements. The sensitivity of units to the linear vestibuloocular reflex (LVOR) was estimated by subtracting the angular vestibuloocular reflex (AVOR)–related responses recorded with the head at the center of the axis of rotation from the CVOR responses. During far target viewing, unit sensitivity to linear translation was small, but during near target viewing the firing rate of many units was strongly modulated. The LVOR responses and viewing distance–related LVOR responses of most units were nearly in phase with linear head velocity. The signals generated by secondary vestibular units during voluntary cancellation of the AVOR and CVOR were comparable. However, unit sensitivities to linear translation and angular rotation were not well correlated during either far or near target viewing. Unit LVOR responses were also not well correlated with their sensitivity to smooth pursuit eye movements or their sensitivity to viewing distance during the AVOR. On the other hand, there was a significant correlation between static eye position sensitivity and sensitivity to viewing distance. We conclude that secondary horizontal canal–related vestibuloocular pathways are an important part of the premotor neural substrate that produces the LVOR. The otolith sensory signals that appear on these pathways have been spatially and temporally transformed to match the angular eye movement commands required to stabilize images at different distances. We suggest that this transformation may be performed by the circuits related to temporal integration of the LVOR.
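The subtraction procedure described in this abstract (LVOR sensitivity estimated as the eccentric-rotation CVOR response minus the on-axis AVOR response) amounts to a phasor difference between two sinusoidal firing-rate modulations. The sketch below is a minimal illustration of that arithmetic, not the authors' analysis code; the function names and the example gain and phase values are assumptions.

```python
import numpy as np

def response_phasor(sensitivity, phase_deg):
    """Represent a unit's sinusoidal firing-rate modulation as a complex phasor
    (gain in spikes/s per stimulus unit, phase in degrees re: stimulus)."""
    return sensitivity * np.exp(1j * np.deg2rad(phase_deg))

def lvor_estimate(cvor_gain, cvor_phase, avor_gain, avor_phase):
    """Estimate the translation-related (LVOR) component as the phasor difference
    between the eccentric-rotation (CVOR) response and the on-axis (AVOR) response."""
    diff = response_phasor(cvor_gain, cvor_phase) - response_phasor(avor_gain, avor_phase)
    return abs(diff), np.rad2deg(np.angle(diff))

# Hypothetical example values (gain in spikes/s per deg/s, phase in degrees)
gain, phase = lvor_estimate(cvor_gain=1.4, cvor_phase=10.0,
                            avor_gain=1.0, avor_phase=5.0)
print(f"estimated LVOR sensitivity: {gain:.2f}, phase: {phase:.1f} deg")
```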


2012 ◽  
Vol 25 (0) ◽  
pp. 171-172
Author(s):  
Fumio Mizuno ◽  
Tomoaki Hayasaka ◽  
Takami Yamaguchi

Humans can flexibly adapt to visual stimulation, such as spatial inversion, in which a person wears glasses that display images upside down for long periods of time (Ewert, 1930; Snyder and Pronko, 1952; Stratton, 1887). To investigate the feasibility of extending vision and the flexible adaptation of the human visual system under binocular rivalry, we developed a system that gives a human user the artificial oculomotor ability to direct each eye independently in arbitrary directions; we named the system Virtual Chameleon, after the chameleon (Mizuno et al., 2010, 2011). Users of the system were able to actively control their visual axes by manipulating 3D sensors held in both hands, to watch independent fields of view presented to the left and right eyes, and to look around as chameleons do. Although the independent fields of view provided to the user were thought to be formed by eye-movement control corresponding to human pursuit movements, the system had no control functions for the saccadic and compensatory movements that numerous animals, including humans, perform. Fluctuations in dominance and suppression under binocular rivalry are irregular, but it is possible to bias these fluctuations by boosting the strength of one rival image over the other (Blake and Logothetis, 2002). We assumed that the visual stimuli induced by various eye movements affect predominance. In this research, we therefore focused on the influence of eye-movement patterns on visual perception under binocular rivalry and implemented functions to produce saccadic movements in Virtual Chameleon.
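As a rough illustration of the distinction drawn above, a saccade can be modelled as a step change of a camera's gaze command toward its target direction, in contrast to the smooth, pursuit-like tracking of a hand-held sensor's orientation. The sketch below is a hypothetical simplification under those assumptions; it is not the authors' implementation, and all names, thresholds, and gains are illustrative.

```python
import numpy as np

def update_gaze(gaze, target, dt, pursuit_gain=5.0, saccade_threshold_deg=10.0):
    """Advance one eye's (camera's) gaze direction toward the direction given by a
    hand-held 3D sensor. Small errors are reduced smoothly (pursuit-like);
    errors above the threshold trigger a saccade-like jump. Directions are unit
    vectors; the gain and threshold are illustrative assumptions."""
    gaze = gaze / np.linalg.norm(gaze)
    target = target / np.linalg.norm(target)
    error_deg = np.degrees(np.arccos(np.clip(np.dot(gaze, target), -1.0, 1.0)))
    if error_deg > saccade_threshold_deg:
        return target                      # saccade: jump straight to the target direction
    new = gaze + pursuit_gain * dt * (target - gaze)   # pursuit: smooth approach
    return new / np.linalg.norm(new)

# Hypothetical usage: left and right gaze directions updated independently
left_gaze = np.array([0.0, 0.0, 1.0])
left_target = np.array([0.3, 0.0, 1.0])
left_gaze = update_gaze(left_gaze, left_target, dt=0.01)
```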


Perception ◽  
1989 ◽  
Vol 18 (2) ◽  
pp. 257-264 ◽  
Author(s):  
Catherine Neary ◽  
Arnold J Wilkins

When a rapid eye movement (saccade) is made across material displayed on cathode ray tube monitors with short-persistence phosphors, various perceptual phenomena occur. The phenomena do not occur when the monitor has a long-persistence phosphor. These phenomena were observed for certain spatial arrays, their possible physiological basis noted, and their effect on the control of eye movements examined. When the display consisted simply of two dots, and a saccade was made from one to the other, a transient ghost image was seen just beyond the destination target. When the display consisted of vertical lines, tilting and displacement of the lines occurred. The phenomena were more intrusive for the latter display and there was a significant increase in the number of corrective saccades. These results are interpreted in terms of the effects of fluctuating illumination (and hence phosphor persistence) on saccadic suppression.


Perception ◽  
1979 ◽  
Vol 8 (1) ◽  
pp. 21-30 ◽  
Author(s):  
Keith Rayner

Three broad categories of models of eye movement guidance in reading are described. According to one category, eye movements in reading are not under stimulus or cognitive control; according to the other two, cognitive activities or stimulus characteristics, respectively, are involved in eye guidance. In this study a number of descriptive analyses of eye movements in reading were carried out. These analyses dealt with fixation locations on letters within words of various lengths, the conditional probability that a word will be fixated given that the prior word was or was not fixated, and average saccade length as a function of the length of the word to the right of the fixated word. The results of these analyses support models in which the decision about where to look next during reading is made on a nonrandom basis.
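The descriptive analyses listed in this abstract can be computed directly from word-level fixation records. The sketch below is a hedged illustration of two of them (landing positions by word length and the conditional fixation probability); the data structure and field names are assumptions, not the paper's materials.

```python
from collections import defaultdict

# Hypothetical fixation records, one dict per word in reading order:
# 'word_len' is the word's length in letters, 'fixated' marks whether it was
# fixated, and 'landing_pos' is the letter index of the first fixation (or None).
words = [
    {"word_len": 5, "fixated": True,  "landing_pos": 2},
    {"word_len": 3, "fixated": False, "landing_pos": None},
    {"word_len": 8, "fixated": True,  "landing_pos": 3},
]

# 1. Fixation locations on letters, grouped by word length.
landing_by_len = defaultdict(list)
for w in words:
    if w["fixated"]:
        landing_by_len[w["word_len"]].append(w["landing_pos"])

# 2. Conditional probability that word n is fixated given whether word n-1 was.
counts = {True: [0, 0], False: [0, 0]}          # prior fixated -> [fixated, total]
for prev, cur in zip(words, words[1:]):
    counts[prev["fixated"]][1] += 1
    counts[prev["fixated"]][0] += int(cur["fixated"])
p_fix_given_prev = {k: (f / t if t else None) for k, (f, t) in counts.items()}

print(dict(landing_by_len), p_fix_given_prev)
```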


1978 ◽  
Vol 47 (3) ◽  
pp. 767-776 ◽  
Author(s):  
John A. Allen ◽  
Stephen R. Schroeder ◽  
Patricia G. Ball

Two groups of 10 subjects tracked a segment of the Aetna training film, Traffic Strategy, six times by manipulating the controls of an Aetna Drivo-Trainer station. One group was composed of licensed drivers, the other, nonlicensed. No significant differences were found with respect to: (1) use of the accelerator, (2) frequency of eye movements, (3) length of eye movements, (4) fixation errors, (5) driving errors, or (6) the relationship of control actions to driving errors. Differences were noted with respect to: (1) steering and braking, (2) the effects of practice on control actions and driving errors, and (3) the relationship of amplitude of eye movement to control actions and driving errors. The results are discussed in terms of possible differences in search strategy between experienced and inexperienced drivers.


1994 ◽  
Vol 188 (1) ◽  
pp. 317-331 ◽  
Author(s):  
J Jones

1. The peculiar structure of the stomatopod eye requires it to make complicated movements. These include slow 'scans', which relate to the animal's colour vision system, as well as faster 'saccades'. 2. The myology of the eyecup is investigated and shown to consist of eight individual muscles which are divided, on kinematic grounds, into six functional groups. 3. These groups form three pairs of dominant prime movers, with each having primary control over one of the eye movement axes (longitude, latitude and bearing). This is important as it allows each rotational axis to move independently of the other two. 4. Histochemical typing reveals at least four distinct classes of fibre within each muscle. 5. The relationship between the number of types of fibre and classes of eye movement is discussed, as are the implications of coordinate prime movers for neuromuscular control.


Author(s):  
Masaru Yasuda

Abstract. Differences in perceptual processes between shading responses and achromatic-color responses were examined by comparing eye movements. The following hypotheses were tested. Hypothesis 1: Shading responses, compared to non-shading responses, would show increased fixation time directed at the inside of the area of the shading stimuli and decreased fixation time directed at the outline. Hypothesis 2: The differences in fixation times proposed in Hypothesis 1 would not be observed between achromatic-color responses and non-achromatic-color responses. Eye-movement data from 60 responses produced for the W in Card IV and D1 in Card VI were analyzed. The results indicated that shading responses had significantly longer fixation times directed at the inner area and significantly shorter fixation times directed at the outline, compared to non-shading responses. On the other hand, achromatic-color responses showed no significant main effect or interaction. These results supported Hypotheses 1 and 2.


2016 ◽  
Vol 21 (3) ◽  
pp. 403-426 ◽  
Author(s):  
Sungmook Choi

Research to date suggests that textual enhancement may positively affect the learning of multiword combinations known as collocations, but may impair recall of unenhanced text. However, the attentional mechanisms underlying such effects remain unclear. In this study, 38 undergraduate students were divided into two groups: one read a text containing typographically enhanced collocations (the ET group) and the other read the same text with unenhanced collocations (the baseline text, or BT group). While they read, participants’ eye movements were recorded with an eye-tracker. Results showed that the ET group spent significantly more time processing the target collocations and performed better than the BT group on a post-reading collocation test. However, apart from the enhanced collocations, the ET group recalled significantly less of the unenhanced text than the BT group. Further investigation of the eye-fixation data showed that the ET group spent substantially more time than the BT group processing collocations that, according to a pretest, were unfamiliar to them, whereas the two groups did not differ significantly in their processing of familiar collocations. Collectively, the results suggest that the trade-off between collocation learning and recall of unenhanced text is due to additional cognitive resources being allocated to enhanced collocations that are new to the reader.
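The fixation analysis summarised here reduces to aggregating reading times over collocation regions and splitting them by group and pretest familiarity. The sketch below is a minimal, hypothetical illustration of that aggregation; the record layout and values are assumptions.

```python
from collections import defaultdict

# Hypothetical records: total reading time (ms) on each collocation region.
fixations = [
    {"group": "ET", "familiar": False, "time_ms": 820},
    {"group": "ET", "familiar": True,  "time_ms": 410},
    {"group": "BT", "familiar": False, "time_ms": 530},
    {"group": "BT", "familiar": True,  "time_ms": 400},
]

totals = defaultdict(lambda: [0, 0])          # (group, familiar) -> [sum, n]
for f in fixations:
    key = (f["group"], f["familiar"])
    totals[key][0] += f["time_ms"]
    totals[key][1] += 1

mean_time = {key: s / n for key, (s, n) in totals.items()}
print(mean_time)   # compare ET vs BT means separately for unfamiliar and familiar items
```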


Perception ◽  
10.1068/p3411 ◽  
2002 ◽  
Vol 31 (10) ◽  
pp. 1195-1203 ◽  
Author(s):  
Gerben Rotman ◽  
Eli Brenner ◽  
Jeroen B J Smeets

Human subjects misjudge the position of a target that is flashed during a pursuit eye movement. Their judgments are biased in the direction in which the eyes are moving. We investigated whether this bias can be reduced by making the appearance of the flash more predictable. In the normal condition, subjects pursued a moving target that flashed somewhere along its trajectory. After the presentation, they indicated where they had seen the flash. The mislocalisations in this condition were compared with those in conditions in which the subjects were given information about when or where the flash would come. This information consisted of two warning flashes spaced at equal intervals before the target flash, two warning beeps spaced at equal intervals before the target flash, or a second presentation of the same stimulus. Showing the same stimulus twice significantly reduced the mislocalisation; the other conditions did not. We interpret this as indicating that it is not predictability as such that influences performance, but the fact that the target appears at a spatially cued position. This was supported by a second experiment, in which we examined whether subjects make smaller misjudgments when they have to determine the distance between a target flashed during pursuit and a reference seen previously than when they have to determine the distance between the flashed target and a reference seen afterwards. This was indeed the case, presumably because the reference provided a spatial cue for the flash when it was presented first. We conclude that a spatial cue reduces the mislocalisation of targets that are flashed during pursuit eye movements. The cue does not have to be exactly at the same position as the flash.
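The mislocalisation measure implied above is the signed error between judged and true flash positions, projected onto the direction of pursuit. The sketch below is a hedged illustration of that computation only; it is not the authors' analysis, and the function name and example coordinates are assumptions.

```python
import numpy as np

def pursuit_bias(judged_xy, true_xy, pursuit_dir):
    """Signed localisation error of a flashed target, projected onto the pursuit
    direction: positive values mean the judgment is shifted in the direction
    the eyes were moving."""
    d = np.asarray(pursuit_dir, dtype=float)
    d /= np.linalg.norm(d)
    error = np.asarray(judged_xy, dtype=float) - np.asarray(true_xy, dtype=float)
    return float(np.dot(error, d))

# Hypothetical example: eyes moving rightward, judgment shifted 0.4 deg rightward
print(pursuit_bias(judged_xy=(5.4, 0.0), true_xy=(5.0, 0.0), pursuit_dir=(1.0, 0.0)))
```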


1964 ◽  
Vol 9 (4) ◽  
pp. 336-344 ◽  
Author(s):  
E. Llewellyn Thomas ◽  
Eugene Stasiak

The eye-movement patterns of nine hospitalized psychiatric patients were compared with those of ten non-patients while looking at pictures of themselves and others. There were highly significant differences between the two groups both in mean fixation times and in the area of the body to which they paid the most attention. The mean fixation times of all the non-patients clustered closely around 0.61 seconds, whereas those of the patients ranged from 0.12 to 0.47 seconds and from 0.72 to 1.04 seconds. Non-patients looked at all body levels, but spent much more time looking at the face. Patients, on the other hand, paid much more visual attention to the body and tended to avoid the face. It is suggested that the variability in the fixation times and the tendency to avoid the face reflect a mechanism in the patient that tends to avoid receiving information about certain aspects of the external world.

