Retinal and Extraretinal Information in Movement Perception: How to Invert the Filehne Illusion

Perception ◽  
1987 ◽  
Vol 16 (3) ◽  
pp. 299-308 ◽  
Author(s):  
Alexander H Wertheim

During a pursuit eye movement made in darkness across a small stationary stimulus, the stimulus is perceived as moving in the opposite direction to the eyes. This so-called Filehne illusion is usually explained by assuming that during pursuit eye movements the extraretinal signal (which informs the visual system about eye velocity so that retinal image motion can be interpreted) falls short. A study is reported in which the concept of an extraretinal signal is replaced by the concept of a reference signal, which serves to inform the visual system about the velocity of the retinae in space. Reference signals are evoked in response to eye movements, but also in response to any stimulation that may yield a sensation of self-motion, because during self-motion the retinae also move in space. Optokinetic stimulation should therefore affect reference signal size. To test this prediction the Filehne illusion was investigated with stimuli of different optokinetic potentials. As predicted, with briefly presented stimuli (no optokinetic potential) the usual illusion always occurred. With longer stimulus presentation times the magnitude of the illusion was reduced when the spatial frequency of the stimulus was reduced (increased optokinetic potential). At very low spatial frequencies (strongest optokinetic potential) the illusion was inverted. The significance of the conclusion, that reference signal size increases with increasing optokinetic stimulus potential, is discussed. It appears to explain many visual illusions, such as the movement aftereffect and center–surround induced motion, and it may bridge the gap between direct Gibsonian and indirect inferential theories of motion perception.
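A minimal numerical sketch of the reference-signal account described above, assuming a simple additive model in which perceived velocity is retinal image velocity plus a reference signal; the gain values are hypothetical and serve only to illustrate how an undersized reference signal yields the classical illusion while an inflated one (strong optokinetic potential) inverts it:

```python
# Sketch of the reference-signal account of the Filehne illusion.
# The additive model and the gain values are illustrative assumptions.

def perceived_velocity(eye_velocity, target_velocity, reference_gain):
    retinal = target_velocity - eye_velocity      # retinal image velocity
    reference = reference_gain * eye_velocity     # estimate of retinal velocity in space
    return retinal + reference                    # perceived target velocity in space

eye = 10.0      # rightward pursuit at 10 deg/s
target = 0.0    # physically stationary stimulus

for gain, label in [(0.8, "undersized reference: classical Filehne illusion"),
                    (1.0, "veridical reference: no illusion"),
                    (1.2, "inflated reference: inverted illusion")]:
    v = perceived_velocity(eye, target, gain)
    print(f"gain {gain:.1f}: perceived velocity {v:+5.1f} deg/s  ({label})")
```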

2014 ◽  
Vol 112 (10) ◽  
pp. 2470-2480 ◽  
Author(s):  
Andre Kaminiarz ◽  
Anja Schlack ◽  
Klaus-Peter Hoffmann ◽  
Markus Lappe ◽  
Frank Bremmer

The patterns of optic flow seen during self-motion can be used to determine the direction of one's own heading. Tracking eye movements, which typically occur during everyday life, alter this task, since they add further retinal image motion and (predictably) distort the retinal flow pattern. Humans employ both visual and nonvisual (extraretinal) information to solve a heading task in such cases. Likewise, it has been shown that neurons in the monkey medial superior temporal area (area MST) use both signals during the processing of self-motion information. In this article we report that neurons in the macaque ventral intraparietal area (area VIP) use visual information derived from the distorted flow patterns to encode heading during (simulated) eye movements. We recorded responses of VIP neurons to simple radial flow fields and to distorted flow fields that simulated self-motion plus eye movements. In 59% of the cases, cell responses compensated for the distortion and kept the same heading selectivity irrespective of different simulated eye movements. In addition, response modulations during real compared with simulated eye movements were smaller, consistent with reafferent signaling involved in the processing of the visual consequences of eye movements in area VIP. We conclude that the motion selectivities found in area VIP, like those in area MST, provide a way to successfully analyze and use flow fields during self-motion and simultaneous tracking movements.
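A brief sketch, under simplifying assumptions (pinhole projection, points at a single depth, a purely horizontal simulated eye rotation), of how such an eye movement distorts the radial flow field produced by forward self-motion; all values are hypothetical:

```python
# Hypothetical flow-field sketch: forward translation produces radial expansion,
# and a simulated yaw eye rotation adds a rotational field that distorts it.
import numpy as np

f = 1.0                                    # focal length (arbitrary units)
xs, ys = np.meshgrid(np.linspace(-0.5, 0.5, 5), np.linspace(-0.5, 0.5, 5))
Z = 10.0                                   # depth of the simulated points
Tz = 1.0                                   # forward translation speed
omega = 0.05                               # simulated yaw eye velocity (rad/s)

u_trans = xs * Tz / Z                      # radial (expansion) flow
v_trans = ys * Tz / Z
u_rot = -omega * (f + xs**2 / f)           # flow added by the eye rotation
v_rot = -omega * xs * ys / f

u, v = u_trans + u_rot, v_trans + v_rot    # distorted retinal flow
print("pure expansion at the image centre:", u_trans[2, 2], v_trans[2, 2])
print("distorted flow at the image centre:", u[2, 2], v[2, 2])
```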


Perception ◽  
1998 ◽  
Vol 27 (10) ◽  
pp. 1153-1176 ◽  
Author(s):  
Michiteru Kitazaki ◽  
Shinsuke Shimojo

The visual system perceptually decomposes retinal image motion into three basic components that are ecologically significant for the human observer: object depth, object motion, and self-motion. Using this conceptual framework, we explored the relationship between them by examining perception of objects' depth order and relative motion during self-motion. We found that the visual system obeyed what we call the parallax-sign constraint, but in different ways depending on whether the retinal image motion contained a velocity discontinuity or not. When a velocity discontinuity existed (e.g., dynamic occlusion, transparent motion), subjects perceptually interpreted image motion as relative motion between surfaces with a stable depth order. When no velocity discontinuity existed, they perceived depth-order reversal but no relative motion. The results suggest that the existence of surface discontinuity, or of multiple surfaces indexed by velocity discontinuity, inhibits the reversal of global depth order.
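As a rough illustration of the parallax-sign relation the constraint builds on, the sketch below (hypothetical units and values) shows that, with fixation maintained during a lateral head movement, points nearer than fixation slip opposite to the head while farther points slip with it:

```python
# Illustrative only: horizontal retinal slip of a point at depth z_point while
# the eye tracks a fixation point at depth z_fix during lateral head motion.

def parallax_velocity(head_velocity, z_point, z_fix):
    # Translational image motion (~ -V/Z) plus the rotational compensation
    # that keeps the fixation point stable (~ +V/z_fix).
    return head_velocity * (1.0 / z_fix - 1.0 / z_point)

V, z_fix = 5.0, 100.0                    # rightward head motion; fixation depth
for z, label in [(50.0, "nearer than fixation"), (200.0, "farther than fixation")]:
    print(f"{label}: slip {parallax_velocity(V, z, z_fix):+.3f} (arbitrary units)")
```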


Perception ◽  
1998 ◽  
Vol 27 (8) ◽  
pp. 937-949 ◽  
Author(s):  
Takanao Yajima ◽  
Hiroyasu Ujike ◽  
Keiji Uchikawa

The two main questions addressed in this study were (a) what effect does yoking the relative expansion and contraction (EC) of retinal images to forward and backward head movements have on the resultant magnitude and stability of perceived depth, and (b) how does this relative EC image motion interact with the depth cues of motion parallax? Relative EC image motion was produced by moving a small CCD camera toward and away from the stimulus, two random-dot surfaces separated in depth, in synchrony with the observers' forward and backward head movements. Observers viewed the stimuli monocularly, on a helmet-mounted display, while moving their heads at various velocities, including zero velocity. The results showed that (a) the magnitude of perceived depth was smaller with smaller head velocities (<10 cm/s), including the zero-head-velocity condition, than with a larger velocity (10 cm/s), and (b) perceived depth, when the motion parallax and EC image motion cues were presented simultaneously, was equal to the greater of the two depths produced from either cue alone. The results suggest a role for nonvisual information about self-motion in perceiving depth.
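A rough sketch of the EC cue described above, assuming that a surface at depth Z expands at relative rate v/Z when the head (or camera) approaches it at velocity v; depths and velocities are hypothetical:

```python
# Illustrative EC cue: two surfaces separated in depth expand at different
# relative rates, and the difference grows with head velocity.

def relative_expansion_rate(head_velocity_cm_s, depth_cm):
    """Fractional image expansion per second of a surface at the given depth."""
    return head_velocity_cm_s / depth_cm

near, far = 50.0, 70.0                       # hypothetical surface depths (cm)
for v in (2.0, 5.0, 10.0):                   # head velocities (cm/s)
    diff = relative_expansion_rate(v, near) - relative_expansion_rate(v, far)
    print(f"head velocity {v:4.1f} cm/s: EC rate difference {diff:.4f} per second")
```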


Perception ◽  
1996 ◽  
Vol 25 (7) ◽  
pp. 797-814 ◽  
Author(s):  
Michiteru Kitazaki ◽  
Shinsuke Shimojo

The generic-view principle (GVP) states that, given a 2-D image, the visual system interprets it as a generic view of a 3-D scene when possible. The GVP was applied to 3-D motion perception to show how the visual system decomposes retinal image motion into three components of 3-D motion: stretch/shrinkage, rotation, and translation. First, the optical process generating retinal image motion was analyzed, and predictions were made based on the GVP in the inverse-optical process. Then experiments were conducted in which the subject judged perception of stretch/shrinkage, rotation in depth, and translation in depth for a moving bar stimulus. The retinal-image parameters (2-D stretch/shrinkage, 2-D rotation, and 2-D translation) were manipulated categorically and exhaustively. The results were highly consistent with the predictions. The GVP appears to offer a broad and general framework for understanding the ambiguity-solving process in motion perception. Its relationship to other constraints, such as rigidity, is discussed.
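A small sketch of the 2-D image-motion parameters manipulated in such experiments, assuming the stimulus is a bar defined by its two endpoints; it recovers 2-D translation (midpoint shift), 2-D rotation (orientation change), and 2-D stretch/shrinkage (length ratio) between two frames:

```python
# Illustrative decomposition of a bar's 2-D image motion into translation,
# rotation, and stretch/shrinkage; inputs are the bar endpoints in two frames.
import math

def bar_motion_components(p1, p2, q1, q2):
    translation = ((q1[0] + q2[0] - p1[0] - p2[0]) / 2,
                   (q1[1] + q2[1] - p1[1] - p2[1]) / 2)      # midpoint shift
    rotation = (math.atan2(q2[1] - q1[1], q2[0] - q1[0])
                - math.atan2(p2[1] - p1[1], p2[0] - p1[0]))  # orientation change
    stretch = (math.hypot(q2[0] - q1[0], q2[1] - q1[1])
               / math.hypot(p2[0] - p1[0], p2[1] - p1[1]))   # length ratio
    return translation, rotation, stretch

# A shrinking, slightly displaced bar: 2-D shrinkage that a generic-view account
# would attribute to translation in depth rather than an accidental 3-D event.
print(bar_motion_components((-1, 0), (1, 0), (-0.8, 0.1), (0.8, 0.1)))
```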


2017 ◽  
Vol 284 (1864) ◽  
pp. 20171622 ◽  
Author(s):  
Shane P. Windsor ◽  
Graham K. Taylor

Flying insects use compensatory head movements to stabilize gaze. Like other optokinetic responses, these movements can reduce image displacement, motion and misalignment, and simplify the optic flow field. Because gaze is imperfectly stabilized in insects, we hypothesized that compensatory head movements serve to extend the range of velocities of self-motion that the visual system encodes. We tested this by measuring head movements in hawkmoths Hyles lineata responding to full-field visual stimuli of differing oscillation amplitudes, oscillation frequencies and spatial frequencies. We used frequency-domain system identification techniques to characterize the head's roll response, and simulated how this would have affected the output of the motion vision system, modelled as a computational array of Reichardt detectors. The moths' head movements were modulated to allow encoding of both fast and slow self-motion, effectively quadrupling the working range of the visual system for flight control. By using its own output to drive compensatory head movements, the motion vision system thereby works as an adaptive sensor, which will be especially beneficial in nocturnal species with inherently slow vision. Studies of the ecology of motion vision must therefore consider the tuning of motion-sensitive interneurons in the context of the closed-loop systems in which they function.
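A minimal sketch of a correlation-type (Reichardt) detector array of the kind used here to model the motion vision system; the time constant, spatial sampling, and stimulus are hypothetical:

```python
# Opponent Reichardt detectors on a 1-D image sequence (time x space).
# Each detector correlates the low-pass-filtered (delayed) signal of one sample
# point with the undelayed signal of its neighbour, in both directions.
import numpy as np

def reichardt_output(frames, tau=1.0, dt=0.1):
    alpha = dt / (tau + dt)                       # first-order low-pass coefficient
    delayed = np.zeros_like(frames)
    for t in range(1, frames.shape[0]):           # recursive low-pass filter
        delayed[t] = delayed[t - 1] + alpha * (frames[t] - delayed[t - 1])
    rightward = delayed[:, :-1] * frames[:, 1:]   # delayed A x undelayed B
    leftward = frames[:, :-1] * delayed[:, 1:]    # undelayed A x delayed B
    return (rightward - leftward).mean(axis=1)    # opponent, direction-selective

# A rightward-drifting sinusoidal grating gives a positive mean output.
x = np.arange(64)
frames = np.array([np.sin(2 * np.pi * (x - 2.0 * t) / 16.0) for t in range(100)])
print(reichardt_output(frames)[-1])
```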


1993 ◽  
Vol 10 (4) ◽  
pp. 643-652 ◽  
Author(s):  
Roland Kern ◽  
Hans-Ortwin Nalbach ◽  
Dezsö Varjú

Walking crabs move their eyes to compensate for retinal image motion only during rotation and not during translation, even when both components are superimposed. We tested in the rock crab, Pachygrapsus marmoratus, whether this ability to decompose optic flow may arise from topographical interactions of local movement detectors. We recorded the optokinetic eye movements of the rock crab in a sinusoidally oscillating drum which carried two 10-deg wide black vertical stripes. Their azimuthal separation varied from 20 to 180 deg, and each two-stripe configuration was presented at different azimuthal positions around the crab. In general, the responses became stronger the more widely the stripes were separated. Furthermore, the response amplitude also depended strongly on the azimuthal positions of the stripes. We propose a model with excitatory interactions between pairs of movement detectors that quantitatively accounts for the enhanced optokinetic responses to widely separated textured patches in the visual field that move in phase. The interactions take place both within one eye and, predominantly, between both eyes. We conclude that these interactions aid in the detection of rotation.
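A highly simplified sketch of the kind of model proposed here: local movement detectors whose in-phase outputs facilitate one another, with a facilitation weight that grows with the azimuthal separation of the stimulated positions; the weight function and gain are hypothetical:

```python
# Illustrative pairwise facilitation between movement detectors; in-phase signals
# from widely separated positions enhance the pooled optokinetic response.

def optokinetic_response(outputs, positions_deg, gain=0.01):
    response = sum(outputs)                           # linear pooling
    for i in range(len(outputs)):
        for j in range(i + 1, len(outputs)):
            if outputs[i] * outputs[j] > 0:           # in-phase motion signals
                separation = abs(positions_deg[i] - positions_deg[j])
                response += gain * separation * min(outputs[i], outputs[j])
    return response

# Two stripes moving in phase: the response grows with their separation.
for sep in (20, 90, 180):
    r = optokinetic_response([1.0, 1.0], [0.0, sep])
    print(f"separation {sep:3d} deg: response {r:.2f}")
```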


2016 ◽  
Vol 116 (3) ◽  
pp. 1449-1467 ◽  
Author(s):  
HyungGoo R. Kim ◽  
Xaq Pitkow ◽  
Dora E. Angelaki ◽  
Gregory C. DeAngelis

Sensory input reflects events that occur in the environment, but multiple events may be confounded in sensory signals. For example, under many natural viewing conditions, retinal image motion reflects some combination of self-motion and movement of objects in the world. To estimate one stimulus event and ignore others, the brain can perform marginalization operations, but the neural bases of these operations are poorly understood. Using computational modeling, we examine how multisensory signals may be processed to estimate the direction of self-motion (i.e., heading) and to marginalize out effects of object motion. Multisensory neurons represent heading based on both visual and vestibular inputs and come in two basic types: “congruent” and “opposite” cells. Congruent cells have matched heading tuning for visual and vestibular cues and have been linked to perceptual benefits of cue integration during heading discrimination. Opposite cells have mismatched visual and vestibular heading preferences and are ill-suited for cue integration. We show that decoding a mixed population of congruent and opposite cells substantially reduces errors in heading estimation caused by object motion. In addition, we present a general formulation of an optimal linear decoding scheme that approximates marginalization and can be implemented biologically by simple reinforcement learning mechanisms. We also show that neural response correlations induced by task-irrelevant variables may greatly exceed intrinsic noise correlations. Overall, our findings suggest a general computational strategy by which neurons with mismatched tuning for two different sensory cues may be decoded to perform marginalization operations that dissociate possible causes of sensory inputs.
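A toy scalar sketch of the decoding intuition (not the authors' model): congruent cells combine visual and vestibular heading signals with matched signs, opposite cells with mismatched signs, so a linear readout of both cell types can cancel a visual bias caused by object motion. All values are hypothetical:

```python
# Toy illustration with scalar "signals"; a real decoder operates on tuned
# population responses, not on heading estimates directly.

true_heading = 10.0          # deg
object_bias = 5.0            # bias that a moving object adds to the visual signal
visual = true_heading + object_bias
vestibular = true_heading

congruent = visual + vestibular        # matched visual/vestibular signs
opposite = vestibular - visual         # visual tuning effectively inverted

print("congruent-only estimate:", congruent / 2.0)                 # 12.5 deg, biased
print("mixed-population estimate:", (congruent + opposite) / 2.0)  # 10.0 deg
```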

