The Roles of Different Spatial Frequency Channels in Real-World Visual Motion Perception

2018 ◽  
Author(s):  
Cong Shi ◽  
Shrinivas Pundlik ◽  
Gang Luo

Speed perception is an important function that our visual system performs in many everyday tasks. Psychophysical studies have examined the relationship between spatial frequency, temporal frequency, and speed in human subjects, and the role of vision impairment in speed perception has also been examined previously. In this work, we examine the interrelationship between speed, spatial frequency, low-vision conditions, and the type of input motion stimulus in determining motion perception accuracy. For this purpose, we propose a computational model of speed perception and evaluate it on custom-generated natural and stochastic sequences, simulating low-vision conditions (low-pass filtering at different cutoff frequencies) as well as complementary vision conditions (high-pass versions at the same cutoff frequencies). Our results show that low-frequency components are critical for accurate speed perception, whereas high frequencies do not play an important role in speed estimation. Because perception of low frequencies may be spared when visual acuity is lost, speed perception in the simulated low-vision conditions was not impaired relative to the normal-vision condition. We also found significant differences between natural and stochastic stimuli, notably higher speed estimation error for stochastic stimuli than for natural sequences, underscoring the value of natural stimuli in future psychophysical studies of speed perception.
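As a rough illustration of the filtering manipulation described above (not the authors' code), the following Python sketch splits each frame of a sequence into a low-pass "simulated low vision" version and the complementary high-pass version at a given cutoff; the `split_by_cutoff` helper and its Butterworth-style rolloff are assumptions made for illustration.

```python
# Minimal sketch: simulated "low vision" keeps only content below a cutoff
# (cycles/image); the complementary condition keeps only content above it.
import numpy as np

def split_by_cutoff(frame, cutoff_cpi, order=4):
    """Return (low_pass, high_pass) versions of a grayscale frame.

    frame      : 2D numpy array (grayscale image)
    cutoff_cpi : cutoff in cycles per image (hypothetical units choice)
    """
    h, w = frame.shape
    fy = np.fft.fftfreq(h) * h          # cycles per image, vertical
    fx = np.fft.fftfreq(w) * w          # cycles per image, horizontal
    radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)

    # Smooth Butterworth-style low-pass gain to avoid ringing.
    lp_gain = 1.0 / (1.0 + (radius / cutoff_cpi) ** (2 * order))

    spectrum = np.fft.fft2(frame)
    low = np.real(np.fft.ifft2(spectrum * lp_gain))
    high = np.real(np.fft.ifft2(spectrum * (1.0 - lp_gain)))
    return low, high

# Example: filter every frame of a stand-in stochastic sequence at several cutoffs.
rng = np.random.default_rng(0)
sequence = rng.standard_normal((16, 128, 128))
for cutoff in (4, 8, 16):                        # cycles/image
    filtered = [split_by_cutoff(f, cutoff) for f in sequence]
```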

2017 ◽  
Vol 284 (1858) ◽  
pp. 20170673 ◽  
Author(s):  
Irene Senna ◽  
Cesare V. Parise ◽  
Marc O. Ernst

Unlike vision, the mechanisms underlying auditory motion perception are poorly understood. Here we describe an auditory motion illusion revealing a novel cue to auditory speed perception: the temporal frequency of amplitude modulation (AM-frequency), typical for rattling sounds. Naturally, corrugated objects sliding across each other generate rattling sounds whose AM-frequency tends to directly correlate with speed. We found that AM-frequency modulates auditory speed perception in a highly systematic fashion: moving sounds with higher AM-frequency are perceived as moving faster than sounds with lower AM-frequency. Even more interestingly, sounds with higher AM-frequency also induce stronger motion aftereffects. This reveals the existence of specialized neural mechanisms for auditory motion perception, which are sensitive to AM-frequency. Thus, in spatial hearing, the brain successfully capitalizes on the AM-frequency of rattling sounds to estimate the speed of moving objects. This tightly parallels previous findings in motion vision, where spatio-temporal frequency of moving displays systematically affects both speed perception and the magnitude of the motion aftereffects. Such an analogy with vision suggests that motion detection may rely on canonical computations, with similar neural mechanisms shared across the different modalities.
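To make the stimulus logic concrete, here is a hedged Python sketch of a rattling-like sound whose AM-frequency scales with simulated speed; the `rattling_sound` helper, the raised-sine envelope, and the `cycles_per_meter` constant are illustrative assumptions, not the authors' stimulus code. The paper's point is only that AM-frequency and speed covary for corrugated surfaces sliding across each other.

```python
# Illustrative sketch: noise carrier amplitude-modulated at a rate
# proportional to simulated object speed (a "rattling" sound).
import numpy as np

def rattling_sound(speed_m_s, duration_s=1.0, fs=44100,
                   cycles_per_meter=40.0):
    """Noise carrier modulated at an AM-frequency proportional to speed."""
    am_hz = speed_m_s * cycles_per_meter            # e.g., ridges per metre
    t = np.arange(int(duration_s * fs)) / fs
    carrier = np.random.default_rng(1).standard_normal(t.size)
    envelope = 0.5 * (1.0 + np.sin(2 * np.pi * am_hz * t))  # raised sine
    return envelope * carrier

fast = rattling_sound(2.0)   # higher AM-frequency -> perceived as faster
slow = rattling_sound(0.5)
```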


2020 ◽  
Author(s):  
Reuben Rideaux ◽  
Andrew E Welchman

Visual motion perception underpins behaviours ranging from navigation to depth perception and grasping. Our limited access to biological systems constrains our understanding of how motion is processed within the brain. Here we explore properties of motion perception in biological systems by training a neural network (‘MotionNetxy’) to estimate the velocity of image sequences. The network recapitulates key characteristics of motion processing in biological brains, and we use our complete access to its structure to explore and understand motion (mis)perception at the computational, neural, and perceptual levels. First, we find that the network recapitulates the biological response to reverse-phi motion in terms of direction. We further find that it overestimates the speed of slow reverse-phi motion while underestimating the speed of fast reverse-phi motion, because of the correlation between reverse-phi motion and the spatiotemporal receptive fields tuned to motion in opposite directions. Second, we find that the distributions of spatiotemporal tuning properties in the V1 and MT layers of the network are similar to those observed in biological systems. We then show that, compared with MT units tuned to fast speeds, those tuned to slow speeds primarily receive input from V1 units tuned to high spatial frequency and low temporal frequency. Third, we find a positive correlation between the pattern-motion and speed selectivity of MT units. Finally, we show that the network captures human underestimation of low-coherence motion stimuli, and that this is due to the pooling of noise and signal motion. These findings provide biologically plausible explanations for well-known phenomena and produce concrete predictions for future psychophysical and neurophysiological experiments.
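As a schematic of the kind of two-stage architecture described (a bank of "V1" spatiotemporal filters feeding an "MT"-like pooling stage, read out as a velocity estimate), here is a minimal PyTorch sketch; the layer sizes, kernel sizes, and the `TinyMotionNet` name are hypothetical, and the published MotionNet architecture may differ.

```python
# Schematic two-stage motion network: "V1" linear spatiotemporal filters
# over a short frame stack, an "MT"-like pooling stage, and a linear
# readout of (vx, vy). Illustrative only.
import torch
import torch.nn as nn

class TinyMotionNet(nn.Module):
    def __init__(self, n_frames=6, n_v1=32, n_mt=64):
        super().__init__()
        self.v1 = nn.Conv2d(n_frames, n_v1, kernel_size=9, padding=4)
        self.mt = nn.Conv2d(n_v1, n_mt, kernel_size=7, stride=4)
        self.readout = nn.Linear(n_mt, 2)    # (vx, vy) in pixels/frame

    def forward(self, clips):                # clips: (batch, frames, H, W)
        v1 = torch.relu(self.v1(clips))      # rectified, motion-energy-like
        mt = torch.relu(self.mt(v1))         # pooled over a larger region
        pooled = mt.mean(dim=(2, 3))         # global average over space
        return self.readout(pooled)

net = TinyMotionNet()
velocity = net(torch.randn(8, 6, 64, 64))    # -> (8, 2) velocity estimates
```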


1998 ◽  
Vol 79 (3) ◽  
pp. 1481-1493 ◽  
Author(s):  
Michael R. Ibbotson ◽  
Colin W. G. Clifford ◽  
Richard F. Mark

Ibbotson, Michael R., Colin W. G. Clifford, and Richard F. Mark. Adaptation to visual motion in directional neurons of the nucleus of the optic tract. J. Neurophysiol. 79: 1481–1493, 1998. Extracellular recordings of action potentials were made from directional neurons in the nucleus of the optic tract (NOT) of the wallaby, Macropus eugenii, while stimulating with moving sine-wave gratings. When a grating was moved at a constant velocity in the preferred direction through a neuron's receptive field, the firing rate increased rapidly and then declined exponentially until reaching a steady-state level. This decline in response is called motion adaptation. The rate of adaptation increased as the temporal frequency of the drifting grating increased, up to the frequency that elicited the maximum firing rate; beyond this frequency, the adaptation rate decreased. When the adapting grating's spatial frequency was varied such that response magnitudes were significantly different, the maximum adaptation rate occurred at similar temporal frequencies. Hence the temporal frequency of the stimulus is a major parameter controlling the rate of adaptation. In most neurons, the temporal frequency response functions measured after adaptation were shifted to the right compared with those obtained in the unadapted state. Further insight into the adaptation process was obtained by measuring the responses of the cells to grating displacements within one frame (10.23 ms). Such impulsive stimulus movements of less than one-quarter of a cycle elicited a response that rose rapidly to a maximum and then declined exponentially to the spontaneous firing rate over several seconds. The level of adaptation was quantified by observing how the time constants of these exponentials varied as a function of the temporal frequency of a previously presented moving grating. When plotted as functions of adapting frequency, the time constants formed a U-shaped curve. The shortest time constants occurred at similar temporal frequencies regardless of changes in spatial frequency, even when the change in spatial frequency resulted in large differences in response magnitude during the adaptation period. The strongest adaptation occurred when the adapting stimulus moved in the neuron's preferred direction. Stimuli that moved in the antipreferred direction or flickered also had an adapting influence on the responses to subsequent impulsive movements, but the effect was far smaller than that elicited by preferred-direction adaptation. Adaptation in one region of the receptive field did not affect the responses elicited by subsequent stimulation in nonoverlapping regions of the field. Adaptation is a significant property of NOT neurons and probably acts to expand their temporal resolving power.
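The exponential description above maps directly onto a simple fit. As an illustrative sketch (with synthetic data, not the recorded traces), the following fits r(t) = r_ss + (r_0 - r_ss) exp(-t/tau) to a firing-rate trace and recovers the time constant tau; repeating the fit across adapting temporal frequencies would trace out the U-shaped curve reported above.

```python
# Fit an exponential adaptation curve to a (synthetic) firing-rate trace.
import numpy as np
from scipy.optimize import curve_fit

def adapting_rate(t, r_ss, r_0, tau):
    """r(t) = r_ss + (r_0 - r_ss) * exp(-t / tau)"""
    return r_ss + (r_0 - r_ss) * np.exp(-t / tau)

t = np.linspace(0, 10, 200)                        # seconds
true = adapting_rate(t, r_ss=20.0, r_0=80.0, tau=1.5)
rate = true + np.random.default_rng(2).normal(0, 2, t.size)  # noisy spikes/s

popt, _ = curve_fit(adapting_rate, t, rate, p0=(10.0, 60.0, 1.0))
r_ss_hat, r_0_hat, tau_hat = popt
print(f"estimated time constant: {tau_hat:.2f} s")
```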


2018 ◽  
Vol 30 (12) ◽  
pp. 3355-3392 ◽  
Author(s):  
Jonathan Vacher ◽  
Andrew Isaac Meso ◽  
Laurent U. Perrinet ◽  
Gabriel Peyré

A common practice to account for psychophysical biases in vision is to frame them as consequences of a dynamic process relying on optimal inference with respect to a generative model. The study presented here details the complete formulation of such a generative model intended to probe visual motion perception with a dynamic texture model. It is derived in a set of axiomatic steps constrained by biological plausibility. We extend previous contributions by detailing three equivalent formulations of this texture model. First, the composite dynamic textures are constructed by the random aggregation of warped patterns, which can be viewed as three-dimensional gaussian fields. Second, these textures are cast as solutions to a stochastic partial differential equation (sPDE). This essential step enables real-time, on-the-fly texture synthesis using time-discretized autoregressive processes. It also allows for the derivation of a local motion-energy model, which corresponds to the log likelihood of the probability density. The log likelihoods are essential for the construction of a Bayesian inference framework. We use the dynamic texture model to psychophysically probe speed perception in humans using zoom-like changes in the spatial frequency content of the stimulus. The human data replicate previous findings showing perceived speed to be positively biased by spatial frequency increments. A Bayesian observer who combines a gaussian likelihood centered at the true speed and a spatial frequency dependent width with a “slow-speed prior” successfully accounts for the perceptual bias. More precisely, the bias arises from a decrease in the observer's likelihood width estimated from the experiments as the spatial frequency increases. Such a trend is compatible with the trend of the dynamic texture likelihood width.
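To make the observer model concrete, here is a minimal Gaussian sketch of the Bayesian account described above: a likelihood centred on the true speed whose width decreases with spatial frequency, combined with a zero-mean "slow-speed" prior. The `perceived_speed` helper and the specific width function sigma0 / (1 + alpha * sf) are hypothetical stand-ins for the experimentally estimated trend.

```python
# Gaussian prior x Gaussian likelihood observer: the posterior mean is a
# precision-weighted shrinkage of the true speed toward zero (slow prior).
import numpy as np

def perceived_speed(v_true, sf, prior_sigma=8.0, sigma0=4.0, alpha=0.5):
    """Posterior-mean speed estimate under a zero-mean slow-speed prior."""
    sigma_l = sigma0 / (1.0 + alpha * sf)    # likelihood narrows with sf
    w = prior_sigma**2 / (prior_sigma**2 + sigma_l**2)
    return w * v_true                        # prior mean is 0

for sf in (0.5, 1.0, 2.0, 4.0):              # cycles/deg
    print(sf, perceived_speed(10.0, sf))     # estimate rises with sf
```

As the spatial frequency increases, the likelihood narrows, the prior pulls less, and the perceived speed rises toward the true speed, reproducing the positive bias of perceived speed with spatial frequency increments.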


2019 ◽  
Author(s):  
Andrew D Zaharia ◽  
Robbe L T Goris ◽  
J Anthony Movshon ◽  
Eero P Simoncelli

Motion selectivity in primary visual cortex (V1) is approximately separable in orientation, spatial frequency, and temporal frequency (“frequency-separable”). Models for area MT neurons posit that their selectivity arises by combining direction-selective V1 afferents whose tuning is organized around a tilted plane in the frequency domain, specifying a particular direction and speed (“velocity-separable”). This construction explains “pattern direction selective” MT neurons, which are velocity-selective but relatively invariant to spatial structure, including spatial frequency, texture, and shape. Surprisingly, when tested with single drifting gratings, most MT neurons' responses are fit equally well by models with either form of separability. However, responses to plaids (sums of two moving gratings) tend to be better described as velocity-separable, especially for pattern neurons. We conclude that direction selectivity in MT is primarily computed by summing V1 afferents, but pattern-invariant velocity tuning for complex stimuli may arise from local, recurrent interactions.

Significance Statement: How do sensory systems build representations of complex features from simpler ones? Visual motion representation in cortex is a well-studied example: the direction and speed of moving objects, regardless of shape or texture, are computed from the local motion of oriented edges. Here we quantify tuning properties based on single-unit recordings in primate area MT, then fit a novel, generalized model of motion computation. The model reveals that two core properties of MT neurons, speed tuning and invariance to local edge orientation, result from a single organizing principle: each MT neuron combines afferents that represent edge motions consistent with a common velocity, much as V1 simple cells combine thalamic inputs consistent with a common orientation.
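The "velocity-separable" idea rests on a frequency-domain constraint: gratings consistent with a single image velocity (vx, vy) lie on the plane wt = -(vx * wx + vy * wy). The toy sketch below weights V1-like afferents by their distance from that plane; the `velocity_plane_weight` helper and its Gaussian bandwidth are illustrative assumptions, not the fitted model from the paper.

```python
# Weight an afferent by how close its spatiotemporal frequency peak
# (wx, wy, wt) sits to the plane defined by a preferred velocity (vx, vy).
import numpy as np

def velocity_plane_weight(wx, wy, wt, vx, vy, bandwidth=0.5):
    """Gaussian falloff with distance from the velocity plane."""
    distance = wt + vx * wx + vy * wy        # zero exactly on the plane
    return np.exp(-0.5 * (distance / bandwidth) ** 2)

# A grating drifting at velocity v has temporal frequency wt = -(vx*wx + vy*wy).
wx, wy = 2.0, 0.0                            # cycles/deg
vx, vy = 4.0, 0.0                            # deg/s (preferred velocity)
print(velocity_plane_weight(wx, wy, -vx * wx, vx, vy))   # ~1: on the plane
print(velocity_plane_weight(wx, wy, 0.0, vx, vy))        # ~0: off the plane
```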


2019 ◽  
Vol 5 (1) ◽  
pp. 247-268 ◽  
Author(s):  
Peter Thier ◽  
Akshay Markanday

The cerebellar cortex is a crystal-like structure consisting of an almost endless repetition of a canonical microcircuit that applies the same computational principle to different inputs. The output of this transformation is broadcasted to extracerebellar structures by way of the deep cerebellar nuclei. Visually guided eye movements are accommodated by different parts of the cerebellum. This review primarily discusses the role of the oculomotor part of the vermal cerebellum [the oculomotor vermis (OMV)] in the control of visually guided saccades and smooth-pursuit eye movements. Both types of eye movements require the mapping of retinal information onto motor vectors, a transformation that is optimized by the OMV, considering information on past performance. Unlike the role of the OMV in the guidance of eye movements, the contribution of the adjoining vermal cortex to visual motion perception is nonmotor and involves a cerebellar influence on information processing in the cerebral cortex.
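The error-driven optimization attributed to the OMV can be illustrated with a classic trial-by-trial gain-adaptation rule; the sketch below is a hypothetical toy (the learning rate, noise level, and linear gain model are all assumptions), meant only to make "considering information on past performance" concrete.

```python
# Toy saccadic gain adaptation: nudge the gain by the post-saccadic error
# on each trial, so past performance shapes future motor commands.
import numpy as np

gain = 1.0               # motor command = gain * retinal target eccentricity
eta = 0.05               # hypothetical learning rate
rng = np.random.default_rng(3)

for trial in range(200):
    target = 10.0                                   # degrees
    saccade = gain * target + rng.normal(0, 0.3)    # motor noise
    error = (target - saccade) / target             # normalized error
    gain += eta * error                             # adapt from past trials

print(f"adapted gain: {gain:.3f}")                  # settles near 1.0
```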

