The movement of motion-defined contours can bias perceived position

2009, Vol 5 (2), pp. 270-273
Author(s): Szonya Durant, Johannes M Zanker

Illusory position shifts induced by motion suggest that motion processing can interfere with perceived position. This may be because accurate position representation is lost during successive visual processing steps. We found that complex motion patterns, which can only be extracted at a global level by pooling and segmenting local motion signals and integrating over time, can influence perceived position. We used motion-defined Gabor patterns containing motion-defined boundaries, which themselves moved over time. This ‘motion-defined motion’ induced position biases of up to 0.5°, much larger than has been found with luminance-defined motion. The size of the shift correlated with how detectable the motion-defined motion direction was, suggesting that the amount of bias increased with the magnitude of this complex directional signal. However, positional shifts did occur even when participants were not aware of the direction of the motion-defined motion. The size of the perceptual position shift was greatly reduced when the position judgement was made relative to the location of a static luminance-defined square, but not eliminated. These results suggest that motion-induced position shifts are a result of general mechanisms matching dynamic object properties with spatial location.
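
A hedged sketch of how a motion-induced position bias of this kind is commonly quantified: fit a cumulative Gaussian psychometric function to the proportion of "judged rightward of reference" responses as a function of physical probe offset, and read the bias off the 50% point (the point of subjective equality, PSE). The offsets and response proportions below are synthetic placeholders, not data from the study.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Hypothetical probe offsets (deg) and synthetic "judged rightward" proportions;
# a real experiment would supply measured response rates per offset.
offsets = np.array([-1.0, -0.5, -0.25, 0.0, 0.25, 0.5, 1.0])
p_right = np.array([0.05, 0.10, 0.25, 0.60, 0.85, 0.95, 1.00])

def psychometric(x, pse, sigma):
    """Cumulative Gaussian: probability of judging the probe 'rightward'."""
    return norm.cdf(x, loc=pse, scale=sigma)

(pse, sigma), _ = curve_fit(psychometric, offsets, p_right, p0=[0.0, 0.5])

# A non-zero PSE is the perceived position shift induced by the motion;
# its sign indicates the direction of the bias.
print(f"PSE (bias): {pse:.2f} deg, slope sigma: {sigma:.2f} deg")
```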

2017, Vol 118 (3), pp. 1542-1555
Author(s): Bastian Schledde, F. Orlando Galashan, Magdalena Przybyla, Andreas K. Kreiter, Detlef Wegener

Nonspatially selective attention is based on the notion that specific features or objects in the visual environment are effectively prioritized in cortical visual processing. Feature-based attention (FBA), in particular, is a well-studied process that dynamically and selectively addresses neurons preferentially processing the attended feature attribute (e.g., leftward motion). In everyday life, however, behavior may require high sensitivity for an entire feature dimension (e.g., motion), but experimental evidence for a feature dimension-specific attentional modulation on a cellular level is lacking. Therefore, we investigated neuronal activity in macaque motion-selective mediotemporal area (MT) in an experimental setting requiring the monkeys to detect either a motion change or a color change. We hypothesized that neural activity in MT is enhanced when the task requires perceptual sensitivity to motion. In line with this, we found that mean firing rates were higher in the motion task and that response variability and latency were lower compared with values in the color task, despite identical visual stimulation. This task-specific, dimension-based modulation of motion processing emerged already in the absence of visual input, was independent of the relation between the attended and stimulating motion direction, and was accompanied by a spatially global reduction of neuronal variability. The results provide single-cell support for the hypothesis of a feature dimension-specific top-down signal emphasizing the processing of an entire feature class. NEW & NOTEWORTHY Cortical processing serving visual perception prioritizes information according to current task requirements. We provide evidence in favor of a dimension-based attentional mechanism addressing all neurons that process visual information in the task-relevant feature domain. Behavioral tasks required monkeys to attend either color or motion, causing modulations of response strength, variability, latency, and baseline activity of motion-selective monkey area MT neurons irrespective of the attended motion direction but specific to the attended feature dimension.
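
A minimal sketch of the trial-based comparison described here: compute mean firing rate and Fano factor (spike-count variance over mean) separately for the two task conditions. The spike counts below are synthetic placeholders; the study's analyses would use recorded MT spike counts per trial.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical spike counts per trial under identical stimulation but different tasks.
counts_motion_task = rng.poisson(lam=12.0, size=200)
counts_color_task = rng.poisson(lam=9.0, size=200)

def rate_and_fano(spike_counts, window_s=0.5):
    """Mean firing rate (Hz) and Fano factor for one task condition."""
    mean_count = spike_counts.mean()
    fano = spike_counts.var(ddof=1) / mean_count
    return mean_count / window_s, fano

for label, counts in [("motion task", counts_motion_task),
                      ("color task", counts_color_task)]:
    rate, fano = rate_and_fano(counts)
    print(f"{label}: {rate:.1f} spikes/s, Fano factor {fano:.2f}")
```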


2019, Vol 6 (3), pp. 190114
Author(s): William Curran, Lee Beattie, Delfina Bilello, Laura A. Coulter, Jade A. Currie, ...

Prior experience influences visual perception. For example, extended viewing of a moving stimulus results in the misperception of a subsequent stimulus's motion direction—the direction after-effect (DAE). There has been an ongoing debate regarding the locus of the neural mechanisms underlying the DAE. We know the mechanisms are cortical, but there is uncertainty about where in the visual cortex they are located—at relatively early local motion processing stages, or at later global motion stages. We used a unikinetic plaid as an adapting stimulus, then measured the DAE experienced with a drifting random dot test stimulus. A unikinetic plaid comprises a static grating superimposed on a drifting grating of a different orientation. Observers cannot see the true motion direction of the moving component; instead they see pattern motion running parallel to the static component. The pattern motion of unikinetic plaids is encoded at the global processing level—specifically, in cortical areas MT and MST—and the local motion component is encoded earlier. We measured the direction after-effect as a function of the plaid's local and pattern motion directions. The DAE was induced by the plaid's pattern motion, but not by its component motion. This points to the neural mechanisms underlying the DAE being located at the global motion processing level, and no earlier than area MT.
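
A sketch, under illustrative parameter choices, of how a unikinetic plaid frame sequence can be synthesized: one static grating summed with one drifting grating of a different orientation, where only the drifting component's phase advances over frames. Spatial frequency, orientations, and drift rate below are assumptions, not the study's stimulus parameters.

```python
import numpy as np

def unikinetic_plaid(size=256, n_frames=60, sf=0.04, drift=0.1,
                     static_ori_deg=0.0, moving_ori_deg=70.0):
    """Unikinetic plaid movie: a static grating plus a drifting grating
    of a different orientation (illustrative parameters only)."""
    y, x = np.mgrid[0:size, 0:size].astype(float)

    def grating_axis(ori_deg):
        ori = np.deg2rad(ori_deg)
        return np.cos(ori) * x + np.sin(ori) * y

    u_static = grating_axis(static_ori_deg)
    u_moving = grating_axis(moving_ori_deg)

    frames = np.empty((n_frames, size, size))
    for t in range(n_frames):
        static_comp = np.cos(2 * np.pi * sf * u_static)                # phase never changes
        moving_comp = np.cos(2 * np.pi * (sf * u_moving + drift * t))  # phase advances each frame
        frames[t] = 0.5 + (static_comp + moving_comp) / 4.0            # map to [0, 1] luminance
    return frames

movie = unikinetic_plaid()
print(movie.shape)  # (60, 256, 256)
```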


2021, Vol 15
Author(s): Jinglin Li, Miriam Niemeier, Roland Kern, Martin Egelhaaf

In flying insects, motion adaptation has been attributed a pivotal functional role in optic-flow-based spatial vision. Ongoing motion enhances the representation, along the visual pathway, of spatial discontinuities, which manifest themselves as velocity discontinuities in the retinal optic flow pattern during translational locomotion. There is evidence for different spatial scales of motion adaptation at different visual processing stages. Motion adaptation is thought to take place, on the one hand, on a retinotopic basis at the level of local motion-detecting neurons and, on the other hand, at the level of wide-field neurons pooling the output of many of these local motion detectors. So far, local and wide-field adaptation could not be analyzed separately, since conventional motion stimuli jointly affect both adaptive processes. We therefore designed a novel stimulus paradigm based on two types of motion stimuli that had the same overall strength but differed in that one led to local motion adaptation while the other did not. We recorded intracellularly the activity of a particular wide-field motion-sensitive neuron, the horizontal system equatorial cell (HSE), in blowflies. The experimental data were interpreted with a computational model of the visual motion pathway that included the spatially pooling HSE cell. By comparing recorded and modeled HSE-cell responses induced by the two types of motion adaptation, the major characteristics of local and wide-field adaptation could be pinpointed. Wide-field adaptation was shown to depend strongly on the activation level of the cell and, thus, on the direction of motion. In contrast, local motion adaptation reduced the response gain to a similar extent independent of the direction of motion. This direction-independent adaptation differs fundamentally from the well-known adaptive adjustment of response gain to the prevailing overall stimulus level, which is considered essential for efficient signal representation by neurons with a limited operating range. Direction-independent adaptation is argued to result from the joint activity of local motion-sensitive neurons with different preferred directions and to lead to a representation of local motion direction that is independent of the overall direction of global motion.
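
A compact sketch of the modelling idea referred to here: an array of correlation-type (Hassenstein-Reichardt) local motion detectors whose signed outputs are spatially pooled by a single wide-field unit. This is a generic textbook-style detector, not the authors' specific HSE model; the filter constant and the drifting-grating stimulus are illustrative assumptions.

```python
import numpy as np

def reichardt_responses(stimulus, tau=5.0):
    """Local motion detector outputs for a (time, space) luminance array.

    Each detector correlates a low-pass-filtered ("delayed") signal from one
    photoreceptor with the undelayed signal from its neighbour, in both
    mirror-symmetric arms, and reports the difference (sign codes direction).
    """
    n_t, _ = stimulus.shape
    alpha = 1.0 / tau
    delayed = np.zeros_like(stimulus)
    for t in range(1, n_t):                     # first-order low-pass filter as the delay stage
        delayed[t] = delayed[t - 1] + alpha * (stimulus[t] - delayed[t - 1])
    # Correlate delayed input at x with direct input at x+1, minus the mirror arm
    return delayed[:, :-1] * stimulus[:, 1:] - stimulus[:, :-1] * delayed[:, 1:]

# Illustrative drifting sine-wave stimulus (time x space)
t = np.arange(200)[:, None]
x = np.arange(64)[None, :]
stimulus = np.sin(2 * np.pi * (0.1 * x - 0.02 * t))

emd = reichardt_responses(stimulus)
wide_field = emd.sum(axis=1)   # spatial pooling, analogous to a wide-field cell
print(wide_field[-5:])         # steady-state pooled response; its sign gives direction
```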


2018
Author(s): Ruyuan Zhang, Duje Tadin

Visual perceptual learning (VPL) can lead to long-lasting perceptual improvements. While the efficacy of VPL is well established, there is still considerable debate about what mechanisms underlie its effects. Much of this debate concentrates on where along the visual processing hierarchy behaviorally relevant plasticity takes place. Here, we aimed to tackle this question in the context of motion processing, a domain where links between behavior and processing hierarchy are well established. Specifically, we took advantage of an established transition from component-dependent representations at the earliest level to pattern-dependent representations at the middle level of cortical motion processing. We trained two groups of participants on the same motion direction identification task using either grating or plaid stimuli. A set of pre- and post-training tests was used to determine the degree of learning specificity and generalizability. This approach allowed us to disentangle contributions from both low- and mid-level motion processing, as well as high-level cognitive changes. We observed a complete bi-directional transfer of learning between component and pattern stimuli as long as they shared the same apparent motion direction. This result indicates learning-induced plasticity at intermediate levels of motion processing. Moreover, we found that motion VPL is specific to the trained stimulus direction, speed, size, and contrast, highlighting the pivotal role of basic visual features in VPL and diminishing the possibility of non-sensory decision-level enhancements. Taken together, our study psychophysically examined a variety of factors mediating motion VPL and demonstrated that motion VPL most likely alters visual computation at the middle stage of motion processing.
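
One conventional way to summarize the specificity-versus-transfer pattern described here is a transfer index that compares improvement on an untrained condition with improvement on the trained condition. The sketch below uses made-up threshold values purely to show the arithmetic; it is not the authors' analysis.

```python
# Hypothetical pre/post thresholds (lower = better); values are illustrative only.
pre_trained, post_trained = 20.0, 10.0       # trained stimulus (e.g., grating)
pre_untrained, post_untrained = 20.0, 11.0   # untrained stimulus (e.g., plaid, same direction)

def improvement(pre, post):
    """Fractional threshold reduction from pre- to post-training."""
    return (pre - post) / pre

transfer_index = improvement(pre_untrained, post_untrained) / improvement(pre_trained, post_trained)
print(f"Transfer index: {transfer_index:.2f}")  # ~1 means full transfer, ~0 means full specificity
```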


Author(s): Martin Lages, Suzanne Heron, Hongfang Wang

The authors discuss local constraints for the perception of three-dimensional (3D) binocular motion in a geometric-probabilistic framework. It is shown that Bayesian models of binocular 3D motion can explain perceptual bias under uncertainty and predict perceived velocity under ambiguity. The models exploit biologically plausible constraints of local motion and disparity processing in a binocular viewing geometry. Results from computer simulations and psychophysical experiments support the idea that local constraints of motion and disparity processing are combined late in the visual processing hierarchy to establish perceived 3D motion direction.
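
A minimal sketch of the Bayesian logic described here, reduced to one dimension: a Gaussian likelihood from a noisy velocity measurement is combined with a zero-mean "slow motion" prior, so the posterior estimate is biased toward slower speeds as measurement uncertainty grows. The binocular 3D geometry of the actual models is omitted, and all numbers are illustrative assumptions.

```python
def map_velocity(measured_v, sigma_likelihood, sigma_prior):
    """MAP estimate for a Gaussian likelihood combined with a zero-mean Gaussian prior."""
    w = sigma_prior**2 / (sigma_prior**2 + sigma_likelihood**2)
    return w * measured_v   # shrinks toward 0 as measurement noise grows

for sigma_l in [0.5, 2.0, 5.0]:   # increasing measurement uncertainty
    est = map_velocity(measured_v=4.0, sigma_likelihood=sigma_l, sigma_prior=2.0)
    print(f"likelihood sd {sigma_l}: perceived velocity ~ {est:.2f}")
```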


2019
Author(s): Jon Cafaro, Joel Zylberberg, Greg Field

Simple stimuli have been critical to understanding neural population codes in sensory systems. Yet it remains necessary to determine the extent to which this understanding generalizes to more complex conditions. To explore this problem, we measured how populations of direction-selective ganglion cells (DSGCs) from mouse retina respond to a global motion stimulus whose direction and speed change dynamically. We then examined the encoding and decoding of motion direction in both individual DSGCs and populations of them. Individual cells integrated global motion over ~200 ms, and responses were tuned to direction. However, responses were sparse and broadly tuned, which severely limited decoding performance from small DSGC populations. In contrast, larger populations compensated for response sparsity, enabling decoding with high temporal precision (<100 ms). At these timescales, correlated spiking was minimal and had little impact on decoding performance, unlike results obtained using simpler local motion stimuli decoded over longer timescales. We use these data to define different DSGC population decoding regimes that utilize or mitigate correlated spiking to achieve high spatial versus high temporal resolution.
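
A sketch of one standard way to decode direction from a population of direction-tuned cells, a population vector readout: each cell "votes" with a unit vector at its preferred direction, weighted by its response. The tuning model, cell count, and responses below are synthetic; the study's decoders were fit to recorded DSGC data.

```python
import numpy as np

rng = np.random.default_rng(1)

n_cells = 60
preferred = rng.uniform(0, 2 * np.pi, n_cells)     # preferred directions (rad)
true_direction = np.deg2rad(135)

# Broad, rectified cosine tuning plus Poisson noise, loosely mimicking sparse, broadly tuned DSGCs
tuning = np.clip(np.cos(true_direction - preferred), 0, None) ** 2
responses = rng.poisson(5.0 * tuning).astype(float)

# Population vector: responses weight unit vectors along each preferred direction
pop_vec = np.array([np.sum(responses * np.cos(preferred)),
                    np.sum(responses * np.sin(preferred))])
decoded = np.arctan2(pop_vec[1], pop_vec[0]) % (2 * np.pi)

print(f"true {np.degrees(true_direction):.0f} deg, decoded {np.degrees(decoded):.0f} deg")
```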


2020, Vol 38 (5), pp. 395-405
Author(s): Luca Battaglini, Federica Mena, Clara Casco

Background: To study motion perception, a stimulus consisting of a field of small, moving dots is often used. Generally, some of the dots move coherently in the same direction (signal) while the rest move randomly (noise). A percept of global coherent motion (CM) results when many different local motion signals are combined. CM computation is a complex process that requires the integrity of the middle-temporal area (MT/V5), and there is evidence that increasing the number of dots presented in the stimulus makes such computation more efficient. Objective: In this study, we explored whether anodal transcranial direct current stimulation (tDCS) over MT/V5 would increase individual performance in a CM task at a low signal-to-noise ratio (SNR, i.e., a low percentage of coherent dots) and with a target consisting of a large number of moving dots (high dot numerosity, e.g., >250 dots) relative to low dot numerosity (<60 dots), which would indicate that tDCS favours the integration of local motion signals into a single global percept (global motion). Method: Participants were asked to perform a CM detection task (two-interval forced choice, 2IFC) while they received anodal, cathodal, or sham stimulation on three different days. Results: Our findings showed no effect of cathodal tDCS relative to the sham condition. Anodal tDCS, in contrast, improved performance, but mostly when dot numerosity was high (>400 dots), i.e., when it could promote efficient global motion processing. Conclusions: The present study suggests that tDCS may be used under appropriate stimulus conditions (low SNR and high dot numerosity) to boost global motion processing efficiency, and may be useful for strengthening clinical protocols to treat visual deficits.
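
A sketch of how the signal-to-noise manipulation described here is typically implemented in a random-dot kinematogram: on each frame, a fixed proportion of dots (the coherence) steps in the signal direction while the remainder step in random directions. Dot count, speed, and field size below are illustrative assumptions, not the study's parameters.

```python
import numpy as np

rng = np.random.default_rng(2)

def update_dots(xy, coherence, signal_dir_rad, step=2.0, field=400.0):
    """Advance dot positions by one frame of a coherent-motion stimulus."""
    n = xy.shape[0]
    is_signal = rng.random(n) < coherence                      # signal vs. noise dots
    angles = np.where(is_signal, signal_dir_rad, rng.uniform(0, 2 * np.pi, n))
    xy = xy + step * np.column_stack([np.cos(angles), np.sin(angles)])
    return xy % field                                           # wrap around the aperture

dots = rng.uniform(0, 400.0, size=(500, 2))                     # high dot numerosity
for _ in range(60):                                             # one-second stimulus at 60 Hz
    dots = update_dots(dots, coherence=0.10, signal_dir_rad=0.0)  # low SNR: 10% coherence
print(dots.shape)
```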


Cephalalgia, 2011, Vol 31 (11), pp. 1199-1210
Author(s): Kathryn E Webster, J Edwin Dickinson, Josephine Battista, Allison M McKendrick, David R Badcock

Aim: This study aimed to revisit previous findings of superior processing of motion direction in migraineurs with a more stringent direction discrimination task and to investigate whether increased internal noise can account for motion processing deficits in migraineurs. Methods: Groups of 13 migraineurs (4 with aura, 9 without aura) and 15 headache-free controls completed three psychophysical tasks: one detecting coherence in a motion stimulus, one discriminating the spiral angle in a glass pattern and another discriminating the spiral angle in a global-motion task. Internal noise estimates were obtained for all tasks using an N-pass method. Results: Consistent with previous research, migraineurs had higher motion coherence thresholds than controls. However, there were no significant performance differences on the spiral global-motion and global-form tasks. There was no significant group difference in internal noise estimates associated with any of the tasks. Conclusions: The results from this study suggest that variation in internal noise levels is not the mechanism driving motion coherence threshold differences in migraine. Rather, we argue that motion processing deficits may result from cortical changes leading to less efficient extraction of global-motion signals from noise.
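
A hedged sketch of the logic behind multi-pass internal-noise estimation, here reduced to the classic two-pass case rather than the authors' exact N-pass procedure: identical external-noise stimuli are presented in repeated passes, and the drop in response agreement across passes (relative to accuracy) constrains the ratio of internal to external noise. The observer model and all parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_double_pass(n_trials=2000, signal=1.0, external_sd=1.0, internal_sd=1.0):
    """Simulate a two-pass experiment for a simple noisy-decision observer.

    The same external-noise samples are shown in both passes; only the internal
    noise differs, so across-pass response agreement reflects the
    internal-to-external noise ratio.
    """
    external = rng.normal(0.0, external_sd, n_trials)          # frozen across passes
    resp = []
    for _ in range(2):
        internal = rng.normal(0.0, internal_sd, n_trials)      # fresh each pass
        resp.append((signal + external + internal) > 0)        # yes/no decision
    percent_correct = np.mean(np.concatenate(resp))
    percent_agreement = np.mean(resp[0] == resp[1])
    return percent_correct, percent_agreement

for ratio in [0.5, 1.0, 2.0]:                                  # internal:external noise ratio
    pc, pa = simulate_double_pass(internal_sd=ratio)
    print(f"ratio {ratio}: P(correct) {pc:.2f}, P(agreement) {pa:.2f}")
```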


2007, Vol 7 (10), pp. 10
Author(s): Paul F. Bulakowski, David W. Bressler, David Whitney

2011, Vol 23 (11), pp. 2868-2914
Author(s): Florian Raudies, Ennio Mingolla, Heiko Neumann

Motion transparency occurs when multiple coherent motions are perceived in one spatial location. Imagine, for instance, looking out of the window of a bus on a bright day, where the world outside the window is passing by and movements of passengers inside the bus are reflected in the window. The overlay of both motions at the window leads to motion transparency, which is challenging to process. Noisy and ambiguous motion signals can be reduced using a competition mechanism for all encoded motions in one spatial location. Such a competition, however, leads to the suppression of multiple peak responses that encode different motions, as only the strongest response tends to survive. As a solution, we suggest a local center-surround competition for population-encoded motion directions and speeds. Similar motions are supported, and dissimilar ones are separated, by representing them as multiple activations, which occurs in the case of motion transparency. Psychophysical findings, such as motion attraction and repulsion for motion transparency displays, can be explained by this local competition. Besides this local competition mechanism, we show that feedback signals improve the processing of motion transparency. A discrimination task for transparent versus opaque motion is simulated, where motion transparency is generated by superimposing large field motion patterns of either varying size or varying coherence of motion. The model’s perceptual thresholds with and without feedback are calculated. We demonstrate that initially weak peak responses can be enhanced and stabilized through modulatory feedback signals from higher stages of processing.
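
A minimal sketch of a center-surround (difference-of-Gaussians) interaction over a population code for motion direction: nearby directions excite one another while more distant directions inhibit, so two well-separated activity peaks, as in transparent motion, can both survive where a winner-take-all competition would keep only one. Kernel widths, gains, and the input profile are illustrative assumptions, not the published model's parameters.

```python
import numpy as np

n_dirs = 72                                             # direction channels, 5 deg apart
dirs = np.linspace(0, 360, n_dirs, endpoint=False)

def circ_gauss(delta_deg, sigma):
    """Gaussian over the circular direction difference (deg)."""
    d = np.minimum(np.abs(delta_deg), 360 - np.abs(delta_deg))
    return np.exp(-0.5 * (d / sigma) ** 2)

# Two-peak input: transparent motion in two directions 90 deg apart
activity = circ_gauss(dirs - 45, 20) + circ_gauss(dirs - 135, 20)

# Difference-of-Gaussians lateral kernel: narrow excitation, broad inhibition
delta = dirs[:, None] - dirs[None, :]
kernel = circ_gauss(delta, 15) - 0.6 * circ_gauss(delta, 60)

output = np.clip(activity + 0.05 * kernel @ activity, 0, None)   # one recurrent update step
peaks = dirs[(output > np.roll(output, 1)) & (output > np.roll(output, -1))]
print("surviving direction peaks (deg):", peaks)                 # both transparent motions remain
```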

