A Selective History of the Study of Visual Motion Aftereffects

Perception ◽  
1994 ◽  
Vol 23 (10) ◽  
pp. 1111-1134 ◽  
Author(s):  
Nicholas J Wade

The visual motion aftereffect (MAE) was initially described after observation of movements in the natural environment, like those seen in rivers and waterfalls: stationary objects appeared to move briefly in the opposite direction. In the second half of the nineteenth century the MAE was displaced into the laboratory for experimental enquiry with the aid of Plateau's spiral. Such was the interest in the phenomenon that a major review of empirical and theoretical research was written in 1911. In the latter half of the present century novel stimuli (like drifting gratings, isoluminance patterns, spatial and luminance ramps, random-dot kinematograms, and first-order and second-order motions), introduced to study space and motion perception generally, have been applied to examine MAEs. Developing theories of cortical visual processing have drawn upon MAEs to provide a link between psychophysics and physiology; this has been most pronounced in the context of monocular and binocular channels in the visual system, the combination of colour and contour information, and in the cortical sites most associated with motion processing. The relatively unchanging characteristic of the study of MAEs has been the mode of measurement: duration continues to be used as an index of its strength, although measures of threshold elevation and nulling with computer-generated motions are becoming more prevalent. The MAE is a part of the armoury of motion phenomena employed to uncover the mysteries of vision. Over the last 150 years it has proved itself immensely adaptable to the shifts of fashion in visual science, and it is likely to continue in this vein.
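The strength measures mentioned above (duration, threshold elevation, nulling) lend themselves to simple adaptive procedures. The Python sketch below simulates a nulling measurement: the test pattern is given a small physical velocity that a 1-up/1-down staircase adjusts until it cancels the illusory motion, and the converged velocity serves as an index of MAE strength. The observer model, velocities, and step size are illustrative assumptions, not values from the paper.

```python
# Hypothetical nulling procedure for estimating MAE strength. A simulated
# observer reports whether a test pattern still appears to drift in the
# aftereffect direction; a 1-up/1-down staircase adjusts the test velocity
# until the illusory motion is cancelled. All values are assumed.
import random

TRUE_NULL = 0.8   # deg/s of real motion needed to cancel the aftereffect (assumed)
NOISE_SD = 0.3    # standard deviation of the observer's internal noise (assumed)

def still_sees_aftereffect(test_velocity):
    """Simulated observer: True if the test still appears to drift
    opposite to the adapting direction (i.e. not yet nulled)."""
    perceived = test_velocity - TRUE_NULL + random.gauss(0.0, NOISE_SD)
    return perceived < 0.0

def nulling_staircase(n_trials=60, start=0.0, step=0.2):
    """1-up/1-down staircase on test velocity; the mean of the last
    reversals estimates the nulling velocity (MAE strength)."""
    v, reversals, last_dir = start, [], None
    for _ in range(n_trials):
        direction = +1 if still_sees_aftereffect(v) else -1
        if last_dir is not None and direction != last_dir:
            reversals.append(v)
        v += direction * step
        last_dir = direction
    tail = reversals[-6:]
    return sum(tail) / len(tail) if tail else v

if __name__ == "__main__":
    print(f"Estimated nulling velocity: {nulling_staircase():.2f} deg/s")
```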

2016 ◽  
Author(s):  
Bastian Schledde ◽  
F. Orlando Galashan ◽  
Magdalena Przybyla ◽  
Andreas K. Kreiter ◽  
Detlef Wegener

Non-spatial selective attention is based on the notion that specific features or objects in the visual environment are effectively prioritized in cortical visual processing. Feature-based attention (FBA), in particular, is a well-studied process that dynamically and selectively enhances neurons preferentially processing the attended feature attribute (e.g. leftward motion). In everyday life, however, behavior may require high sensitivity for an entire feature dimension (e.g. motion), yet evidence for feature dimension-specific attentional modulation at the cellular level is lacking. Here we investigate neuronal activity in the macaque motion-selective medio-temporal area (MT) in an experimental setting requiring the monkeys to detect either a motion change or a color change. We hypothesized that neural activity in MT is enhanced if the task requires perceptual sensitivity to motion. Despite identical visual stimulation, mean firing rates were higher, and response variability and latency were lower, in the motion task than in the color task. This task-specific response modulation in the processing of visual motion was independent of the relation between the attended and stimulating motion direction. It was present even in the absence of visual input and consisted of a spatially global, tuning-independent shift of MT baseline activity. The results provide single-cell support for the hypothesis of a feature dimension-specific top-down signal emphasizing the processing of an entire feature class.


Perception ◽  
1994 ◽  
Vol 23 (10) ◽  
pp. 1257-1264 ◽  
Author(s):  
Michael T Swanston

Evidence concerning the origin of the motion aftereffect (MAE) is assessed in terms of a model of levels of representation in visual motion perception proposed by Wade and Swanston. Very few experiments have been designed so as to permit unambiguous conclusions to be drawn. The requirements for such experiments are identified. Whereas retinocentric motion could in principle give rise to the MAE, data are not available which would enable a conclusion to be drawn. There is good evidence for a patterncentric origin, indicating that the MAE is primarily the result of adaptation in the systems responsible for detecting relative visual motion. There is evidence for a further contribution from the process that compensates retinocentric motion for eye movements, in the form of nonveridical information for eye movements. There may also be an effect at the level at which perceived distance and self-movement information are combined with egocentric motion to give a geocentric representation which provides the basis for reports of phenomenal experience. It is concluded that the MAE can be caused by changes in activity at more than one level of representation, and cannot be ascribed to a single underlying process.


2003 ◽  
Vol 14 (4) ◽  
pp. 357-361 ◽  
Author(s):  
Jean Vroomen ◽  
Beatrice de Gelder

In this study, we show that the contingent auditory motion aftereffect is strongly influenced by visual motion information. During an induction phase, participants listened to rightward-moving sounds with falling pitch alternated with leftward-moving sounds with rising pitch (or vice versa). Auditory aftereffects (i.e., a shift in the psychometric function for unimodal auditory motion perception) were larger when a visual stimulus moved in the same direction as the sound than when no visual stimulus was presented. When the visual stimulus moved in the opposite direction, aftereffects were reversed and thus became contingent upon visual motion. When visual motion was combined with a stationary sound, no aftereffect was observed. These findings indicate that there are strong perceptual links between the visual and auditory motion-processing systems.
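The aftereffect measure described here is a shift of the psychometric function. The sketch below shows one way such a shift can be quantified: a logistic function is fitted to the proportion of "rightward" responses before and after adaptation, and the aftereffect is taken as the change in the 50% point (the point of subjective equality). The data, stimulus levels, and fitting choices are illustrative assumptions, not the authors' analysis.

```python
# Sketch of quantifying a motion aftereffect as a psychometric-function
# shift (synthetic data; not the authors' analysis code). The aftereffect
# is the change in the 50% "rightward" point (PSE) after adaptation.
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, pse, slope):
    """Probability of a 'rightward' response at motion level x."""
    return 1.0 / (1.0 + np.exp(-(x - pse) / slope))

def fit_pse(levels, p_rightward):
    """Fit the logistic and return the estimated PSE."""
    params, _ = curve_fit(logistic, levels, p_rightward, p0=[0.0, 1.0])
    return params[0]

# Hypothetical motion levels (negative = leftward, positive = rightward)
levels = np.linspace(-4, 4, 9)
p_pre = logistic(levels, 0.0, 1.0)    # baseline responses: PSE at 0
p_post = logistic(levels, 1.2, 1.0)   # post-adaptation responses: PSE shifted

aftereffect = fit_pse(levels, p_post) - fit_pse(levels, p_pre)
print(f"Aftereffect (PSE shift): {aftereffect:.2f} stimulus units")
```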


2004 ◽  
Vol 16 (4) ◽  
pp. 528-540 ◽  
Author(s):  
Jeremy B. Wilmer ◽  
Alexandra J. Richardson ◽  
Yue Chen ◽  
John F. Stein

Developmental dyslexia is associated with deficits in the processing of visual motion stimuli, and some evidence suggests that these motion processing deficits are related to various reading subskills deficits. However, little is known about the mechanisms underlying such associations. This study lays a richer groundwork for exploration of such mechanisms by more comprehensively and rigorously characterizing the relationship between motion processing deficits and reading subskills deficits. Thirty-six adult participants, 19 of whom had a history of developmental dyslexia, completed a battery of visual, cognitive, and reading tests. This battery combined motion processing and reading subskills measures used across previous studies and added carefully matched visual processing control tasks. Results suggest that there are in fact two distinct motion processing deficits in developmental dyslexia, rather than one as assumed by previous research, and that each of these deficits is associated with a different type of reading subskills deficit. A deficit in detecting coherent motion is selectively associated with low accuracy on reading subskills tests, and a deficit in discriminating velocities is selectively associated with slow performance on these same tests. In addition, evidence from visual processing control tasks as well as self-reports of ADHD symptoms suggests that these motion processing deficits are specific to the domain of visual motion and result neither from a broader visual deficit nor from the sort of generalized attention deficit commonly comorbid with developmental dyslexia. Finally, dissociation between these two motion processing deficits suggests that they may have distinct neural and functional underpinnings. The two distinct patterns of motion processing and reading deficits demonstrated by this study may reflect separable underlying neurocognitive mechanisms of developmental dyslexia.


Author(s):  
Alex S. Mauss ◽  
Alexander Borst

Visual perception seems effortless to us, yet it is the product of elaborate signal processing in intricate brain circuits. Apart from vertebrates, arthropods represent another major animal group with sophisticated visual systems in which the underlying mechanisms can be studied. Arthropods feature identified neurons and other experimental advantages, facilitating an understanding of circuit function at the level of individual neurons and their synaptic interactions. Here, focusing on insect and crustacean species, we summarize and connect our current knowledge in four related areas of research: (1) elementary motion detection in early visual processing; (2) the detection of higher level visual features such as optic flow fields, small target motion and object distance; (3) the integration of such signals with other sensory modalities; and (4) state-dependent visual motion processing.
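For readers unfamiliar with "elementary motion detection", the classic account in insect vision is the correlator-type (Hassenstein-Reichardt) detector: two neighbouring photoreceptor signals are multiplied after one of them is delayed, and the mirror-symmetric arm is subtracted to yield a direction-selective output. The sketch below is a generic illustration of that scheme applied to a drifting sinusoid, not the specific circuit models reviewed by the authors; all parameter values are assumptions.

```python
# Generic correlator-type elementary motion detector (Hassenstein-Reichardt
# scheme) applied to a drifting sinusoidal luminance pattern sampled by two
# neighbouring photoreceptors. Illustrative only; parameters are assumed.
import numpy as np

def lowpass(signal, tau):
    """First-order low-pass filter acting as the delay line."""
    out = np.zeros_like(signal)
    for t in range(1, len(signal)):
        out[t] = out[t - 1] + (signal[t] - out[t - 1]) / tau
    return out

def reichardt_output(left, right, tau=5.0):
    """Mean opponent correlator response; positive = left-to-right motion."""
    return np.mean(lowpass(left, tau) * right - left * lowpass(right, tau))

# Hypothetical stimulus: a sinusoid drifting rightward, so the right
# photoreceptor sees the same pattern slightly later than the left one.
t = np.arange(0, 200)
freq, phase_offset = 0.05, 0.5              # assumed temporal frequency / receptor spacing
left = np.sin(2 * np.pi * freq * t)
right = np.sin(2 * np.pi * freq * t - phase_offset)

print(f"Rightward drift: {reichardt_output(left, right):+.3f}")   # positive output
print(f"Leftward drift:  {reichardt_output(right, left):+.3f}")   # negative output
```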


2020 ◽  
Author(s):  
Stefania Benetti ◽  
Joshua Zonca ◽  
Ambra Ferrari ◽  
Mohamed Rezk ◽  
Giuseppe Rabini ◽  
...  

In early deaf individuals, the auditorily deprived temporal brain regions become engaged in visual processing. In our study we further tested the hypothesis that intrinsic functional specialization guides the expression of cross-modal responses in the deprived auditory cortex. We used functional MRI to characterize the brain response to horizontal, radial and stochastic visual motion in early deaf and hearing individuals matched for the use of oral or sign language. Visual motion showed an enhanced response in the 'deaf' mid-lateral planum temporale, a region selective for auditory motion as demonstrated by a separate auditory motion localizer in hearing people. Moreover, multivariate pattern analysis revealed that this reorganized temporal region showed enhanced decoding of motion categories in the deaf group, while the visual motion-selective region hMT+/V5 showed reduced decoding when compared to hearing people. Dynamic Causal Modelling revealed that the 'deaf' motion-selective temporal region shows a specific increase in its functional interactions with hMT+/V5 and is now part of a large-scale visual motion-selective network. In addition, we observed preferential responses to radial, compared to horizontal, visual motion in the 'deaf' right superior temporal cortex region that also shows a preferential response to approaching/receding sounds in the hearing brain. Overall, our results suggest that the early experience of auditory deprivation interacts with intrinsic constraints and triggers a large-scale reallocation of computational load between auditory and visual brain regions that typically support the multisensory processing of motion information.
Highlights:
Auditory motion-sensitive regions respond to visual motion in the deaf.
Reorganized auditory cortex can discriminate between visual motion trajectories.
Part of the deaf auditory cortex shows a preference for in-depth visual motion.
Deafness might lead to computational reallocation between auditory and visual regions.

