White matter deficits correlate with visual motion perception impairments in dyslexic carriers of the DCDC2 genetic risk variant

Author(s):  
Daniela Perani ◽  
Paola Scifo ◽  
Guido M. Cicchini ◽  
Pasquale Della Rosa ◽  
Chiara Banfi ◽  
...  

Abstract
Motion perception deficits in dyslexia show large inter-subject variability, partly reflecting genetic factors that influence the development of brain architecture. In previous work, we demonstrated that dyslexic carriers of a mutation of the DCDC2 gene have a very strong impairment in motion perception. In the present study, we investigated the structural white matter alterations associated with poor motion perception in a cohort of twenty dyslexics comprising a subgroup carrying the DCDC2 gene deletion (DCDC2d+) and a subgroup without the risk variant (DCDC2d–). We observed significant deficits in motion contrast sensitivity and in motion direction discrimination accuracy at high contrast, stronger in the DCDC2d+ group. Both motion perception impairments correlated significantly with fractional anisotropy (FA) in posterior ventral and dorsal tracts, including early visual pathways along the optic radiation and in proximity to occipital cortex, MT, and the VWFA. However, the DCDC2d+ group showed stronger correlations between FA and motion perception impairments than the DCDC2d– group in early visual white matter bundles, including the optic radiations, and in ventral pathways in the left inferior temporal cortex. Our results suggest that the DCDC2d+ group experiences higher vulnerability in visual motion processing even at early stages of visual analysis, which might represent a specific feature associated with the genotype and provides further neurobiological support for the visual-motion deficit account of dyslexia in a specific subpopulation.

2019 ◽  
Vol 30 (4) ◽  
pp. 2659-2673
Author(s):  
Shaun L Cloherty ◽  
Jacob L Yates ◽  
Dina Graf ◽  
Gregory C DeAngelis ◽  
Jude F Mitchell

Abstract
Visual motion processing is a well-established model system for studying neural population codes in primates. The common marmoset, a small New World primate, offers unparalleled opportunities to probe these population codes in key motion processing areas, such as cortical areas MT and MST, because these areas are accessible for imaging and recording at the cortical surface. However, little is currently known about the perceptual abilities of the marmoset. Here, we introduce a paradigm for studying motion perception in the marmoset and compare its psychophysical performance with that of human observers. We trained two marmosets to perform a motion estimation task in which they provided an analog report of their perceived direction of motion with an eye movement to a ring that surrounded the motion stimulus. Marmosets and humans exhibited similar trade-offs between speed and accuracy: errors were larger and reaction times were longer as the strength of the motion signal was reduced. Reverse correlation on the temporal fluctuations in motion direction revealed that both species exhibited short integration windows; however, marmosets had substantially less nondecision time than humans. Our results provide the first quantification of motion perception in the marmoset and demonstrate several advantages of using analog estimation tasks.
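The reverse-correlation analysis described above can be sketched in a few lines: inject per-frame direction noise into the stimulus, then regress the trial-by-trial report onto that noise to recover a temporal integration kernel. Everything below (frame counts, noise levels, the simulated observer's 4-frame window) is illustrative, not taken from the study.

```python
import numpy as np

def temporal_kernel(frame_noise, report_error):
    """Psychophysical reverse correlation: regress the trial-by-trial report
    error onto the direction noise injected in each stimulus frame.
    A generic sketch, not the authors' exact analysis pipeline."""
    fn = frame_noise - frame_noise.mean(axis=0)      # trials x frames
    re = report_error - report_error.mean()          # trials
    return fn.T @ re / (len(re) * fn.var(axis=0))    # per-frame regression weight

# Synthetic observer with a short integration window: only the first
# 4 of 10 frames influence the reported direction.
rng = np.random.default_rng(1)
noise = rng.normal(size=(2000, 10))                  # per-frame direction jitter
weights = np.r_[np.ones(4), np.zeros(6)]
reports = noise @ weights + rng.normal(scale=0.5, size=2000)
kernel = temporal_kernel(noise, reports)
# kernel is near 1 for the first four frames and near 0 afterwards,
# recovering the observer's short integration window.
```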


2012 ◽  
Vol 25 (0) ◽  
pp. 140
Author(s):  
Lore Thaler ◽  
Jennifer Milne ◽  
Stephen R. Arnott ◽  
Melvyn A. Goodale

People can echolocate their distal environment by making mouth-clicks and listening to the click-echoes. In previous work using functional magnetic resonance imaging (fMRI), we showed that the processing of echolocation motion increases activity in posterior/inferior temporal cortex (Thaler et al., 2011). In the current study we investigated whether the brain areas that are sensitive to echolocation motion in blind echolocation experts correspond to visual motion area MT+. To this end, we used fMRI to measure the brain activity of two early-blind echolocation experts while they listened to recordings of echolocation and auditory source sounds that could be either moving or stationary, and that could be located either to the left or to the right of the listener. A whole-brain analysis revealed that echo motion and source motion activated different brain areas in posterior/inferior temporal cortex. Furthermore, the relative spatial arrangement of echo and source motion areas appeared to match the relative spatial arrangement of area MT+ and source motion areas that has been reported for sighted people (Saenz et al., 2008). In addition, we found that brain areas sensitive to echolocation motion showed a larger response to echo motion presented in contralateral space, a response pattern typical of visual motion processing in area MT+. Taken together, the data are consistent with the idea that the brain areas that process echolocation motion in blind echolocation experts correspond to area MT+.



i-Perception ◽  
2020 ◽  
Vol 11 (3) ◽  
pp. 204166952093732
Author(s):  
Masahiko Terao ◽  
Shin’ya Nishida

Many studies have investigated the effects of smooth pursuit on visual motion processing, especially the effects related to the additional retinal shifts produced by eye movement. In this article, we show that the perception of apparent motion during smooth pursuit is determined not only by the interelement proximity in retinal coordinates but also by the proximity in objective world coordinates. In Experiment 1, we investigated the perceived direction of the two-frame apparent motion of a square-wave grating with various displacement sizes under fixation and pursuit viewing conditions. The retinal and objective displacements between the two frames agreed with each other under the fixation condition; under the pursuit condition, however, they differed by 180 degrees in terms of phase shift. The proportions of the reported motion direction in the two viewing conditions did not coincide when plotted as a function of either the retinal or the objective displacement alone; they did coincide when plotted as a function of a mixture of the two. The result of Experiment 2 showed that the perceived jump size of the apparent motion was also dependent on both retinal and objective displacements. Our findings suggest that the detection of apparent motion during smooth pursuit takes into account both retinal and objective proximity. This mechanism may assist with the selection of the motion path that is more likely to occur in the real world and may therefore help ensure perceptual stability during smooth pursuit.
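The mixture idea can be illustrated with a toy computation. The linear combination and the weight `w_retinal` below are hypothetical assumptions for illustration only; the abstract does not specify the functional form of the mixture.

```python
def mixed_proximity(retinal_disp, objective_disp, w_retinal=0.6):
    """Effective inter-element distance as a weighted mixture of retinal and
    objective (world-coordinate) displacement. The linear form and the
    weight are illustrative assumptions, not the authors' fitted model."""
    return w_retinal * retinal_disp + (1.0 - w_retinal) * objective_disp

# Under fixation the two displacements agree, so the mixture equals both;
# under pursuit they can disagree, and the perceived motion path follows
# the mixture rather than either coordinate frame alone.
fixation = mixed_proximity(0.25, 0.25)    # agrees with both coordinates
pursuit = mixed_proximity(0.25, -0.25)    # falls between the two answers
```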


2021 ◽  
Author(s):  
Scott T. Steinmetz ◽  
Oliver W. Layton ◽  
Nate V. Powell ◽  
Brett Fajen

This paper introduces a self-tuning mechanism for capturing rapid adaptation to changing visual stimuli by a population of neurons. Building on the principles of efficient sensory encoding, we show how neural tuning curve parameters can be continually updated to optimally encode a time-varying distribution of recently detected stimulus values. We implemented this mechanism in a neural model that produces human-like estimates of self-motion direction (i.e., heading) based on optic flow. The parameters of speed-sensitive units were dynamically tuned in accordance with efficient sensory encoding such that the network remained sensitive as the distribution of optic flow speeds varied. In two simulation experiments, we found that the model with dynamic tuning yielded more accurate, shorter-latency heading estimates than the model with static tuning. We conclude that dynamic efficient sensory encoding offers a plausible approach for capturing adaptation to varying visual environments in biological visual systems and neural models alike.
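One simple way to realize efficient encoding of a time-varying stimulus distribution is to place the units' preferred speeds at evenly spaced quantiles of recently observed values, so each unit covers an equal share of the probability mass. This is a minimal sketch of the general principle, not the model architecture used in the paper; the unit count and speed distributions are arbitrary.

```python
import numpy as np

def retune_preferred_speeds(recent_speeds, n_units=8):
    """Efficient-coding heuristic: position preferred speeds at evenly
    spaced quantiles of the recently observed speed distribution."""
    qs = (np.arange(n_units) + 0.5) / n_units
    return np.quantile(recent_speeds, qs)

rng = np.random.default_rng(0)
slow_epoch = rng.lognormal(mean=0.0, sigma=0.3, size=500)   # slow optic flow
fast_epoch = rng.lognormal(mean=1.5, sigma=0.3, size=500)   # faster flow
prefs_slow = retune_preferred_speeds(slow_epoch)
prefs_fast = retune_preferred_speeds(fast_epoch)
# As the flow-speed distribution shifts upward, the bank of preferred
# speeds follows it, keeping the population sensitive to current input.
```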


PLoS ONE ◽  
2021 ◽  
Vol 16 (6) ◽  
pp. e0253067
Author(s):  
Benedict Wild ◽  
Stefan Treue

Modern accounts of visual motion processing in the primate brain emphasize a hierarchy of different regions within the dorsal visual pathway, especially primary visual cortex (V1) and the middle temporal area (MT). However, recent studies have called the idea of a processing pipeline with fixed contributions to motion perception from each area into doubt. Instead, the role that each area plays appears to depend on properties of the stimulus as well as perceptual history. We propose to test this hypothesis in human subjects by comparing motion perception of two commonly used stimulus types: drifting sinusoidal gratings (DSGs) and random dot patterns (RDPs). To avoid potential biases in our approach we are pre-registering our study. We will compare the effects of size and contrast levels on the perception of the direction of motion for DSGs and RDPs. In addition, based on intriguing results in a pilot study, we will also explore the effects of a post-stimulus mask. Our approach will offer valuable insights into how motion is processed by the visual system and guide further behavioral and neurophysiological research.


2018 ◽  
Vol 4 (1) ◽  
pp. 501-523 ◽  
Author(s):  
Shin'ya Nishida ◽  
Takahiro Kawabe ◽  
Masataka Sawayama ◽  
Taiki Fukiage

Visual motion processing can be conceptually divided into two levels. In the lower level, local motion signals are detected by spatiotemporal-frequency-selective sensors and then integrated into a motion vector flow. Although the model based on V1-MT physiology provides a good computational framework for this level of processing, it needs to be updated to fully explain psychophysical findings about motion perception, including complex motion signal interactions in the spatiotemporal-frequency and space domains. In the higher level, the velocity map is interpreted. Although there are many motion interpretation processes, we highlight the recent progress in research on the perception of material (e.g., specular reflection, liquid viscosity) and on animacy perception. We then consider possible linking mechanisms of the two levels and propose intrinsic flow decomposition as the key problem. To provide insights into computational mechanisms of motion perception, in addition to psychophysics and neurosciences, we review machine vision studies seeking to solve similar problems.


2020 ◽  
Vol 117 (50) ◽  
pp. 32165-32168
Author(s):  
Arvid Guterstam ◽  
Michael S. A. Graziano

Recent evidence suggests a link between visual motion processing and social cognition. When person A watches person B, the brain of A apparently generates a fictitious, subthreshold motion signal streaming from B to the object of B’s attention. These previous studies, being correlative, were unable to establish any functional role for the false motion signals. Here, we directly tested whether subthreshold motion processing plays a role in judging the attention of others. We asked, if we contaminate people’s visual input with a subthreshold motion signal streaming from an agent to an object, can we manipulate people’s judgments about that agent’s attention? Participants viewed a display including faces, objects, and a subthreshold motion hidden in the background. Participants’ judgments of the attentional state of the faces were significantly altered by the hidden motion signal. Faces from which subthreshold motion was streaming toward an object were judged as paying more attention to the object. Control experiments showed the effect was specific to the agent-to-object motion direction and to judging attention, not action or spatial orientation. These results suggest that when the brain models other minds, it uses a subthreshold motion signal, streaming from an individual to an object, to help represent attentional state. This type of social-cognitive model, tapping perceptual mechanisms that evolved to process physical events in the real world, may help to explain the extraordinary cultural persistence of beliefs in mind processes having physical manifestation. These findings, therefore, may have larger implications for human psychology and cultural belief.


2008 ◽  
Vol 276 (1655) ◽  
pp. 263-268 ◽  
Author(s):  
William Curran ◽  
Colin W. G. Clifford ◽  
Christopher P Benton

It is well known that context influences our perception of visual motion direction. For example, spatial and temporal context manipulations can be used to induce two well-known motion illusions: direction repulsion and the direction after-effect (DAE). Both result in inaccurate perception of direction when a moving pattern is either superimposed on (direction repulsion), or presented following adaptation to (DAE), another pattern moving in a different direction. Remarkable similarities in tuning characteristics suggest that common processes underlie the two illusions. What is not clear, however, is whether the processes driving the two illusions are expressions of the same or different neural substrates. Here we report two experiments demonstrating that direction repulsion and the DAE are, in fact, expressions of different neural substrates. Our strategy was to use each of the illusions to create a distorted perceptual representation upon which the mechanisms generating the other illusion could potentially operate. We found that the processes mediating direction repulsion did indeed access the distorted perceptual representation induced by the DAE. Conversely, the DAE was unaffected by direction repulsion. Thus parallels in perceptual phenomenology do not necessarily imply common neural substrates. Our results also demonstrate that the neural processes driving the DAE occur at an earlier stage of motion processing than those underlying direction repulsion.


2021 ◽  
Author(s):  
Merve Kiniklioglu ◽  
Huseyin Boyaci

Here we investigate how the extent of spatial attention affects center-surround interaction in visual motion processing. To do so, we measured motion direction discrimination thresholds in humans using drifting gratings and two attention conditions. Under the narrow attention condition, attention was limited to the central part of the visual stimulus, whereas under the wide attention condition it was directed to both the center and the surround of the stimulus. We found stronger surround suppression under the wide attention condition. The magnitude of the attention effect increased with the size of the surround when the stimulus had low contrast, but did not change when it had high contrast. Results also showed that attention had a weaker effect when the center and surround gratings drifted in opposite directions. Next, to establish a link between the behavioral results and neuronal response characteristics, we performed computer simulations using the divisive normalization model. Our simulations showed that the model can successfully predict the observed behavioral results using parameters derived from the middle temporal (MT) area of the cortex. These findings reveal the critical role of spatial attention in surround suppression and establish a link between neuronal activity and behavior. Further, they suggest that the reduced surround suppression found in certain clinical disorders (e.g., schizophrenia and autism spectrum disorder) may be caused by abnormal attention mechanisms.
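The divisive normalization account can be sketched in a few lines. Treating the width of attention as the surround's weight in the normalization pool is an illustrative assumption, and none of the parameter values below are those fitted to MT data in the study.

```python
def normalized_response(center_drive, surround_drive, sigma=0.5, w_surround=1.0):
    """Divisive normalization: the center unit's drive is divided by a
    normalization pool that includes weighted surround input."""
    return center_drive / (sigma + center_drive + w_surround * surround_drive)

# Model wider spatial attention as a larger surround weight in the pool:
narrow = normalized_response(1.0, 0.8, w_surround=0.3)
wide = normalized_response(1.0, 0.8, w_surround=1.0)
# wide < narrow: the same stimulus evokes a weaker center response when
# attention (and hence the normalization pool) extends over the surround,
# i.e., stronger surround suppression under wide attention.
```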

