Tactile and visual motion direction processing in hMT+/V5

NeuroImage ◽  
2014 ◽  
Vol 84 ◽  
pp. 420-427 ◽  
Author(s):  
Bianca M. van Kemenade ◽  
Kiley Seymour ◽  
Evelin Wacker ◽  
Bernhard Spitzer ◽  
Felix Blankenburg ◽  
...  

2020 ◽  
Author(s):  
Nardin Nakhla ◽  
Yavar Korkian ◽  
Matthew R. Krause ◽  
Christopher C. Pack

Abstract
The processing of visual motion is carried out by dedicated pathways in the primate brain. These pathways originate with populations of direction-selective neurons in the primary visual cortex, which project to dorsal structures such as the middle temporal (MT) and medial superior temporal (MST) areas. Anatomical and imaging studies have suggested that area V3A might also be specialized for motion processing, but there have been very few studies of single-neuron direction selectivity in this area. We therefore performed electrophysiological recordings from V3A neurons in two macaque monkeys (one male and one female) and measured responses to a large battery of motion stimuli, including translational motion as well as more complex optic flow patterns. For comparison, we simultaneously recorded the responses of MT neurons to the same stimuli. Surprisingly, we find that overall levels of direction selectivity are similar in V3A and MT, and moreover that the population of V3A neurons exhibits somewhat greater selectivity for optic flow patterns. These results suggest that V3A should be considered part of the motion processing machinery of the visual cortex, in both human and non-human primates.

Significance statement
Although area V3A is frequently the target of anatomy and imaging studies, little is known about its functional role in processing visual stimuli. Its contribution to motion processing has been particularly unclear, with different studies yielding different conclusions. We report a detailed study of direction selectivity in V3A. Our results show that single V3A neurons are, on average, as capable of representing motion direction as neurons in well-known structures like MT. Moreover, we identify a possible specialization of V3A neurons for representing complex optic flow, which has previously been thought to emerge in higher-order brain regions. Thus, V3A appears well suited to a functional role in motion processing.
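The "levels of direction selectivity" compared between V3A and MT are conventionally summarized with a direction selectivity index (DSI). The abstract does not state which metric was used, so this is a minimal sketch of one common definition, applied to hypothetical firing rates:

```python
def direction_selectivity_index(responses):
    """DSI = (R_pref - R_null) / (R_pref + R_null), where R_null is the
    response to motion 180 degrees opposite the preferred direction.
    `responses` maps direction (deg) -> mean firing rate (spikes/s)."""
    pref_dir = max(responses, key=responses.get)
    null_dir = (pref_dir + 180) % 360
    r_pref = responses[pref_dir]
    r_null = responses[null_dir]
    return (r_pref - r_null) / (r_pref + r_null)

# Hypothetical tuning for a strongly direction-selective neuron:
rates = {0: 50.0, 90: 12.0, 180: 5.0, 270: 10.0}
print(direction_selectivity_index(rates))  # (50-5)/(50+5) ≈ 0.818
```

A DSI near 1 indicates strong direction selectivity; near 0, none. Population comparisons like the one reported here typically contrast the distributions of this index across areas.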


Cortex ◽  
2019 ◽  
Vol 119 ◽  
pp. 511-518
Author(s):  
Joost Heutink ◽  
Gera de Haan ◽  
Jan-Bernard Marsman ◽  
Mart van Dijk ◽  
Christina Cordes

2008 ◽  
Vol 99 (5) ◽  
pp. 2558-2576
Author(s):  
Mario Ruiz-Ruiz ◽  
Julio C. Martinez-Trujillo

Previous studies have demonstrated that human subjects update the location of visual targets for saccades after head and body movements and in the absence of visual feedback. This phenomenon is known as spatial updating. Here we investigated whether a similar mechanism exists for the perception of motion direction. We recorded eye positions in three dimensions and behavioral responses in seven subjects during a motion task under two conditions: with the head stationary, and while subjects rotated their heads around an anteroposterior axis (head tilt). We demonstrated that (a) after head tilt, subjects updated the direction of saccades made in the perceived stimulus direction (direction of motion updating); (b) the amount of updating varied across subjects and stimulus directions; (c) the amount of motion direction updating was highly correlated with the amount of spatial updating during a memory-guided saccade task; (d) subjects updated the stimulus direction during a two-alternative forced-choice direction discrimination task in the absence of saccadic eye movements (perceptual updating); (e) perceptual updating was more accurate than motion direction updating involving saccades; and (f) subjects updated motion direction similarly during active and passive head rotation. These results demonstrate the existence of an updating mechanism for the perception of motion direction in the human brain that operates during active and passive head rotations and that resembles that of spatial updating. Such a mechanism operates during different tasks involving different motor and perceptual skills (saccades and motion direction discrimination) with different degrees of accuracy.
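The updating described here can be sketched as a counter-rotation of the remembered stimulus direction by the head-tilt angle. The gain parameter (modeling the partial updating that varied across subjects) and the sign convention are illustrative assumptions, not values from the study:

```python
def updated_direction(stimulus_dir_deg, head_tilt_deg, gain=1.0):
    """Schematic direction updating after a head roll.
    A roll of the head rotates the retinal image, so keeping the
    remembered direction stable in world coordinates requires
    counter-rotating it; gain < 1 models incomplete updating."""
    return (stimulus_dir_deg - gain * head_tilt_deg) % 360

# Stimulus moving rightward (0 deg), head rolled 30 deg:
print(updated_direction(0, 30))        # full updating   -> 330.0
print(updated_direction(0, 30, 0.6))   # partial updating -> 342.0
```

With gain = 1 the updated direction is fully compensated for the head rotation; the intermediate gains observed psychophysically would fall between the two printed cases.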


2020 ◽  
Vol 6 (1) ◽  
pp. 335-362
Author(s):  
Tatiana Pasternak ◽  
Duje Tadin

Psychophysical and neurophysiological studies of responses to visual motion have converged on a consistent set of general principles that characterize visual processing of motion information. Both types of approaches have shown that the direction and speed of target motion are among the most important encoded stimulus properties, revealing many parallels between psychophysical and physiological responses to motion. Motivated by these parallels, this review focuses largely on more direct links between the key feature of the neuronal response to motion, direction selectivity, and its utilization in memory-guided perceptual decisions. These links were established during neuronal recordings in monkeys performing direction discriminations, but also by examining perceptual effects of widespread elimination of cortical direction selectivity produced by motion deprivation during development. Other approaches, such as microstimulation and lesions, have documented the importance of direction-selective activity in the areas that are active during memory-guided direction comparisons, area MT and the prefrontal cortex, revealing their likely interactions during behavioral tasks.


i-Perception ◽  
2020 ◽  
Vol 11 (3) ◽  
pp. 204166952093732
Author(s):  
Masahiko Terao ◽  
Shin’ya Nishida

Many studies have investigated the effects of smooth pursuit on visual motion processing, especially those related to the additional retinal shifts produced by the eye movement. In this article, we show that the perception of apparent motion during smooth pursuit is determined both by interelement proximity in retinal coordinates and by proximity in objective world coordinates. In Experiment 1, we investigated the perceived direction of two-frame apparent motion of a square-wave grating with various displacement sizes under fixation and pursuit viewing conditions. The retinal and objective displacements between the two frames agreed with each other under the fixation condition but differed by 180 degrees of phase shift under the pursuit condition. The proportions of the reported motion direction in the two viewing conditions did not coincide when plotted as a function of either the retinal displacement or the objective displacement alone; they did coincide when plotted as a function of a mixture of the two. The results of Experiment 2 showed that the perceived jump size of the apparent motion also depended on both retinal and objective displacements. Our findings suggest that the detection of apparent motion during smooth pursuit takes into account both retinal and objective proximity. This mechanism may favor the selection of a motion path that is more likely to occur in the real world and may therefore help ensure perceptual stability during smooth pursuit.
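A minimal sketch of the mixture the authors describe, assuming a simple linear weighting of retinal and objective displacements; the weight is a hypothetical free parameter standing in for whatever combination rule fits the psychophysical data, not a value from the paper:

```python
def effective_displacement(retinal_disp, objective_disp, w_retinal=0.5):
    """Weighted mixture of retinal-coordinate and world-coordinate
    displacements; w_retinal in [0, 1] is a free parameter that would
    be fit to the observer's direction reports."""
    return w_retinal * retinal_disp + (1 - w_retinal) * objective_disp

# During pursuit the two coordinate frames disagree (e.g., opposite
# phase shifts of the grating); the perceived match follows neither
# alone but their mixture:
print(effective_displacement(0.25, -0.25, w_retinal=0.6))  # 0.05
```

Plotting response proportions against this mixed displacement, rather than against either coordinate alone, is what lets the fixation and pursuit curves coincide in the authors' account.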


2001 ◽  
Vol 13 (6) ◽  
pp. 1243-1253 ◽  
Author(s):  
Rajesh P. N. Rao ◽  
David M. Eagleman ◽  
Terrence J. Sejnowski

When a flash is aligned with a moving object, subjects perceive the flash to lag behind the moving object. Two different models have been proposed to explain this “flash-lag” effect. In the motion extrapolation model, the visual system extrapolates the location of the moving object to counteract neural propagation delays, whereas in the latency difference model, moving objects are hypothesized to be processed and perceived more quickly than flashed objects. However, recent psychophysical experiments suggest that neither interpretation is feasible; Eagleman and Sejnowski (2000a, 2000b, 2000c) hypothesized instead that the visual system uses data from the future of an event before committing to an interpretation. We formalize this idea in the statistical framework of optimal smoothing and show that a model based on smoothing accounts for the shape of psychometric curves from a flash-lag experiment involving random reversals of motion direction. The smoothing model demonstrates how the visual system may enhance perceptual accuracy by relying not only on data from the past but also on data collected from the immediate future of an event.
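The contrast between extrapolation and smoothing can be illustrated with a toy postdictive estimator that averages samples from both before and after an event. This is only a schematic sketch of the idea (the trajectory and window size are invented), not the authors' optimal-smoothing model:

```python
import numpy as np

def smoothed_estimate(positions, t, window=3):
    """Postdictive estimate of position at time t: average samples from
    both before and after t. Unlike a causal filter or extrapolator,
    the smoother uses data from the immediate future of the event."""
    lo, hi = max(0, t - window), min(len(positions), t + window + 1)
    return np.mean(positions[lo:hi])

# Hypothetical trajectory: object moves right, then reverses at t = 5.
pos = np.array([0, 1, 2, 3, 4, 5, 4, 3, 2, 1, 0], dtype=float)
# An extrapolator using only the past keeps predicting rightward motion
# at the reversal; the smoother, seeing future samples, pulls the
# estimate back toward the true path:
print(smoothed_estimate(pos, 5))  # mean of pos[2:9] ≈ 3.29
```

At a random motion reversal, estimates like this lag the turning point, which is the qualitative signature the smoothing model uses to account for the psychometric curves.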


Science ◽  
2015 ◽  
Vol 349 (6243) ◽  
pp. 70-74 ◽  
Author(s):  
Adrian Wertz ◽  
Stuart Trenholm ◽  
Keisuke Yonehara ◽  
Daniel Hillier ◽  
Zoltan Raics ◽  
...  

Individual cortical neurons can selectively respond to specific environmental features, such as visual motion or faces. How this relates to the selectivity of the presynaptic network across cortical layers remains unclear. We used single-cell–initiated, monosynaptically restricted retrograde transsynaptic tracing with rabies viruses expressing GCaMP6s to image, in vivo, the visual motion–evoked activity of individual layer 2/3 pyramidal neurons and their presynaptic networks across layers in mouse primary visual cortex. Neurons within each layer exhibited similar motion direction preferences, forming layer-specific functional modules. In one-third of the networks, the layer modules were locked to the direction preference of the postsynaptic neuron, whereas for other networks the direction preference varied by layer. Thus, there exist feature-locked and feature-variant cortical networks.


2014 ◽  
Vol 26 (11) ◽  
pp. 2652-2668 ◽  
Author(s):  
Florian Raudies ◽  
Rick O. Gilmore

Visual motion direction ambiguities due to edge-aperture interaction might be resolved by speed priors, but scant empirical data support this hypothesis. We measured optic flow and gaze positions of walking mothers and the infants they carried. Empirically derived motion priors for infants are vertically elongated and shifted upward relative to mothers. Skewed normal distributions fitted to estimated retinal speeds peak at values above 20°/sec.


2019 ◽  
Vol 30 (4) ◽  
pp. 2659-2673
Author(s):  
Shaun L Cloherty ◽  
Jacob L Yates ◽  
Dina Graf ◽  
Gregory C DeAngelis ◽  
Jude F Mitchell

Abstract
Visual motion processing is a well-established model system for studying neural population codes in primates. The common marmoset, a small New World primate, offers unparalleled opportunities to probe these population codes in key motion processing areas, such as cortical areas MT and MST, because these areas are accessible for imaging and recording at the cortical surface. However, little is currently known about the perceptual abilities of the marmoset. Here, we introduce a paradigm for studying motion perception in the marmoset and compare its psychophysical performance with that of human observers. We trained two marmosets to perform a motion estimation task in which they provided an analog report of their perceived direction of motion with an eye movement to a ring that surrounded the motion stimulus. Marmosets and humans exhibited similar speed-accuracy trade-offs: errors were larger and reaction times were longer as the strength of the motion signal was reduced. Reverse correlation on the temporal fluctuations in motion direction revealed that both species exhibited short integration windows; however, marmosets had substantially less nondecision time than humans. Our results provide the first quantification of motion perception in the marmoset and demonstrate several advantages of using analog estimation tasks.
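The reverse-correlation analysis described here can be sketched on simulated data: correlate the trial-by-trial direction fluctuations at each time lag with the final report to recover a temporal integration window. The simulated observer, its 3-frame window, and all parameter values are illustrative assumptions, not the paper's method or data:

```python
import numpy as np

rng = np.random.default_rng(0)

def temporal_kernel(stim_fluct, reports, n_lags):
    """Reverse correlation: correlate the stimulus fluctuation at each
    time lag before the response with the trial-by-trial report.
    stim_fluct: (trials, time) direction noise; reports: (trials,)."""
    kernel = np.empty(n_lags)
    for lag in range(n_lags):
        x = stim_fluct[:, -1 - lag]  # fluctuation `lag` frames before report
        kernel[lag] = np.corrcoef(x, reports)[0, 1]
    return kernel

# Simulated observer that integrates only the last 3 frames (assumption):
fluct = rng.normal(size=(2000, 10))
resp = fluct[:, -3:].sum(axis=1) + rng.normal(scale=0.5, size=2000)
k = temporal_kernel(fluct, resp, n_lags=10)
print(np.round(k, 2))  # large weights at lags 0-2, near zero earlier
```

The extent of the recovered kernel estimates the integration window; a short kernel close to the response, as in this simulation, is the pattern the abstract reports for both species.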

