A role for terminators in motion processing by macaque MT neurons?

2010 ◽ Vol 2 (7) ◽ pp. 415-415
Author(s): N. Majaj, M. A. Smith, A. Kohn, W. Bair, J. A. Movshon

2018 ◽ Vol 120 (5) ◽ pp. 2396-2409
Author(s): Bryan M. Krause, Geoffrey M. Ghose

Many models of perceptually based decisions postulate that actions are initiated when accumulated sensory signals reach a threshold level of activity. These models have received considerable neurophysiological support from recordings of individual neurons while animals are engaged in motion discrimination tasks. These experiments have found that the activity of neurons in a particular visual area strongly associated with motion processing (MT), when pooled over hundreds of milliseconds, is sufficient to explain behavioral timing and performance. However, this level of pooling may be problematic for urgent perceptual decisions in which rapid detection dictates temporally precise integration. In this paper, we explore the physiological basis of one such task in which macaques detected brief (~70 ms) transients of coherent motion within ~240 ms. We find that a simple linear summation model based on realistic stimulus responses of as few as 40 correlated neurons can predict the reliability and timing of rapid motion detection. The model naturally reproduces a distinctive physiological relationship observed in rapid detection tasks in which the individual neurons with the most reliable stimulus responses are also the most predictive of impending behavioral choices. Remarkably, we observed this relationship across our simulated neuronal populations even when all neurons within the pool were weighted equally with respect to readout. These results demonstrate that small numbers of reliable sensory neurons can dominate perceptual judgments without any explicit reliability-based weighting and are sufficient to explain the accuracy, latency, and temporal precision of rapid detection. NEW & NOTEWORTHY Computational and psychophysical models suggest that performance in many perceptual tasks may be based on the preferential sampling of reliable neurons. 
Recent studies of MT neurons during rapid motion detection, in which only those neurons with the most reliable sensory responses were strongly predictive of the animals’ decisions, seemingly support this notion. Here we show that a simple threshold model without explicit reliability biases can explain both the behavioral accuracy and precision of these detections and the distribution of sensory- and choice-related signals across neurons.
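The linear summation model described above can be sketched numerically. The following NumPy sketch is illustrative only, not the authors' implementation: the firing rates, noise correlation, integration window, and threshold are all assumed values, chosen to show how an equal-weight pool of ~40 correlated neurons can support rapid, reliable detection of a brief coherent transient.

```python
import numpy as np

rng = np.random.default_rng(0)

def detection_rate(coherent, n_neurons=40, n_trials=500, corr=0.2,
                   base=5.0, gain=15.0, window=3, threshold=37.5):
    """Hit/false-alarm rate of an equal-weight pooled threshold detector.

    Trials span 24 bins of 10 ms; a coherent transient (~70 ms, bins 10-16)
    raises every neuron's firing rate by `gain` spikes/bin.
    """
    n_bins = 24
    rate = np.full(n_bins, base)
    if coherent:
        rate[10:17] += gain
    # correlated variability: each neuron mixes a shared and a private source
    shared = rng.standard_normal((n_trials, 1, n_bins))
    private = rng.standard_normal((n_trials, n_neurons, n_bins))
    noise = np.sqrt(corr) * shared + np.sqrt(1.0 - corr) * private
    counts = np.maximum(rate + 2.0 * noise, 0.0)
    pooled = counts.mean(axis=1)                      # equal-weight readout
    # short (~30 ms) sliding integration window, then a fixed threshold
    sums = np.lib.stride_tricks.sliding_window_view(
        pooled, window, axis=1).sum(axis=-1)
    return float((sums.max(axis=1) > threshold).mean())
```

With these (deliberately generous) parameters the pooled signal separates coherent from noise-only trials almost perfectly; shrinking `gain` or `n_neurons` degrades the hit rate accordingly.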



2019 ◽ Vol 13
Author(s): Parvin Zarei Eskikand, Tatiana Kameneva, Anthony N. Burkitt, David B. Grayden, Michael R. Ibbotson


2013 ◽ Vol 110 (1) ◽ pp. 63-74
Author(s): Farhan A. Khawaja, Liu D. Liu, Christopher C. Pack

The estimation of motion information from retinal input is a fundamental function of the primate dorsal visual pathway. Previous work has shown that this function involves multiple cortical areas, with each area integrating information from its predecessors. Compared with neurons in the primary visual cortex (V1), neurons in the middle temporal (MT) area more faithfully represent the velocity of plaid stimuli, and the observation of this pattern selectivity has led to two-stage models in which MT neurons integrate the outputs of component-selective V1 neurons. Motion integration in these models is generally complemented by motion opponency, which refines velocity selectivity. Area MT projects to a third stage of motion processing, the medial superior temporal (MST) area, but surprisingly little is known about MST responses to plaid stimuli. Here we show that increased pattern selectivity in MST is associated with greater prevalence of the mechanisms implemented by two-stage MT models: Compared with MT neurons, MST neurons integrate motion components to a greater degree and exhibit evidence of stronger motion opponency. Moreover, when tested with more challenging unikinetic plaid stimuli, an appreciable percentage of MST neurons are pattern selective, while such selectivity is rare in MT. Surprisingly, increased motion integration is found in MST even for transparent plaid stimuli, which are not typically integrated perceptually. Thus the relationship between MST and MT is qualitatively similar to that between MT and V1, as repeated application of basic motion mechanisms leads to novel selectivities at each stage along the pathway.
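Pattern and component selectivity of the kind measured here is conventionally quantified by partially correlating a cell's plaid direction-tuning curve with two predictions: one that follows the plaid's direction and one that follows the two component gratings. A minimal sketch of that analysis, using an assumed von Mises tuning shape and a simulated pattern-selective cell:

```python
import numpy as np

dirs = np.arange(0, 360, 30)                      # 12 tested directions

def tuning(pref, k=2.0):
    """Von Mises direction tuning curve (illustrative shape)."""
    return np.exp(k * (np.cos(np.deg2rad(dirs - pref)) - 1.0))

# Predictions for a 120-degree plaid drifting at 0 degrees:
pattern_pred = tuning(0)                          # follows the plaid direction
component_pred = tuning(-60) + tuning(60)         # follows the two gratings

def fisher_z(resp, a, b):
    """Partial correlation of resp with a, controlling for b, as a Fisher z."""
    ra = np.corrcoef(resp, a)[0, 1]
    rb = np.corrcoef(resp, b)[0, 1]
    rab = np.corrcoef(a, b)[0, 1]
    r = (ra - rb * rab) / np.sqrt((1 - rb**2) * (1 - rab**2))
    return np.arctanh(r) * np.sqrt(len(dirs) - 3)

rng = np.random.default_rng(0)
resp = pattern_pred + 0.05 * rng.standard_normal(len(dirs))  # pattern cell
zp = fisher_z(resp, pattern_pred, component_pred)
zc = fisher_z(resp, component_pred, pattern_pred)
```

A cell is called pattern selective when the pattern z-score significantly exceeds the component z-score, and component selective in the reverse case; here `zp > zc` for the simulated pattern cell, as expected.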



2011 ◽ Vol 23 (6) ◽ pp. 1533-1548
Author(s): Jeannette A. M. Lorteije, Nick E. Barraclough, Tjeerd Jellema, Mathijs Raemaekers, Jacob Duijnhouwer, ...

To investigate form-related activity in motion-sensitive cortical areas, we recorded cell responses to animate implied motion in macaque middle temporal (MT) and medial superior temporal (MST) cortex and investigated these areas using fMRI in humans. In the single-cell studies, we compared responses to static images of human or monkey figures walking or running left or right with responses to the same human and monkey figures standing or sitting still. We also investigated whether the view of the animate figure (facing left or right) that elicited the highest response was correlated with the preferred direction for moving random dot patterns. First, figures were presented inside the cell's receptive field. Subsequently, figures were presented at the fovea while a dynamic noise pattern was presented at the cell's receptive field location. The results show that MT neurons did not discriminate between figures on the basis of the implied motion content. Instead, response preferences for implied motion correlated with preferences for low-level visual features such as orientation and size. No correlation was found between the preferred view of figures implying motion and the preferred direction for moving random dot patterns. Similar findings were obtained in a smaller population of MST cortical neurons. Testing human MT+ responses with fMRI further corroborated the notion that low-level stimulus features might explain implied motion activation in human MT+. Together, these results suggest that prior human imaging studies demonstrating animate implied motion processing in area MT+ can be best explained by sensitivity for low-level features rather than sensitivity for the motion implied by animate figures.



2011 ◽ Vol 105 (1) ◽ pp. 200-208
Author(s): Finnegan J. Calabro, Lucia M. Vaina

Segmentation of the visual scene into relevant object components is a fundamental process for successfully interacting with our surroundings. Many visual cues, including motion and binocular disparity, support segmentation, yet the mechanisms using these cues are unclear. We used a psychophysical motion discrimination task in which noise dots were displaced in depth to investigate the role of segmentation through disparity cues in visual motion stimuli (experiment 1). We found a subtle, but significant, bias indicating that near disparity noise disrupted the segmentation of motion more than equidistant far disparity noise. A control experiment showed that the near-far difference could not be attributed to attention (experiment 2). To account for the near-far bias, we constructed a biologically constrained model using recordings from neurons in the middle temporal area (MT) to simulate human observers' performance on experiment 1. Performance of the model of MT neurons showed a near-disparity skew similar to that shown by human observers. To isolate the cause of the skew, we simulated performance of a model containing units derived from properties of MT neurons, using phase-modulated Gabor disparity tuning. Using a skewed-normal population distribution of preferred disparities, the model reproduced the elevated motion discrimination thresholds for near-disparity noise, whereas a skewed-normal population of phases (creating individually asymmetric units) did not lead to any performance skew. Results from the model suggest that the properties of neurons in area MT are computationally sufficient to perform disparity segmentation during motion processing and produce disparity biases similar to those produced by human observers.
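A minimal sketch of the population ingredient that carried the effect: disparity tuning built as a Gaussian envelope times a cosine carrier (a Gabor), with preferred disparities drawn from a skewed-normal distribution. All parameter values below are assumptions for illustration; the point is that skewing the preferred-disparity distribution toward near (negative) disparities yields an asymmetric population response.

```python
import numpy as np

rng = np.random.default_rng(1)

def gabor_tuning(d, pref, sigma=0.3, freq=0.5, phase=0.0):
    """Disparity tuning: Gaussian envelope times a cosine carrier."""
    return np.exp(-((d - pref) ** 2) / (2 * sigma ** 2)) * np.cos(
        2 * np.pi * freq * (d - pref) + phase)

def skew_normal(alpha, size):
    """Standard skew-normal samples via the |U0|-mixture construction."""
    delta = alpha / np.sqrt(1 + alpha ** 2)
    u0 = np.abs(rng.standard_normal(size))
    u1 = rng.standard_normal(size)
    return delta * u0 + np.sqrt(1 - delta ** 2) * u1

# Population skewed toward near (negative) preferred disparities
prefs = 0.4 * skew_normal(alpha=-4.0, size=1000)

def population_response(d):
    """Summed tuning of the whole population at probe disparity d."""
    return float(gabor_tuning(d, prefs).sum())
```

With the skew applied to the distribution of preferred disparities, the summed population response is larger for a near probe than for an equidistant far probe, mirroring the asymmetry the model needed to reproduce the behavioral bias.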



2020 ◽ Vol 38 (5) ◽ pp. 395-405
Author(s): Luca Battaglini, Federica Mena, Clara Casco

Background: To study motion perception, a stimulus consisting of a field of small, moving dots is often used. Generally, some of the dots move coherently in the same direction (signal) while the rest move randomly (noise). A percept of global coherent motion (CM) results when many different local motion signals are combined. CM computation is a complex process that requires the integrity of the middle-temporal area (MT/V5), and there is evidence that increasing the number of dots presented in the stimulus makes such computation more efficient. Objective: In this study, we explored whether anodal transcranial direct current stimulation (tDCS) over MT/V5 would increase individual performance in a CM task at a low signal-to-noise ratio (SNR, i.e. a low percentage of coherent dots) and with a target consisting of a large number of moving dots (high dot numerosity, e.g. >250 dots) relative to low dot numerosity (<60 dots), which would indicate that tDCS favours the integration of local motion signals into a single global percept (global motion). Method: Participants were asked to perform a CM detection task (two-interval forced-choice, 2IFC) while they received anodal, cathodal, or sham stimulation on three different days. Results: Our findings showed no effect of cathodal tDCS with respect to the sham condition. Instead, anodal tDCS improved performance, but mostly when dot numerosity was high (>400 dots), promoting efficient global motion processing. Conclusions: The present study suggests that tDCS may be used under appropriate stimulus conditions (low SNR and high dot numerosity) to boost global motion processing efficiency, and may be useful for strengthening clinical protocols that treat visual deficits.
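The signal-plus-noise dot stimulus described in the Background can be sketched in a few lines. This NumPy illustration is hedged: the field geometry, dot speed, and wrap-around rule are assumed choices, not the authors' exact stimulus. On each frame a random subset of dots, chosen with probability equal to the coherence, steps in the signal direction while the rest step in random directions.

```python
import numpy as np

rng = np.random.default_rng(0)

def step_dots(xy, coherence, direction_deg=0.0, speed=0.02):
    """Advance a random-dot field by one frame.

    A fraction `coherence` of dots (chosen afresh each frame) moves in
    `direction_deg`; the rest move in random directions. Dots wrap
    around the unit square.
    """
    n = len(xy)
    signal = rng.random(n) < coherence
    theta = np.where(signal,
                     np.deg2rad(direction_deg),
                     rng.uniform(0.0, 2.0 * np.pi, n))
    step = speed * np.stack([np.cos(theta), np.sin(theta)], axis=1)
    return np.mod(xy + step, 1.0), signal

dots = rng.random((400, 2))           # 400 dots, unit square
dots, signal = step_dots(dots, coherence=0.1)
```

Re-drawing the signal subset every frame (rather than fixing it) is one common design choice; it prevents observers from tracking individual signal dots.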



2021 ◽ Vol 2 (3)
Author(s): Gustaf Halvardsson, Johanna Peterson, César Soto-Valero, Benoit Baudry

The automatic interpretation of sign languages is a challenging task, as it requires high-level vision and high-level motion processing systems to provide accurate image perception. In this paper, we use Convolutional Neural Networks (CNNs) and transfer learning to enable computers to interpret signs of the Swedish Sign Language (SSL) hand alphabet. Our model consists of a pre-trained InceptionV3 network optimized with the mini-batch gradient descent algorithm. We rely on transfer learning, reusing the representations the network learned during its pre-training. The final accuracy of the model, based on 8 study subjects and 9,400 images, is 85%. Our results indicate that CNNs are a promising approach to interpreting sign languages, and that transfer learning can achieve high testing accuracy despite a small training dataset. Furthermore, we describe the implementation details of deploying our model as a user-friendly web application.
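The optimizer named in the abstract, mini-batch gradient descent, can be illustrated on a toy stand-in for the paper's final classification layer. This sketch is not the paper's pipeline: random vectors replace InceptionV3 features, the class count and all hyperparameters are arbitrary, and only a softmax head plus its mini-batch update are shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for frozen feature-extractor output: random "features" and
# linearly generated labels (8 classes, 64-dimensional features).
n, d, k = 800, 64, 8
W_true = rng.standard_normal((d, k))
X = rng.standard_normal((n, d))
y = (X @ W_true).argmax(axis=1)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)        # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

W = np.zeros((d, k))                            # trainable softmax head
lr, batch = 0.1, 32
for epoch in range(20):
    order = rng.permutation(n)                  # reshuffle each epoch
    for i in range(0, n, batch):
        idx = order[i:i + batch]
        p = softmax(X[idx] @ W)
        p[np.arange(len(idx)), y[idx]] -= 1.0   # gradient of CE wrt logits
        W -= lr * (X[idx].T @ p) / len(idx)     # one mini-batch update

acc = float(((X @ W).argmax(axis=1) == y).mean())
```

In the transfer-learning setup, the frozen InceptionV3 base would produce the feature matrix `X`, and only the head weights (`W` here, plus biases in practice) would be trained.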


