MEG activity in visual areas of the human brain during target selection and sustained attention

2010 ◽  
Vol 10 (7) ◽  
pp. 94-94
Author(s):  
J. Martinez-Trujillo ◽  
T. Lennert ◽  
R. Cipriani ◽  
P. Jolicoeur ◽  
D. Cheyne
2019 ◽  
Vol 39 (49) ◽  
pp. 9797-9805 ◽  
Author(s):  
Malte Wöstmann ◽  
Mohsen Alavash ◽  
Jonas Obleser

NeuroImage ◽  
2006 ◽  
Vol 29 (1) ◽  
pp. 74-89 ◽  
Author(s):  
Peter Stiers ◽  
Ronald Peeters ◽  
Lieven Lagae ◽  
Paul Van Hecke ◽  
Stefan Sunaert

Author(s):  
QI ZHANG ◽  
KEN MOGI

The human ability to process visual information from the outside world still far exceeds man-made systems in accuracy and speed. In particular, humans can perceive 3-D objects from various cues, such as binocular disparity and monocular shading. Understanding the mechanisms of human visual processing may lead to a breakthrough in creating artificial visual systems. Here, we study human 3-D volumetric object perception induced by a visual phenomenon known as the pantomime effect and by monocular shading cues. We measured human brain activity using fMRI while the subjects observed the visual stimuli. A coordinated system of brain areas, including regions in the prefrontal and parietal cortex in addition to the occipital visual areas, was found to be involved in volumetric object perception.


2021 ◽  
Vol 15 ◽  
Author(s):  
Chi Zhang ◽  
Xiao-Han Duan ◽  
Lin-Yuan Wang ◽  
Yong-Li Li ◽  
Bin Yan ◽  
...  

Despite the remarkable similarities between convolutional neural networks (CNN) and the human brain, CNNs still fall behind humans in many visual tasks, indicating that considerable differences remain between the two systems. Here, we leverage adversarial noise (AN) and adversarial interference (AI) images to quantify the consistency between neural representations and perceptual outcomes in the two systems. Humans can successfully recognize AI images as the same categories as their corresponding regular images but perceive AN images as meaningless noise. In contrast, CNNs recognize AN images as belonging to the same categories as their corresponding regular images, but classify AI images into the wrong categories with surprisingly high confidence. We use functional magnetic resonance imaging to measure brain activity evoked by regular and adversarial images in the human brain, and compare it to the activity of artificial neurons in a prototypical CNN—AlexNet. In the human brain, we find that the representational similarity between regular and adversarial images largely echoes their perceptual similarity in all early visual areas. In AlexNet, however, the neural representations of adversarial images are inconsistent with network outputs in all intermediate processing layers, providing no neural foundations for the similarities at the perceptual level. Furthermore, we show that voxel-encoding models trained on regular images can successfully generalize to the neural responses to AI images but not AN images. These remarkable differences between the human brain and AlexNet in representation-perception association suggest that future CNNs should emulate both the behavior and the internal neural representations of the human brain.
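The representational-similarity comparison described here can be sketched with toy data: each system's responses (stimuli × voxels or units) are converted to a representational dissimilarity matrix (RDM), and the upper triangles of the two RDMs are correlated. The data and function names below are illustrative assumptions, not the study's actual pipeline:

```python
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between response patterns (rows = stimuli, columns = voxels/units)."""
    return 1.0 - np.corrcoef(patterns)

def representational_similarity(patterns_a, patterns_b):
    """Compare two RDMs by correlating their upper triangles."""
    iu = np.triu_indices(patterns_a.shape[0], k=1)
    va, vb = rdm(patterns_a)[iu], rdm(patterns_b)[iu]
    return np.corrcoef(va, vb)[0, 1]

# Toy data: 10 "stimuli" x 50 "voxels" for two systems
rng = np.random.default_rng(0)
brain = rng.standard_normal((10, 50))
model = brain + 0.1 * rng.standard_normal((10, 50))  # a noisy copy
print(representational_similarity(brain, model))     # close to 1
```

Comparing RDMs rather than raw responses sidesteps the problem that voxels and artificial units live in incommensurable spaces; only the geometry of the stimulus representations is compared.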


1997 ◽  
Vol 78 (3) ◽  
pp. 1433-1446 ◽  
Author(s):  
Vincent P. Ferrera ◽  
Stephen G. Lisberger

Ferrera, Vincent P. and Stephen G. Lisberger. Neuronal responses in visual areas MT and MST during smooth pursuit target selection. J. Neurophysiol. 78: 1433–1446, 1997. We recorded the activity of single neurons in the middle temporal (MT) and middle superior temporal (MST) visual areas in two macaque monkeys while the animals performed a smooth pursuit target selection task. The monkeys were presented with two moving stimuli of different colors and were trained to initiate smooth pursuit to the stimulus that matched the color of a previously given cue. We designed these experiments so that we could separate the component of the neuronal response that was driven by the visual stimulus from an extraretinal component that predicted the color or direction of the selected target. We found that for all cells in MT and MST the response was primarily determined by the visual stimulus. However, 14% (8 of 58) of MT neurons and 26% (22 of 84) of MST neurons had a small predictive component that was significant at the P ≤ 0.05 level. In some cells, the predictive component was clearly related to the color of the intended target, but more often it was correlated with the direction of the target. We have previously documented a systematic shift in the latency of smooth pursuit that depends on the relative direction of motion of the two stimuli. We found that neither the latency nor the amplitude of neuronal responses in MT or MST was correlated with behavioral latency. These results are consistent with a model for target selection in which a weak selection bias for the intended target is amplified by a competitive network that suppresses motion signals related to the nonintended stimulus. It is possible that the predictive component of neuronal responses in MT and MST contributes to the selection bias. However, the strength of the selection bias in MT and MST is not sufficient to account for the high degree of selectivity shown by pursuit behavior.
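The proposed mechanism, a weak selection bias amplified by competitive suppression of the nonintended motion signal, can be illustrated with a toy winner-take-all network. This is a hypothetical sketch of the general idea, not the authors' model; all parameters are arbitrary:

```python
import numpy as np

def compete(signals, bias, inhibition=0.9, steps=50):
    """Toy competitive network: two motion signals receive a weak
    selection bias; iterated mutual inhibition amplifies the bias
    until one signal dominates."""
    s = np.array(signals, dtype=float) * (1.0 + np.array(bias))
    for _ in range(steps):
        # each unit is suppressed in proportion to the other's activity
        s = np.maximum(s - inhibition * s[::-1], 0.0)
        s = s / (s.sum() + 1e-12)  # normalize to keep activity bounded
    return s

# Equal motion signals, 5% bias toward the cued target:
# the weakly biased signal ends up winning outright
print(np.round(compete([1.0, 1.0], [0.05, 0.0]), 3))
```

The point of the sketch is that a few-percent bias, of the magnitude reported for the predictive component in MT/MST, is sufficient for highly selective behavior once competition is added downstream.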


2004 ◽  
Vol 92 (3) ◽  
pp. 1880-1891 ◽  
Author(s):  
Peter Neri ◽  
Holly Bridge ◽  
David J. Heeger

Stereoscopic vision relies mainly on relative depth differences between objects rather than on their absolute distance in depth from where the eyes fixate. However, relative disparities are computed from absolute disparities, and it is not known where these two stages are represented in the human brain. Using functional MRI (fMRI), we assessed absolute and relative disparity selectivity with stereoscopic stimuli consisting of pairs of transparent planes in depth in which the absolute and relative disparity signals could be independently manipulated (at a local spatial scale). In experiment 1, relative disparity was kept constant, while absolute disparity was varied in one-half the blocks of trials (“mixed” blocks) and kept constant in the remaining one-half (“same” blocks), alternating between blocks. Because neuronal responses undergo adaptation and reduce their firing rate following repeated presentation of an effective stimulus, the fMRI signal reflecting activity of units selective for absolute disparity is expected to be smaller during “same” blocks as compared with “mixed” ones. Experiment 2 similarly manipulated relative disparity rather than absolute disparity. The results from both experiments were consistent with adaptation with differential effects across visual areas such that 1) dorsal areas (V3A, MT+/V5, V7) showed more adaptation to absolute than to relative disparity; 2) ventral areas (hV4, V8/V4α) showed an equal adaptation to both; and 3) early visual areas (V1, V2, V3) showed a small effect in both experiments. These results indicate that processing in dorsal areas may rely mostly on information about absolute disparities, while ventral areas split neural resources between the two types of stereoscopic information so as to maintain an important representation of relative disparity.
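The adaptation logic can be made concrete: if an area contains units selective for the manipulated disparity type, the block-averaged BOLD signal should be lower in "same" blocks (repeated stimulus) than in "mixed" blocks (varied stimulus). A minimal index over hypothetical block averages, not the study's data:

```python
import numpy as np

def adaptation_index(bold_same, bold_mixed):
    """Relative fMRI signal reduction in 'same' blocks vs 'mixed'
    blocks. Positive values indicate adaptation, i.e. selectivity
    for the disparity type that was repeated."""
    same, mixed = np.mean(bold_same), np.mean(bold_mixed)
    return (mixed - same) / mixed

# Hypothetical block-averaged % signal change for one area
same_blocks = np.array([0.80, 0.78, 0.82, 0.79])
mixed_blocks = np.array([1.00, 1.05, 0.98, 1.02])
print(round(adaptation_index(same_blocks, mixed_blocks), 3))  # 0.212
```

Computing this index separately for the absolute-disparity and relative-disparity experiments, area by area, is what allows the dorsal/ventral dissociation described above to be read off.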


1994 ◽  
Vol 72 (3) ◽  
pp. 1420-1424 ◽  
Author(s):  
P. Dupont ◽  
G. A. Orban ◽  
B. De Bruyn ◽  
A. Verbruggen ◽  
L. Mortelmans

1. The regions of the human brain responsive to motion were mapped using the H2(15)O positron emission tomography (PET) activation technique, comparing viewing of a moving random-dot pattern with viewing of a stationary dot pattern. The stimulus was optimized in dot density and was 3 degrees in diameter. 2. In addition to bilateral foci at the border between Brodmann areas 19 and 37, a V1/V2 focus, and a focus in the cuneus reported earlier, we observed activations in other visual areas (lower BA 19 and the parieto-occipital fissure), in the cerebellum, and in two other, presumed vestibular areas: the posterior bank of the lateral sulcus and the border of BA 2/40. 3. Homologies between monkey and human cortex are discussed.


2021 ◽  
Author(s):  
Shinji Nishimoto

Summary: In this paper, the process of building a model for predicting human brain activity under video-viewing conditions was described as part of an entry into the Algonauts Project 2021 Challenge. The model was designed to predict brain activity measured using functional MRI (fMRI) by weighted linear summations of the spatiotemporal visual features that appear in the video stimuli (video features). Two types of video features were used: (1) motion-energy features designed based on neurophysiological findings, and (2) features derived from a space-time vision transformer (TimeSformer). To cover a variety of video domains, features from TimeSformer models pre-trained on several different movie sets were combined. Through these model-building and validation processes, the results showed a certain correspondence between the hierarchical representation of the TimeSformer model and the hierarchical representation of the visual system in the brain. The motion-energy features are effective in predicting brain activity in the early visual areas, while TimeSformer-derived features are effective in higher-order visual areas, and a hybrid model that uses both is effective for predicting whole-brain activity.
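The "weighted linear summation" encoding model is, at its core, a regularized linear regression from stimulus features to voxel responses. A minimal ridge-regression sketch with synthetic data (the actual challenge entry uses motion-energy and TimeSformer features, not the random features assumed here):

```python
import numpy as np

def fit_ridge(features, bold, alpha=1.0):
    """Closed-form ridge regression: one weight vector per voxel,
    mapping stimulus features to measured responses."""
    n_feat = features.shape[1]
    gram = features.T @ features + alpha * np.eye(n_feat)
    return np.linalg.solve(gram, features.T @ bold)

rng = np.random.default_rng(1)
X_train = rng.standard_normal((200, 20))       # 200 time points x 20 features
true_w = rng.standard_normal((20, 5))          # ground truth for 5 voxels
Y_train = X_train @ true_w + 0.1 * rng.standard_normal((200, 5))

w = fit_ridge(X_train, Y_train, alpha=0.1)

# Evaluate on held-out stimuli: per-voxel Pearson r between
# predicted and noiseless responses
X_test = rng.standard_normal((50, 20))
Y_pred, Y_true = X_test @ w, X_test @ true_w
r = [np.corrcoef(Y_pred[:, v], Y_true[:, v])[0, 1] for v in range(5)]
print(np.round(r, 3))
```

Fitting such a model separately per voxel, then comparing held-out prediction accuracy across feature sets (motion energy vs. transformer layers), is how the hierarchy correspondence described above is typically assessed.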


2018 ◽  
Vol 39 (10) ◽  
pp. 3813-3826 ◽  
Author(s):  
Bruce D. Keefe ◽  
André D. Gouws ◽  
Aislin A. Sheldon ◽  
Richard J. W. Vernon ◽  
Samuel J. D. Lawrence ◽  
...  
