Neural Representation of Faces in Human Visual Cortex: the Roles of Attention, Emotion, and Viewpoint

Author(s): Patrik Vuilleumier

2019
Author(s): R.S. van Bergen, J.F.M. Jehee

Abstract: How does the brain represent the reliability of its sensory evidence? Here, we test whether sensory uncertainty is encoded in cortical population activity as the width of a probability distribution – a hypothesis that lies at the heart of Bayesian models of neural coding. We probe the neural representation of uncertainty by capitalizing on a well-known behavioral bias called serial dependence. Human observers of either sex reported the orientation of stimuli presented in sequence, while activity in visual cortex was measured with fMRI. We decoded probability distributions from population-level activity and found that serial dependence effects in behavior are consistent with a statistically advantageous sensory integration strategy, in which uncertain sensory information is given less weight. More fundamentally, our results suggest that probability distributions decoded from human visual cortex reflect the sensory uncertainty that observers rely on in their decisions, providing critical evidence for Bayesian theories of perception.
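As a rough illustration of the uncertainty-weighted integration idea summarized in this abstract (not the authors' actual model), the Python sketch below combines a previous and a current orientation estimate in inverse proportion to their variances; the function name and the noise values are hypothetical.

```python
import numpy as np

def integrate_orientations(prev_deg, curr_deg, prev_sd, curr_sd):
    """Precision-weighted combination of a previous and current orientation
    estimate (degrees), illustrating why noisier current evidence pulls the
    percept more strongly toward the preceding stimulus (serial dependence)."""
    # Circular difference so that 179 deg and 1 deg are treated as close.
    diff = (prev_deg - curr_deg + 90) % 180 - 90
    # Weights are inverse variances (precisions), normalized to sum to 1.
    w_prev = (1 / prev_sd**2) / (1 / prev_sd**2 + 1 / curr_sd**2)
    # Shift the current estimate toward the previous one by w_prev * diff.
    return (curr_deg + w_prev * diff) % 180

# Noisy current measurement (sd = 12 deg): larger bias toward the previous
# orientation than with a precise measurement (sd = 3 deg).
print(integrate_orientations(20.0, 30.0, prev_sd=5.0, curr_sd=12.0))
print(integrate_orientations(20.0, 30.0, prev_sd=5.0, curr_sd=3.0))
```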


2001
Vol 86 (3)
pp. 1398-1411
Author(s): Sabine Kastner, Peter De Weerd, Mark A. Pinsk, M. Idette Elizondo, Robert Desimone, ...

Neurophysiological studies in monkeys show that when multiple visual stimuli appear simultaneously in the visual field, they are not processed independently, but rather interact in a mutually suppressive way. This suggests that multiple stimuli compete for neural representation. Consistent with this notion, we have previously found in humans that functional magnetic resonance imaging (fMRI) signals in V1 and ventral extrastriate areas V2, V4, and TEO are smaller for simultaneously presented (i.e., competing) stimuli than for the same stimuli presented sequentially (i.e., not competing). Here we report that suppressive interactions between stimuli are also present in dorsal extrastriate areas V3A and MT, and we compare these interactions to those in areas V1 through TEO. To exclude the possibility that the differences in responses to simultaneously and sequentially presented stimuli were due to differences in the number of transient onsets, we tested for suppressive interactions in area V4, in an experiment that held constant the number of transient onsets. We found that the fMRI response to a stimulus in the upper visual field was suppressed by the presence of nearby stimuli in the lower visual field. Further, we excluded the possibility that the greater fMRI responses to sequential compared with simultaneous presentations were due to exogenous attentional cueing by having our subjects count T's or L's at fixation, an attentionally demanding task. Behavioral testing demonstrated that neither condition interfered with performance of the T/L task. Our previous findings suggested that suppressive interactions among nearby stimuli in areas V1 through TEO were scaled to the receptive field (RF) sizes of neurons in those areas. Here we tested this idea by parametrically varying the spatial separation among stimuli in the display. Display sizes ranged from 2 × 2° to 7 × 7° and were centered at 5.5° eccentricity. Based on the effects of display size on the magnitude of suppressive interactions, we estimated that RF sizes at an eccentricity of 5.5° were <2° in V1, 2–4° in V2, 4–6° in V4, larger than 7° (but still confined to a quadrant) in TEO, and larger than 6° (confined to a quadrant) in V3A. These estimates of RF sizes in human visual cortex are strikingly similar to those measured in physiological mapping studies in the homologous visual areas in monkeys.
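The quantitative logic of this abstract (comparing responses to sequential versus simultaneous presentations at several stimulus separations) can be illustrated with a minimal Python sketch. The response amplitudes below are hypothetical placeholders, not data from the study.

```python
import numpy as np

def suppression_index(r_seq, r_sim):
    """Fractional reduction of the fMRI response when stimuli are shown
    simultaneously (competing) rather than sequentially (not competing)."""
    return (r_seq - r_sim) / r_seq

# Hypothetical mean response amplitudes (percent signal change) for one area
# at increasing stimulus separations (display sizes from 2x2 to 7x7 deg).
separations_deg = np.array([2.0, 4.0, 6.0, 7.0])
r_seq = np.array([1.10, 1.12, 1.08, 1.11])
r_sim = np.array([0.70, 0.85, 1.00, 1.09])

for sep, s in zip(separations_deg, suppression_index(r_seq, r_sim)):
    print(f"separation {sep:.0f} deg: suppression index {s:.2f}")

# The separation at which suppression largely disappears gives a rough upper
# bound on receptive-field size in that area, which is the inference used to
# compare human RF estimates with monkey physiology.
```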


2014
Vol 14 (10)
pp. 880-880
Author(s): R. Wang, Y. Xu

2016
Author(s): Heeyoung Choo, Dirk B Walther

Humans efficiently grasp complex visual environments, making highly consistent judgments of entry-level category despite their high variability in visual appearance. How does the human brain arrive at the invariant neural representations underlying categorization of real-world environments? Here we show that the neural representation of visual environments in scene-selective human visual cortex relies on statistics of contour junctions, which provide cues for the three-dimensional arrangement of surfaces in a scene. We manipulated line drawings of real-world environments such that statistics of contour orientations or junctions were disrupted. Manipulated and intact line drawings were presented to participants in an fMRI experiment. Scene categories were decoded from neural activity patterns in the parahippocampal place area (PPA), the occipital place area (OPA), and other visual brain regions. Disruption of junctions but not orientations led to a drastic decrease in decoding accuracy in the PPA and OPA, indicating the reliance of these areas on intact junction statistics. Accuracy of decoding from early visual cortex, on the other hand, was unaffected by either image manipulation. We further show that the correlation of error patterns between decoding from the scene-selective brain areas and behavioral experiments is contingent on intact contour junctions. Finally, a searchlight analysis exposes the reliance of visually active brain regions on different sets of contour properties: statistics of contour length and curvature dominate neural representations of scene categories in early visual areas, whereas contour junctions dominate in high-level scene-selective brain regions.
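For readers unfamiliar with the decoding approach mentioned in this abstract, the following Python sketch shows the generic form of cross-validated multivoxel pattern classification of scene category from an ROI; the simulated arrays are placeholders and this is not the authors' pipeline.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical data: 120 trials x 200 voxels from one ROI (e.g., PPA),
# with 6 scene categories of 20 trials each; replace with real beta patterns.
n_trials, n_voxels, n_categories = 120, 200, 6
patterns = rng.standard_normal((n_trials, n_voxels))
labels = np.repeat(np.arange(n_categories), n_trials // n_categories)

# Cross-validated linear classifier: accuracy above chance (1/6) indicates
# that the ROI's activity patterns carry scene-category information.
clf = LinearSVC(dual=False)
accuracy = cross_val_score(clf, patterns, labels, cv=5).mean()
print(f"mean decoding accuracy: {accuracy:.2f} (chance = {1 / n_categories:.2f})")

# Comparing accuracies for intact versus junction-disrupted line drawings in a
# given ROI is what indicates whether that ROI relies on junction statistics.
```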


2008
Vol 39 (01)
Author(s): M Trenner, R Schubert, HR Heekeren, M Fahle

i-Perception
10.1068/if668
2012
Vol 3 (9)
pp. 668-668
Author(s): Yuko Hara, Justin L Gardner
