The time course of visual information extraction for identifying and categorizing same and other-race faces in Caucasian observers

2014 ◽  
Vol 14 (10) ◽  
pp. 1279-1279
Author(s):  
S. Lafortune ◽  
C. Blais ◽  
K. Robinson ◽  
J. Royer ◽  
J. Duncan ◽  
...

1999 ◽  
Vol 81 (5) ◽  
pp. 2558-2569 ◽  
Author(s):  
Pamela Reinagel ◽  
Dwayne Godwin ◽  
S. Murray Sherman ◽  
Christof Koch

Encoding of visual information by LGN bursts

Thalamic relay cells respond to visual stimuli either in burst mode, as a result of activation of a low-threshold Ca2+ conductance, or in tonic mode, when this conductance is inactive. We investigated the role of these two response modes for the encoding of the time course of dynamic visual stimuli, based on extracellular recordings of 35 relay cells from the lateral geniculate nucleus of anesthetized cats. We presented a spatially optimized visual stimulus whose contrast fluctuated randomly in time with frequencies of up to 32 Hz. We estimated the visual information in the neural responses using a linear stimulus reconstruction method. Both burst and tonic spikes carried information about stimulus contrast, exceeding one bit per action potential for the highest variance stimuli. The “meaning” of an action potential, i.e., the optimal estimate of the stimulus at times preceding a spike, was similar for burst and tonic spikes. In within-trial comparisons, tonic spikes carried about twice as much information per action potential as bursts, but bursts as unitary events encoded about three times more information per event than tonic spikes. The coding efficiency of a neuron for a particular stimulus is defined as the fraction of the neural coding capacity that carries stimulus information. Based on a lower bound estimate of coding efficiency, bursts had ∼1.5-fold higher efficiency than tonic spikes, or 3-fold if bursts were considered unitary events. Our main conclusion is that both bursts and tonic spikes encode stimulus information efficiently, which rules out the hypothesis that bursts are nonvisual responses.
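
As a rough illustration of the reconstruction-based estimate described above, the sketch below computes the standard coherence lower bound on the information rate between a spike train and a dynamic stimulus, and converts it to bits per spike. All signals, firing rates, and parameters are synthetic placeholders, not the authors' recordings or exact pipeline.

```python
# Minimal sketch: lower-bound information rate from linear stimulus
# reconstruction, via spike-stimulus coherence (Borst & Theunissen, 1999).
import numpy as np
from scipy.signal import coherence

fs = 1000.0                        # sampling rate in Hz (assumed)
t = np.arange(0, 60, 1 / fs)       # 60 s of simulated recording
rng = np.random.default_rng(0)

# Hypothetical stimulus: contrast fluctuating randomly at up to 32 Hz
stim = rng.standard_normal(t.size)
spec = np.fft.rfft(stim)
freqs = np.fft.rfftfreq(t.size, 1 / fs)
spec[freqs > 32] = 0               # crude low-pass at 32 Hz
stim = np.fft.irfft(spec, n=t.size)

# Hypothetical spike train loosely driven by stimulus contrast
rate = 20 * np.clip(1 + stim / stim.std(), 0, None)     # Hz
spikes = (rng.random(t.size) < rate / fs).astype(float)

# Coherence gives the standard lower bound on the information rate:
# I >= -integral log2(1 - gamma^2(f)) df
f, gamma2 = coherence(spikes, stim, fs=fs, nperseg=1024)
band = f <= 32
info_rate = -np.trapz(np.log2(1 - gamma2[band]), f[band])   # bits/s
bits_per_spike = info_rate * t[-1] / spikes.sum()
print(f"~{info_rate:.1f} bits/s, ~{bits_per_spike:.2f} bits/spike")
```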


PLoS ONE ◽  
2020 ◽  
Vol 15 (9) ◽  
pp. e0239305
Author(s):  
Isabelle Charbonneau ◽  
Karolann Robinson ◽  
Caroline Blais ◽  
Daniel Fiset

2012 ◽  
Vol 24 (7) ◽  
pp. 1645-1655 ◽  
Author(s):  
Sylvain Madec ◽  
Arnaud Rey ◽  
Stéphane Dufau ◽  
Michael Klein ◽  
Jonathan Grainger

We describe a novel method for tracking the time course of visual identification processes, here applied to the specific case of letter perception. We combine a new behavioral measure of letter identification times with single-letter ERP recordings. Letter identification processes are considered to take place in those time windows in which the behavioral measure and ERPs are correlated. A first significant correlation was found at occipital electrode sites around 100 msec poststimulus onset, most likely reflecting the contribution of low-level feature processing to letter identification. It was followed by a significant correlation at fronto-central sites around 170 msec, which we take to reflect letter-specific identification processes, including retrieval of a phonological code corresponding to the letter name. Finally, significant correlations were obtained around 220 msec at occipital electrode sites, possibly reflecting the kind of recurrent processing recently revealed by TMS studies. Overall, these results suggest that visual identification processes comprise a first (and probably preconscious) burst of visual information processing followed by a second, reentrant pass over visual areas that could be critical for conscious identification of the visual target.
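
The windowing logic of this method can be sketched as follows: at each electrode and time point, correlate single-letter ERP amplitudes with behavioral letter-identification times across the 26 letters, and treat windows of significant correlation as candidate identification stages. The array shapes, data, and significance criterion below are assumptions for illustration; the authors' exact statistics are not reproduced.

```python
# Minimal sketch: ERP-behavior correlation time course across letters.
import numpy as np
from scipy.stats import pearsonr

n_letters, n_electrodes, n_times = 26, 64, 300   # assumed dimensions
rng = np.random.default_rng(1)
erp = rng.standard_normal((n_letters, n_electrodes, n_times))  # ERP amplitudes
rt = rng.normal(500, 50, n_letters)              # identification times (ms)

r = np.zeros((n_electrodes, n_times))
p = np.ones((n_electrodes, n_times))
for e in range(n_electrodes):
    for s in range(n_times):
        r[e, s], p[e, s] = pearsonr(erp[:, e, s], rt)

# Windows where behavior and ERP correlate are candidate loci of
# letter-identification processing (cf. ~100, ~170, ~220 msec above).
sig = p < 0.05
print("electrode x time points with p < .05:", sig.sum())
```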


2008 ◽  
Vol 20 (7) ◽  
pp. 1235-1249 ◽  
Author(s):  
Roel M. Willems ◽  
Aslı Özyürek ◽  
Peter Hagoort

Understanding language always occurs within a situational context and, therefore, often implies combining streams of information from different domains and modalities. One such combination is that of spoken language and visual information, which are perceived together in a variety of ways during everyday communication. Here we investigate whether and how words and pictures differ in terms of their neural correlates when they are integrated into a previously built-up sentence context. This is assessed in two experiments looking at the time course (measuring event-related potentials, ERPs) and the locus (using functional magnetic resonance imaging, fMRI) of this integration process. We manipulated the ease of semantic integration of word and/or picture to a previous sentence context to increase the semantic load of processing. In the ERP study, an increased semantic load led to an N400 effect which was similar for pictures and words in terms of latency and amplitude. In the fMRI study, we found overlapping activations to both picture and word integration in the left inferior frontal cortex. Specific activations for the integration of a word were observed in the left superior temporal cortex. We conclude that despite obvious differences in representational format, semantic information coming from pictures and words is integrated into a sentence context in similar ways in the brain. This study adds to the growing insight that the language system incorporates (semantic) information coming from linguistic and extralinguistic domains with the same neural time course and by recruitment of overlapping brain areas.
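
The ERP side of such a design can be sketched as below: measure the N400 effect as the mean amplitude difference in a 300-500 msec window between high- and low-semantic-load endings, separately for word and picture endings, then compare the two effects. The epoch layout, electrode choice, window, and data are assumptions, not the authors' exact pipeline.

```python
# Minimal sketch: N400 effect size for word vs. picture sentence endings.
import numpy as np
from scipy.stats import ttest_rel

fs, n_subj, n_times = 500, 20, 450          # 500 Hz, -100..800 ms epoch (assumed)
times = np.arange(n_times) / fs - 0.1       # seconds relative to stimulus onset
win = (times >= 0.3) & (times <= 0.5)       # classic N400 window

rng = np.random.default_rng(2)
# subject-average ERPs at a centro-parietal site: (condition, subject, time);
# conditions: 0/1 = congruent/anomalous word, 2/3 = congruent/anomalous picture
erp = rng.standard_normal((4, n_subj, n_times))

n400_word = erp[1][:, win].mean(1) - erp[0][:, win].mean(1)
n400_pict = erp[3][:, win].mean(1) - erp[2][:, win].mean(1)

# Similar N400 effects for words and pictures would appear as a
# non-significant difference between the two effect sizes.
t, p = ttest_rel(n400_word, n400_pict)
print(f"word vs. picture N400 effect: t = {t:.2f}, p = {p:.3f}")
```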


Author(s):  
Katherine L. Hermann ◽  
Shridhar R. Singh ◽  
Isabelle A. Rosenthal ◽  
Dimitrios Pantazis ◽  
Bevil R. Conway

Hue and luminance contrast are the most basic visual features, emerging in early layers of convolutional neural networks trained to perform object categorization. In human vision, the timing of the neural computations that extract these features, and the extent to which they are determined by the same or separate neural circuits, is unknown. We addressed these questions using multivariate analyses of human brain responses measured with magnetoencephalography. We report four discoveries. First, it was possible to decode hue tolerant to changes in luminance contrast, and luminance contrast tolerant to changes in hue, consistent with the existence of separable neural mechanisms for these features. Second, the decoding time course for luminance contrast peaked 16-24 ms before hue and showed a more prominent secondary peak corresponding to decoding of stimulus cessation. These results support the idea that the brain uses luminance contrast as an updating signal to separate events within the constant stream of visual information. Third, neural representations of hue generalized to a greater extent across time, providing a neural correlate of the preeminence of hue over luminance contrast in perceptual grouping and memory. Finally, decoding of luminance contrast was more variable across participants for hues associated with daylight (orange and blue) than for anti-daylight (green and pink), suggesting that color-constancy mechanisms reflect individual differences in assumptions about natural lighting.
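
The cross-feature decoding logic can be sketched as follows: train a hue classifier on trials at one luminance contrast and test it on trials at the other, time point by time point, so that above-chance accuracy implies a contrast-tolerant hue representation. Sensor and trial counts, the classifier, and the data are assumptions; nothing here reproduces the authors' MEG preprocessing or cross-validation.

```python
# Minimal sketch: decoding hue tolerant to changes in luminance contrast.
import numpy as np
from sklearn.linear_model import LogisticRegression

n_trials, n_sensors, n_times = 200, 306, 120    # assumed MEG dimensions
rng = np.random.default_rng(3)
X = rng.standard_normal((n_trials, n_sensors, n_times))
hue = rng.integers(0, 2, n_trials)              # e.g., orange vs. blue
lum = rng.integers(0, 2, n_trials)              # high vs. low contrast

train, test = lum == 0, lum == 1                # hold luminance out entirely
acc = np.zeros(n_times)
for s in range(n_times):
    clf = LogisticRegression(max_iter=1000).fit(X[train, :, s], hue[train])
    acc[s] = clf.score(X[test, :, s], hue[test])

# Above-chance accuracy at time s means hue can be read out in a way
# that generalizes across the change in luminance contrast.
print("peak cross-contrast hue decoding accuracy:", acc.max())
```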


2009 ◽  
Vol 101 (4) ◽  
pp. 1813-1822 ◽  
Author(s):  
P. S. Khayat ◽  
A. Pooresmaeili ◽  
P. R. Roelfsema

Neurons in the frontal eye fields (FEFs) register incoming visual information and select visual stimuli that are relevant for behavior. Here we investigated the timing of the visual response and the timing of selection by recording from single FEF neurons in a curve-tracing task that requires shifts of attention followed by an oculomotor response. We found that the behavioral selection signal in area FEF had a latency of 147 ms and that it was delayed substantially relative to the visual response, which occurred 50 ms after stimulus presentation. We compared the FEF responses to activity previously recorded in the primary visual cortex (area V1) during the same task. Visual responses in area V1 preceded the FEF responses, but the latencies of selection signals in areas V1 and FEF were similar. The similarity of timing of selection signals in structures at opposite ends of the visual cortical processing hierarchy supports the view that stimulus selection occurs in an interaction between widely separated cortical regions.
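
One common way to estimate such response latencies is sketched below: take the first post-stimulus bin where the trial-averaged firing rate exceeds baseline by a fixed criterion and stays there. The criterion, bin size, smoothing, and spike data are assumptions, not the authors' exact procedure.

```python
# Minimal sketch: visual response latency from a trial-averaged PSTH.
import numpy as np

rng = np.random.default_rng(4)
n_trials, n_bins = 100, 400                 # 1 ms bins, -100..300 ms (assumed)
t = np.arange(n_bins) - 100                 # ms relative to stimulus onset
p_base, p_resp = 0.01, 0.06                 # baseline vs. driven spike prob./bin
spikes = rng.random((n_trials, n_bins)) < np.where(t < 50, p_base, p_resp)

# PSTH in Hz, lightly smoothed with a 5 ms boxcar
psth = np.convolve(spikes.mean(0) * 1000, np.ones(5) / 5, mode="same")
base = psth[t < 0]
thresh = base.mean() + 3 * base.std()       # assumed 3 SD criterion

# latency = first post-onset bin above threshold for 10 consecutive bins
above = psth > thresh
for i in np.where(above & (t >= 0))[0]:
    if above[i:i + 10].all():
        print(f"detected visual response latency: ~{t[i]} ms")
        break
```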


2013 ◽  
Vol 109 (12) ◽  
pp. 2883-2896 ◽  
Author(s):  
Ryan E. B. Mruczek ◽  
Isabell S. von Loga ◽  
Sabine Kastner

Humans have an amazing ability to quickly and efficiently recognize and interact with visual objects in their environment. The underlying neural processes supporting this ability have been mainly explored in the ventral visual stream. However, the dorsal stream has been proposed to play a critical role in guiding object-directed actions. This hypothesis is supported by recent neuroimaging studies that have identified object-selective and tool-related activity in human parietal cortex. In the present study, we sought to delineate tool-related information in the anterior portions of the human intraparietal sulcus (IPS) and relate it to recently identified motor-defined and topographic regions of interest (ROIs) using functional MRI in individual subjects. Consistent with previous reports, viewing pictures of tools compared with pictures of animals led to a higher blood oxygenation level-dependent (BOLD) response in the left anterior IPS. For every subject, this activation was located lateral, anterior, and inferior to topographic area IPS5 and lateral and inferior to a motor-defined human parietal grasp region (hPGR). In a separate experiment, subjects viewed pictures of tools, animals, graspable (non-tool) objects, and scrambled objects. An ROI-based time-course analysis showed that tools evoked a stronger BOLD response than animals throughout topographic regions of the left IPS. Additionally, graspable objects evoked stronger responses than animals, equal to responses to tools, in posterior regions and weaker responses than tools, equal to responses to animals, in anterior regions. Thus the left anterior tool-specific region may integrate visual information encoding graspable features of objects from more posterior portions of the IPS with experiential knowledge of object use and function to guide actions.
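
The ROI-based time-course analysis can be sketched as follows: average the BOLD signal over an ROI's voxels, convert to percent signal change, epoch it around block onsets, and compare condition-average time courses. The TR, ROI size, onsets, and data below are hypothetical placeholders, not the authors' design.

```python
# Minimal sketch: ROI-average BOLD time courses by stimulus category.
import numpy as np

rng = np.random.default_rng(5)
tr, n_vols, n_vox = 2.0, 300, 150                 # assumed TR (s) and ROI size
roi = 100 + rng.standard_normal((n_vols, n_vox))  # ROI voxel time series

ts = roi.mean(1)                                  # ROI-average signal
psc = 100 * (ts - ts.mean()) / ts.mean()          # percent signal change

onsets = {"tools": [10, 110, 210], "animals": [40, 140, 240],
          "graspable": [70, 170, 270]}            # block onsets in volumes (hypothetical)
window = 10                                       # volumes per epoch (= 20 s)

for cond, idx in onsets.items():
    epochs = np.stack([psc[i:i + window] for i in idx])
    tc = epochs.mean(0)                           # condition-average time course
    # in the study, tools > animals throughout left IPS, with graspable
    # objects patterning with tools posteriorly and with animals anteriorly
    print(cond, "mean response (% signal change):", tc.mean().round(3))
```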

