The physiology of perception in human temporal lobe is specialized for contextual novelty

2015 ◽  
Vol 114 (1) ◽  
pp. 256-263 ◽  
Author(s):  
Kai J. Miller ◽  
Dora Hermes ◽  
Nathan Witthoft ◽  
Rajesh P. N. Rao ◽  
Jeffrey G. Ojemann

The human ventral temporal cortex has regions that are known to selectively process certain categories of visual inputs; they are specialized for the content (“faces,” “places,” “tools”) and not the form (“line,” “patch”) of the image being seen. In our study, human patients with implanted electrocorticography (ECoG) electrode arrays were shown sequences of simple face and house pictures. We quantified neuronal population activity, finding robust face-selective sites on the fusiform gyrus and house-selective sites on the lingual/parahippocampal gyri. The magnitude and timing of single-trial responses were compared between novel (“house-face”) and repeated (“face-face”) stimulus pairs. More than half of the category-selective sites showed significantly greater total activity for the novel stimulus class. Approximately half of the face-selective sites (and none of the house-selective sites) showed significantly faster latency to peak (∼50 ms) for the novel stimulus class. This establishes subregions within category-selective areas that are differentially tuned to novelty in sequential context, where novel stimuli are processed faster in some regions and with increased activity in others.
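The single-trial comparison described above (total activity and latency to peak, contrasted between novel and repeated presentations) can be sketched as follows. This is an illustrative reconstruction, not the authors' analysis code: the trial data are simulated, and the sampling rate, bump shape, and noise levels are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 1000                      # sampling rate (Hz), assumed
t = np.arange(0, 0.6, 1 / fs)  # 600 ms post-stimulus window

def simulate_trials(n, peak_ms, amplitude):
    """Toy single-trial broadband responses: a Gaussian bump plus noise."""
    peaks = peak_ms / 1000 + rng.normal(0, 0.01, n)
    amps = amplitude + rng.normal(0, 0.1, n)
    bumps = amps[:, None] * np.exp(-((t - peaks[:, None]) ** 2) / (2 * 0.05 ** 2))
    return bumps + rng.normal(0, 0.05, (n, t.size))

# Novel ("house-face") trials: earlier peak and larger response than repeats,
# mimicking the effects reported in the abstract.
novel = simulate_trials(50, peak_ms=200, amplitude=1.5)
repeat = simulate_trials(50, peak_ms=250, amplitude=1.0)

def total_activity(trials):
    return trials.sum(axis=1) / fs          # integrated response per trial

def latency_to_peak(trials):
    return t[trials.argmax(axis=1)] * 1000  # ms

print("activity (novel vs repeat):",
      total_activity(novel).mean(), total_activity(repeat).mean())
print("latency ms (novel vs repeat):",
      latency_to_peak(novel).mean(), latency_to_peak(repeat).mean())
```

In the study, per-site significance of such differences would be established statistically across trials; the sketch only shows the two per-trial measures being compared.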

2016 ◽  
Vol 113 (46) ◽  
pp. E7277-E7286 ◽  
Author(s):  
Amy L. Daitch ◽  
Brett L. Foster ◽  
Jessica Schrouff ◽  
Vinitha Rangarajan ◽  
Itır Kaşikçi ◽  
...  

Brain areas within the lateral parietal cortex (LPC) and ventral temporal cortex (VTC) have been shown to code for abstract quantity representations and for symbolic numerical representations, respectively. To explore the fast dynamics of activity within each region and the interaction between them, we used electrocorticography recordings from 16 neurosurgical subjects implanted with grids of electrodes over these two regions and tracked the activity within and between the regions as subjects performed three different numerical tasks. Although our results reconfirm the presence of math-selective hubs within the VTC and LPC, we report here a remarkable heterogeneity of neural responses within each region at both millimeter and millisecond scales. Moreover, we show that the heterogeneity of response profiles within each hub mirrors the distinct patterns of functional coupling between them. Our results support the existence of multiple bidirectional functional loops operating between discrete populations of neurons within the VTC and LPC during the visual processing of numerals and the performance of arithmetic functions. These findings reveal information about the dynamics of numerical processing in the brain and also provide insight into the fine-grained functional architecture and connectivity within the human brain.


2021 ◽  
Vol 15 ◽  
Author(s):  
Takahiro Sanada ◽  
Christoph Kapeller ◽  
Michael Jordan ◽  
Johannes Grünwald ◽  
Takumi Mitsuhashi ◽  
...  

Face recognition is impaired in patients with prosopagnosia, which may occur as a side effect of neurosurgical procedures. Face-selective regions on the ventral temporal cortex have been localized with electrical cortical stimulation (ECS), electrocorticography (ECoG), and functional magnetic resonance imaging (fMRI). This is the first group study using within-patient comparisons to validate the mapping of face-selective regions with the aforementioned modalities. Five patients undergoing surgical treatment of intractable epilepsy participated in the study. Subdural grid electrodes were implanted on their ventral temporal cortices to localize seizure foci and face-selective regions as part of the functional mapping protocol. Face-selective regions were identified in all patients with fMRI, in four patients with ECoG, and in two patients with ECS. Of 177 tested electrode locations in the region of interest (ROI), defined by the fusiform gyrus and the inferior temporal gyrus, 54 face-selective locations were identified by at least one modality across all patients. fMRI showed the highest detection rate, identifying 70.4% of these face-selective locations, whereas ECoG and ECS identified 64.8% and 31.5%, respectively. Furthermore, 28 face-selective locations were co-localized by at least two modalities, with detection rates of 89.3% for fMRI, 85.7% for ECoG, and 53.6% for ECS. None of the five patients had face-recognition deficits after surgery, even though five of the face-selective locations (one identified by ECoG and the remaining four by fMRI) were within 10 mm of the resected volumes. Moreover, fMRI suffered from a large signal artifact on the ventral temporal cortex in the ROI, arising from the anatomical structures of the temporal base. In conclusion, ECS was insensitive in several patients, whereas ECoG and fMRI detected activation even within 10 mm of the resected volumes. Considering the potential for signal drop-out in fMRI, ECoG was the most reliable tool for identifying face-selective locations in this study. A multimodal approach can improve the specificity of ECoG and fMRI while minimizing the number of required ECS sessions. Hence, all modalities should be considered in a clinical mapping protocol, combining their results to identify co-localized face-selective locations.
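The detection rates reported above follow directly from per-modality counts over the 54 identified locations (and the 28 co-localized ones). A small sketch; the underlying counts of 38, 35, 17 and 25, 24, 15 are reconstructed from the reported percentages, so they are assumptions rather than figures stated in the abstract:

```python
# Reconstructed counts (assumed) matching the reported detection rates
total_face_locations = 54      # identified by at least one modality
colocalized = 28               # identified by at least two modalities

detected = {"fMRI": 38, "ECoG": 35, "ECS": 17}        # out of 54
detected_coloc = {"fMRI": 25, "ECoG": 24, "ECS": 15}  # out of 28

def rate(n, total):
    """Detection rate as a percentage, rounded to one decimal."""
    return round(100 * n / total, 1)

for modality in detected:
    print(modality,
          rate(detected[modality], total_face_locations), "% of all,",
          rate(detected_coloc[modality], colocalized), "% of co-localized")
```

Running this reproduces the percentages in the abstract (70.4/64.8/31.5 and 89.3/85.7/53.6), which is how the counts were back-solved.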


2020 ◽  
Author(s):  
D. Proklova ◽  
M.A. Goodale

Abstract
Animate and inanimate objects elicit distinct response patterns in the human ventral temporal cortex (VTC), but the exact features driving this distinction are still poorly understood. One prominent feature that distinguishes typical animals from inanimate objects, and that could potentially explain the animate-inanimate distinction in the VTC, is the presence of a face. In the current fMRI study, we investigated this possibility by creating a stimulus set that included animals with faces, faceless animals, and inanimate objects, carefully matched in order to minimize other visual differences. We used both searchlight-based and ROI-based representational similarity analysis (RSA) to test whether the presence of a face explains the animate-inanimate distinction in the VTC. The searchlight analysis revealed that when animals with faces were removed from the analysis, the animate-inanimate distinction almost disappeared. The ROI-based RSA revealed a similar pattern of results, but also showed that, even in the absence of faces, information about agency (a combination of an animal's capacity to move and to think) is present in parts of the VTC that are sensitive to animacy. Together, these analyses showed that animals with faces do elicit a stronger animate/inanimate response in the VTC, but that this effect is driven not by faces per se, or by the visual features of faces, but by other factors that correlate with face presence, such as the capacity for self-movement and thought. In short, the VTC appears to treat the face as a proxy for agency, a ubiquitous feature of familiar animals.

Significance Statement
Many studies have shown that images of animals are processed differently from inanimate objects in the human brain, particularly in the ventral temporal cortex (VTC). However, what features drive this distinction remains unclear. One important feature that distinguishes many animals from inanimate objects is a face. Here, we used fMRI to test whether the animate/inanimate distinction is driven by the presence of faces. We found that the presence of faces did indeed boost activity related to animacy in the VTC. A more detailed analysis, however, revealed that it was the association between faces and other attributes such as the capacity for self-movement and thinking, not the faces per se, that was driving the activity we observed.
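The core RSA computation used in studies like this can be sketched in a few lines: build a representational dissimilarity matrix (RDM) from condition-wise response patterns, then compare it to a model RDM coding the hypothesized distinction. The toy voxel patterns below are invented for illustration and are not the study's data; the shared "animate" component is an assumption built in to make the example behave like the reported animacy structure.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy condition-by-voxel response patterns for one ROI (assumed data):
# the two animal conditions share a pattern component that the object lacks.
n_voxels = 100
animate_base = rng.normal(size=n_voxels)
conditions = {
    "animal_with_face": animate_base + 0.5 * rng.normal(size=n_voxels),
    "faceless_animal":  animate_base + 0.5 * rng.normal(size=n_voxels),
    "inanimate_object": rng.normal(size=n_voxels),
}
patterns = np.array(list(conditions.values()))

# Neural RDM: 1 - Pearson correlation between condition patterns
rdm = 1 - np.corrcoef(patterns)
print(np.round(rdm, 2))

# Model RDM coding the animate/inanimate distinction
model_rdm = np.array([[0, 0, 1],
                      [0, 0, 1],
                      [1, 1, 0]])

# Compare the off-diagonal entries (RSA typically uses a rank correlation;
# a plain Pearson correlation is used here for brevity)
iu = np.triu_indices(3, k=1)
r = np.corrcoef(rdm[iu], model_rdm[iu])[0, 1]
print("model RDM fit r =", round(r, 2))
```

In a searchlight analysis this comparison is repeated at every cortical location; in the ROI-based variant it is done once per predefined region, which is the distinction drawn in the abstract.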


2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Eslam Mounier ◽  
Bassem Abdullah ◽  
Hani Mahdi ◽  
Seif Eldawlatly

Abstract
The Lateral Geniculate Nucleus (LGN) represents one of the major processing sites along the visual pathway. Despite its crucial role in processing visual information and its utility as one target for recently developed visual prostheses, it is much less studied compared to the retina and the visual cortex. In this paper, we introduce a deep learning encoder to predict LGN neuronal firing in response to different visual stimulation patterns. The encoder comprises a deep Convolutional Neural Network (CNN) that incorporates a spatiotemporal representation of the visual stimulus in addition to LGN neuronal firing history to predict the response of LGN neurons. Extracellular activity was recorded in vivo using multi-electrode arrays from single units in the LGN in 12 anesthetized rats, with a total neuronal population of 150 units. Neural activity was recorded in response to single-pixel, checkerboard, and geometrical-shape visual stimulation patterns. Extracted firing rates and the corresponding stimulation patterns were used to train the model. The performance of the model was assessed using different testing data sets and different firing rate windows. An overall mean correlation coefficient between the actual and the predicted firing rates of 0.57 and 0.7 was achieved for the 10 ms and the 50 ms firing rate windows, respectively. Results demonstrate that the model is robust to variability in the spatiotemporal properties of the recorded neurons, outperforming other examined models including the state-of-the-art Generalized Linear Model (GLM). The results indicate the potential of deep convolutional neural networks as viable models of LGN firing.
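The evaluation metric described above (correlation between actual and predicted firing rates at different window widths) can be sketched as follows. The spike train and the "model prediction" are simulated, since the study's data and trained CNN are not available here; the durations, rates, and noise level are assumptions. The sketch also illustrates why coarser windows tend to yield higher correlations: wider bins average out Poisson spiking noise.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated ground truth: a slowly varying firing rate (Hz) over 60 s (assumed)
duration, dt = 60.0, 0.001
t = np.arange(0, duration, dt)
true_rate = 20 * (1 + np.sin(2 * np.pi * 0.5 * t)) / 2 + 5

# Poisson-like spike train from the true rate, and a noisy stand-in "prediction"
spikes = (rng.random(t.size) < true_rate * dt).astype(float)
predicted_rate = true_rate + rng.normal(0, 3, t.size)

def binned_rate(values, window_s, is_spikes):
    """Firing rate (Hz) in non-overlapping windows of the given width."""
    n = int(window_s / dt)
    trimmed = values[: (values.size // n) * n].reshape(-1, n)
    return trimmed.sum(axis=1) / window_s if is_spikes else trimmed.mean(axis=1)

for window in (0.010, 0.050):   # 10 ms and 50 ms windows, as in the paper
    actual = binned_rate(spikes, window, is_spikes=True)
    pred = binned_rate(predicted_rate, window, is_spikes=False)
    r = np.corrcoef(actual, pred)[0, 1]
    print(f"{int(window * 1000)} ms window: r = {r:.2f}")
```

The same window-width effect is visible in the reported results (0.57 at 10 ms versus 0.7 at 50 ms), although the absolute values here are arbitrary products of the simulation.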

