object category
Recently Published Documents

TOTAL DOCUMENTS: 155 (FIVE YEARS: 28)
H-INDEX: 21 (FIVE YEARS: 3)

eLife ◽  
2021 ◽  
Vol 10 ◽  
Author(s):  
Jiedong Zhang ◽  
Yong Jiang ◽  
Yunjie Song ◽  
Peng Zhang ◽  
Sheng He

Regions sensitive to specific object categories, as well as organized spatial patterns sensitive to different features, have been found across the whole ventral temporal cortex (VTC). However, it remains unclear how specific feature representations are organized within each object-category region to support object identification. Are object features, such as object parts, represented with fine-scale spatial tuning within category-specific regions? Here, we used high-field 7T fMRI to examine the spatial tuning to different face parts within each face-selective region. Our results show consistent spatial tuning to face parts across individuals: within the right posterior fusiform face area (pFFA) and the right occipital face area (OFA), the posterior portion of each region was biased toward eyes, while the anterior portion was biased toward mouth and chin stimuli. These results demonstrate that within the occipital and fusiform face-processing regions there exists systematic spatial tuning to different face parts, which supports further computation combining them.
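A posterior-to-anterior tuning gradient of the kind reported in this abstract can be sketched as a correlation between each voxel's part preference and its anatomical position. The sketch below uses synthetic voxel data; the voxel count, response model, and noise level are illustrative assumptions, not the study's actual analysis:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic face-selective region: 200 voxels, each with a position along
# the posterior->anterior axis (0 = posterior, 1 = anterior) and response
# amplitudes (e.g., GLM betas) to two face-part conditions.
n_vox = 200
ypos = rng.uniform(0, 1, n_vox)
eyes = (1 - ypos) + rng.normal(0, 0.3, n_vox)   # stronger posteriorly
mouth = ypos + rng.normal(0, 0.3, n_vox)        # stronger anteriorly

# Per-voxel preference index: positive = prefers eyes, negative = mouth.
pref = eyes - mouth

# A systematic spatial tuning gradient appears as a correlation between
# the preference index and anatomical position.
r = np.corrcoef(ypos, pref)[0, 1]
print(f"posterior-anterior gradient: r = {r:.2f}")
```

A strongly negative correlation here means eye preference falls off (and mouth preference rises) as one moves anteriorly, which is the pattern the abstract describes for right pFFA and OFA.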


2021 ◽  
Author(s):  
Lenia Amaral ◽  
Rita Donato ◽  
Daniela Valerio ◽  
Egas Caparelli-Daquer ◽  
Jorge Almeida ◽  
...  

The neural processing within a brain region that responds to more than one object category can be separated by examining the horizontal modulations established by that region, suggesting that local representations can be affected by connections to distal areas in a category-specific way. Here, we first tested whether applying transcranial direct current stimulation (tDCS) to a region that responds to both hands and tools (posterior middle temporal gyrus; pMTG), while participants performed either a hand- or tool-related training task, would specifically target the trained category and thereby dissociate the overlapping neural processing. Second, we asked whether these effects were limited to the target area or extended to distal but functionally connected brain areas. After each combined tDCS and training session, participants viewed images of tools, hands, and animals in an fMRI scanner. Using multivoxel pattern analysis, we found that tDCS stimulation to pMTG improved the classification accuracy between tools and animals, but only when combined with a tool-training task (not a hand-training task). Surprisingly, however, tDCS stimulation to pMTG also improved the classification accuracy between hands and animals when combined with a tool-training task (not a hand-training task). Our findings suggest that overlapping but functionally specific networks can be separated by pairing a category-specific training task with tDCS, a strategy that can be applied more broadly to other cognitive domains, and they demonstrate the importance of horizontal modulations in object-category representations.


2021 ◽  
Author(s):  
Christopher Xie ◽  
Keunhong Park ◽  
Ricardo Martin-Brualla ◽  
Matthew Brown

2021 ◽  
Author(s):  
Polina Iamshchinina ◽  
Agnessa Karapetian ◽  
Daniel Kaiser ◽  
Radoslaw Martin Cichy

Humans can effortlessly categorize objects, whether they are conveyed through visual images or spoken words. To resolve the neural correlates of object categorization, studies have so far focused primarily on the visual modality; it therefore remains unclear how the brain extracts categorical information from auditory signals. In the current study we used EEG (N=47) and time-resolved multivariate pattern analysis to investigate (1) the time course with which object category information emerges in the auditory modality and (2) how the representational transition from individual-object identification to category representation compares between the auditory and visual modalities. Our results show that (1) auditory object category representations can be reliably extracted from EEG signals and (2) a similar representational transition occurs in the visual and auditory modalities, where an initial representation at the individual-object level is followed by a subsequent representation of the objects' category membership. Altogether, our results suggest an analogous hierarchy of information processing across sensory channels. However, we did not find evidence for a shared supra-modal code, suggesting that the contents of the different sensory hierarchies are ultimately modality-unique.
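Time-resolved multivariate pattern analysis, as used in this study, fits a separate classifier at each time point of the EEG epoch and tracks when decoding accuracy rises above chance. A minimal sketch on synthetic epochs follows; the channel count, epoch length, onset latency, and effect size are assumptions, not the study's parameters:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Synthetic EEG epochs: trials x channels x time points.
# Category information is injected only after "stimulus onset" (t >= 30).
n_trials, n_chan, n_time = 60, 32, 60
X = rng.normal(0, 1, (2 * n_trials, n_chan, n_time))
y = np.array([0] * n_trials + [1] * n_trials)
X[y == 1, :, 30:] += 0.4  # category signal emerges mid-epoch

# Time-resolved MVPA: an independent classifier at each time point.
acc = np.array([
    cross_val_score(LogisticRegression(max_iter=1000), X[:, :, t], y, cv=5).mean()
    for t in range(n_time)
])

print(f"pre-onset mean accuracy:  {acc[:30].mean():.2f}")
print(f"post-onset mean accuracy: {acc[30:].mean():.2f}")
```

The resulting accuracy time course stays at chance before the information is present and rises afterward; comparing such time courses across modalities is what allows the abstract's claim about an analogous representational transition.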


2021 ◽  
pp. 103911
Author(s):  
H. Ayoobi ◽  
H. Kasaei ◽  
M. Cao ◽  
R. Verbrugge ◽  
B. Verheij

2021 ◽  
Vol 21 (9) ◽  
pp. 2881
Author(s):  
Brett Bankson ◽  
Michael Ward ◽  
Edward Silson ◽  
Chris Baker ◽  
R. Mark Richardson ◽  
...  


2020 ◽  
Author(s):  
Mozhgan Shahmohammadi ◽  
Ehsan Vahab ◽  
Hamid Karimi-Rouzbahani

In order to develop object recognition algorithms that can approach human-level recognition performance, researchers have been studying how the human brain performs recognition for the past five decades. This work has already inspired AI-based object recognition algorithms, such as convolutional neural networks, which are among the most successful object recognition platforms today and can approach human performance in specific tasks. However, it is not yet clearly known how recorded brain activations convey information about object category processing. One main obstacle has been the lack of large feature sets with which to evaluate the information content of multiple aspects of neural activations. Here, we compared the information content of a large set of 25 features extracted from electroencephalography (EEG) time series recorded from human participants performing an object recognition task. This allowed us to characterize the aspects of brain activations most informative about object categories. Among the evaluated features, the event-related potential (ERP) components N1 and P2a were among the most informative, with the highest information in the theta frequency band. Upon limiting the analysis time window, we observed more information for features detecting temporally informative patterns in the signals. The results of this study can constrain previous theories about how the brain codes object category information.
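Comparing the information content of features extracted from EEG time series amounts to computing each candidate feature per trial and cross-validating a classifier on it. The sketch below contrasts just two features (signal variance vs. theta-band power) on synthetic single-channel epochs; the sampling rate, feature pair, and data are illustrative assumptions, not the study's 25-feature set:

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
fs = 250  # sampling rate in Hz (an assumption for this sketch)

# Synthetic single-channel epochs, two object categories.
n, n_time = 80, 250
X_raw = rng.normal(0, 1, (2 * n, n_time))
t = np.arange(n_time) / fs
X_raw[n:] += 0.4 * np.sin(2 * np.pi * 6 * t)  # theta-band (6 Hz) signal in category 2
y = np.array([0] * n + [1] * n)

def band_power(x, lo, hi):
    """Mean power of the FFT bins falling within [lo, hi] Hz."""
    freqs = np.fft.rfftfreq(x.shape[-1], d=1 / fs)
    power = np.abs(np.fft.rfft(x, axis=-1)) ** 2
    return power[..., (freqs >= lo) & (freqs <= hi)].mean(axis=-1)

# Two candidate features, each evaluated by cross-validated decoding.
features = {
    "variance": X_raw.var(axis=1, keepdims=True),
    "theta power": band_power(X_raw, 4, 8)[:, None],
}
clf = make_pipeline(StandardScaler(), LinearSVC(dual=False))
accs = {name: cross_val_score(clf, F, y, cv=5).mean()
        for name, F in features.items()}
for name, a in accs.items():
    print(f"{name:12s} accuracy: {a:.2f}")
```

Because the injected signal is narrow-band, the spectrally targeted feature decodes the category better than raw variance; ranking many such features by decoding accuracy is the logic behind identifying the most informative aspects of the neural signal.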

