Decoding human brain activity during real-world experiences

2007 ◽ Vol 11 (8) ◽ pp. 356-365 ◽ Author(s): Hugo J. Spiers, Eleanor A. Maguire
2017 ◽ Author(s): Radoslaw M. Cichy, Nikolaus Kriegeskorte, Kamila M. Jozwik, Jasper J.F. van den Bosch, Ian Charest

Abstract: Vision involves complex neuronal dynamics that link the sensory stream to behaviour. To capture the richness and complexity of the visual world and the behaviour it entails, we used an ecologically valid task with a rich set of real-world object images. We investigated how human brain activity, resolved in space with functional MRI and in time with magnetoencephalography, links the sensory stream to behavioural responses. We found that behaviour-related brain activity emerged rapidly in the ventral visual pathway, within 200 ms of stimulus onset. The link between stimuli, brain activity, and behaviour could not be accounted for by either category membership or visual features (as provided by an artificial deep neural network model). Our results identify behaviourally relevant brain activity during object vision and suggest that object representations guiding behaviour are complex, explained neither by visual features nor by semantic categories alone. Our findings support the view that visual representations in the ventral visual stream need to be understood in terms of their relevance to behaviour, and they highlight the importance of complex behavioural assessment for human brain mapping.
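Tests of this kind, asking whether a candidate model (e.g. DNN features) accounts for a brain-behaviour link, are commonly framed with representational dissimilarity matrices (RDMs). The following is a minimal, hypothetical sketch of one such test via partial rank correlation; the variable names and the partial-correlation approach are illustrative assumptions, not the authors' exact pipeline.

```python
# Hypothetical sketch: does a DNN-feature RDM account for the
# brain-behaviour link? (Not the authors' exact analysis pipeline.)
import numpy as np
from scipy.stats import spearmanr, rankdata

def upper_tri(rdm):
    """Vectorize the upper triangle of a square RDM (excluding the diagonal)."""
    i, j = np.triu_indices(rdm.shape[0], k=1)
    return rdm[i, j]

def partial_spearman(x, y, z):
    """Spearman correlation of x and y after regressing z out of both."""
    x, y, z = (rankdata(v) for v in (x, y, z))
    zd = np.column_stack([z, np.ones_like(z)])
    rx = x - zd @ np.linalg.lstsq(zd, x, rcond=None)[0]
    ry = y - zd @ np.linalg.lstsq(zd, y, rcond=None)[0]
    return spearmanr(rx, ry).correlation

# Toy symmetric RDMs over 92 object images (replace with real estimates).
n = 92
rng = np.random.default_rng(0)
behaviour_rdm, neural_rdm, dnn_rdm = (
    (m + m.T) / 2 for m in (rng.random((n, n)) for _ in range(3))
)

b, m, d = (upper_tri(r) for r in (behaviour_rdm, neural_rdm, dnn_rdm))
print("brain-behaviour correlation:", spearmanr(b, m).correlation)
print("after removing DNN features:", partial_spearman(b, m, d))
```

If the brain-behaviour correlation survives after partialling out the DNN-feature RDM, the model does not fully account for the link, which is the logic behind the conclusion stated in the abstract.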


2018 ◽ Vol 30 (11) ◽ pp. 1559-1576 ◽ Author(s): Seyed-Mahdi Khaligh-Razavi, Radoslaw Martin Cichy, Dimitrios Pantazis, Aude Oliva

Animacy and real-world size are properties that describe any object and thus bring basic order into our perception of the visual world. Here, we investigated how the human brain processes real-world size and animacy. For this, we applied representational similarity analysis to fMRI and MEG data, yielding views of brain activity with high spatial and high temporal resolution, respectively. Analysis of fMRI data revealed that a distributed and partly overlapping set of cortical regions, extending from occipital to ventral and medial temporal cortex, represented animacy and real-world size. Within this set, parahippocampal cortex stood out as the region representing animacy and size more strongly than most other regions. Further analysis of the detailed representational format revealed differences among regions involved in processing animacy. Analysis of MEG data revealed overlapping temporal dynamics of animacy and real-world size processing starting at around 150 msec, and provided the first neuromagnetic signature of real-world object size processing. Finally, to investigate the neural dynamics of size and animacy processing simultaneously in space and time, we combined MEG and fMRI with a novel extension of MEG-fMRI fusion by representational similarity. This analysis revealed partly overlapping and distributed spatiotemporal dynamics, with parahippocampal cortex singled out as a region that represented size and animacy persistently when other regions did not. Furthermore, the analysis highlighted the role of early visual cortex in representing real-world size. A control analysis revealed that the neural dynamics of processing animacy and size were distinct from those of processing low-level visual features. Together, our results provide a detailed spatiotemporal view of animacy and size processing in the human brain.
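The core of MEG-fMRI fusion by representational similarity can be sketched in a few lines: build one RDM per MEG time point and one per fMRI region, then correlate them to assign each millisecond of the MEG signal a likely cortical source. The sketch below uses assumed data shapes and random toy data; it illustrates the idea, not the authors' implementation.

```python
# Toy sketch of MEG-fMRI fusion via representational similarity.
# Assumed shapes (illustrative, not the authors' implementation):
#   meg:  (n_times, n_sensors, n_conditions) sensor patterns per time point
#   fmri: dict mapping ROI name -> (n_voxels, n_conditions) response patterns
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(patterns):
    """Condition-by-condition dissimilarity (1 - Pearson r), condensed form."""
    return pdist(patterns.T, metric="correlation")

def fusion(meg, fmri):
    """Correlate each MEG time-point RDM with each fMRI ROI RDM."""
    roi_rdms = {roi: rdm(p) for roi, p in fmri.items()}
    out = {roi: np.empty(meg.shape[0]) for roi in fmri}
    for t in range(meg.shape[0]):
        meg_rdm = rdm(meg[t])
        for roi, r in roi_rdms.items():
            out[roi][t] = spearmanr(meg_rdm, r).correlation
    return out  # one fusion time course per ROI

# Example with random data: 120 time points, 306 sensors, 92 conditions.
rng = np.random.default_rng(1)
meg = rng.standard_normal((120, 306, 92))
fmri = {"EVC": rng.standard_normal((500, 92)),
        "PHC": rng.standard_normal((350, 92))}
courses = fusion(meg, fmri)
print({roi: tc.shape for roi, tc in courses.items()})
```

A peak in, say, the PHC time course at 150 msec would indicate that the representational geometry measured in parahippocampal cortex with fMRI emerges in the MEG signal at that latency, which is the kind of spatiotemporal statement the abstract describes.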


2019 ◽ Vol 9 (22) ◽ pp. 4749 ◽ Author(s): Lingyun Jiang, Kai Qiao, Linyuan Wang, Chi Zhang, Jian Chen, ...

Decoding human brain activity, and especially reconstructing visual stimuli from functional magnetic resonance imaging (fMRI) data, has gained increasing attention in recent years. However, the high dimensionality and small quantity of fMRI data restrict reconstruction quality, especially for deep learning methods, which require large numbers of labelled samples. Unlike such methods, humans can recognize a new image after few exposures, because the human visual system naturally extracts features from any object and compares them. Inspired by this visual mechanism, we introduced comparison into a deep learning method, making full use of each sample and of the relationships within sample pairs by learning to compare, and on this basis proposed a Siamese reconstruction network (SRN). Using the SRN, we improved on prior results on two fMRI recording datasets, achieving 72.5% accuracy on the digit dataset and 44.6% accuracy on the character dataset. Essentially, this approach increases the training data from n samples to roughly 2n sample pairs, taking full advantage of the limited number of training samples. The SRN learns to draw sample pairs of the same class together, and to push sample pairs of different classes apart, in feature space.
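The comparison mechanism described here is the standard Siamese setup: both inputs pass through shared-weight encoders, and a contrastive loss pulls same-class pairs together while pushing different-class pairs apart in the embedding space. A minimal PyTorch sketch follows; the architecture, dimensions, and margin are assumptions for illustration, not the published SRN.

```python
# Minimal Siamese-style sketch in PyTorch (illustrative assumptions;
# not the published SRN architecture or hyperparameters).
import torch
import torch.nn as nn

class SiameseEncoder(nn.Module):
    """Shared-weight encoder mapping fMRI feature vectors to an embedding."""
    def __init__(self, in_dim: int, emb_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, emb_dim),
        )

    def forward(self, x1, x2):
        # The same weights encode both members of the pair.
        return self.net(x1), self.net(x2)

def contrastive_loss(z1, z2, same_class, margin: float = 1.0):
    """Pull same-class pairs together, push different-class pairs apart."""
    d = torch.norm(z1 - z2, dim=1)
    pos = same_class * d.pow(2)
    neg = (1 - same_class) * torch.clamp(margin - d, min=0).pow(2)
    return (pos + neg).mean()

# Toy usage: a batch of 32 pairs of 1000-dimensional fMRI feature vectors.
model = SiameseEncoder(in_dim=1000)
x1, x2 = torch.randn(32, 1000), torch.randn(32, 1000)
same = torch.randint(0, 2, (32,)).float()  # 1 = same class, 0 = different
z1, z2 = model(x1, x2)
loss = contrastive_loss(z1, z2, same)
loss.backward()
print(float(loss))
```

Because every sample can be paired with others as either a positive or a negative example, pairing multiplies the effective training set, which is the data-efficiency argument the abstract makes.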


Science ◽ 2020 ◽ Vol 367 (6482) ◽ pp. 1086.8-1087 ◽ Author(s): Peter Stern

1988 ◽ Vol 35 (11) ◽ pp. 960-966 ◽ Author(s): J.C. de Munck, B.W. van Dijk, H. Spekreijse

2006 ◽ Vol 96 (25) ◽ Author(s): Itai Doron, Eyal Hulata, Itay Baruchi, Vernon L. Towle, Eshel Ben-Jacob

NeuroImage ◽ 2000 ◽ Vol 11 (5) ◽ pp. 359-369 ◽ Author(s): Armin Fuchs, Viktor K. Jirsa, J.A. Scott Kelso
