Multi-View Bayesian Generative Model for Multi-Subject fMRI Data on Brain Decoding of Viewed Image Categories
Author(s): Yusuke Akamatsu, Ryosuke Harakawa, Takahiro Ogawa, Miki Haseyama
Author(s): Ozan Yildiz, Fethiye Irmak Dogan, Ilke Oztekin, Eda Mizrak, Fatos T. Yarman Vural

2018, Vol. 65 (7), pp. 1639-1653
Author(s): Chuncheng Zhang, Li Yao, Sutao Song, Xiaotong Wen, Xiaojie Zhao, et al.

2020, Vol. 68, pp. 5769-5781
Author(s): Yusuke Akamatsu, Ryosuke Harakawa, Takahiro Ogawa, Miki Haseyama

2016, Vol. 35 (24), pp. 4380-4397
Author(s): Fengqing Zhang, Wenxin Jiang, Patrick Wong, Ji-Ping Wang

2018
Author(s): Ghislain St-Yves, Thomas Naselaris

Abstract: We consider the inference problem of reconstructing a visual stimulus from brain activity measurements (e.g., fMRI) that encode this stimulus. Recovering a complete image is complicated by the fact that neural representations are noisy, high-dimensional, and contain incomplete information about image details. Thus, reconstructions of complex images from brain activity require a strong prior. Here we propose to train generative adversarial networks (GANs) to learn a generative model of images that is conditioned on measurements of brain activity. We consider two challenges of this approach: first, given that GANs require far more data to train than is typically collected in an fMRI experiment, how do we obtain enough samples to train a GAN that is conditioned on brain activity? Second, how do we ensure that our generated samples are robust against noise present in fMRI data? Our strategy to surmount both of these problems centers on the creation of surrogate brain activity samples that are generated by an encoding model. We find that the generative model thus trained generalizes to real fMRI data measured during perception of images and is able to reconstruct the basic outline of the stimuli.
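The following is a minimal sketch, in PyTorch, of the training strategy the abstract describes: an encoding model predicts surrogate brain activity from images, and a conditional GAN is trained against those noisy surrogates rather than against scarce real fMRI recordings. All names, dimensions, and the fully connected architectures (EncodingModel, Generator, Discriminator, train_step, VOXEL_DIM, and so on) are illustrative assumptions, not the authors' implementation.

# Sketch of a GAN conditioned on surrogate brain activity (assumed PyTorch).
import torch
import torch.nn as nn

IMG_DIM, VOXEL_DIM, NOISE_DIM = 64 * 64, 512, 100  # illustrative sizes

class EncodingModel(nn.Module):
    """Stand-in encoding model: flattened image -> predicted voxel responses."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(IMG_DIM, 1024), nn.ReLU(),
                                 nn.Linear(1024, VOXEL_DIM))
    def forward(self, img):
        return self.net(img)

class Generator(nn.Module):
    """Generator conditioned on a (surrogate) brain activity vector."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(NOISE_DIM + VOXEL_DIM, 1024),
                                 nn.ReLU(),
                                 nn.Linear(1024, IMG_DIM), nn.Tanh())
    def forward(self, z, voxels):
        return self.net(torch.cat([z, voxels], dim=1))

class Discriminator(nn.Module):
    """Discriminator scores an image jointly with its conditioning vector."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(IMG_DIM + VOXEL_DIM, 1024),
                                 nn.LeakyReLU(0.2),
                                 nn.Linear(1024, 1))
    def forward(self, img, voxels):
        return self.net(torch.cat([img, voxels], dim=1))

def train_step(enc, G, D, opt_g, opt_d, images, noise_std=0.1):
    """One conditional-GAN update on a batch of flattened images in [-1, 1].

    Surrogate brain activity = encoding-model prediction + Gaussian noise,
    which supplies unlimited conditioning samples from ordinary image data
    and exposes the generator to measurement-like noise.
    """
    bce = nn.BCEWithLogitsLoss()
    with torch.no_grad():
        voxels = enc(images) + noise_std * torch.randn_like(enc(images))
    z = torch.randn(images.size(0), NOISE_DIM)
    fake = G(z, voxels)
    # Discriminator: real (image, voxels) pairs vs. generated ones.
    opt_d.zero_grad()
    d_loss = (bce(D(images, voxels), torch.ones(images.size(0), 1)) +
              bce(D(fake.detach(), voxels), torch.zeros(images.size(0), 1)))
    d_loss.backward()
    opt_d.step()
    # Generator: fool the discriminator given the same conditioning.
    opt_g.zero_grad()
    g_loss = bce(D(fake, voxels), torch.ones(images.size(0), 1))
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()

if __name__ == "__main__":
    enc, G, D = EncodingModel(), Generator(), Discriminator()
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    images = torch.rand(8, IMG_DIM) * 2 - 1  # dummy batch in [-1, 1]
    print(train_step(enc, G, D, opt_g, opt_d, images))

Because the surrogate conditioning vectors are sampled on the fly from the encoding model (with added Gaussian noise), the GAN sees an effectively unlimited supply of noisy brain-activity samples, which is one plausible reading of how the surrogate strategy addresses both challenges raised in the abstract.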

