Faculty Opinions recommendation of Visual image reconstruction from human brain activity using a combination of multiscale local image decoders.

Author(s): Jitendra Sharma, Scott Gorlin

Neuron, 2008, Vol. 60 (5), pp. 915-929
Author(s): Yoichi Miyawaki, Hajime Uchida, Okito Yamashita, Masa-aki Sato, Yusuke Morito, ...

2009, Vol. 197, pp. 012021
Author(s): Yoichi Miyawaki, Hajime Uchida, Okito Yamashita, Masa-aki Sato, Yusuke Morito, ...

Author(s): Guohua Shen, Kshitij Dwivedi, Kei Majima, Tomoyasu Horikawa, Yukiyasu Kamitani

2020
Author(s): Tomoyasu Horikawa, Yukiyasu Kamitani

Summary: Visual image reconstruction from brain activity produces images whose features are consistent with the neural representations in the visual cortex for arbitrary visual instances [1–3], presumably reflecting the person's visual experience. Previous reconstruction studies have asked either how faithfully stimulus images can be reconstructed or whether mentally imagined contents can be reconstructed in the absence of external stimuli. However, many lines of vision research have demonstrated that even stimulus perception is shaped by both stimulus-induced and top-down processes. In particular, attention (or the lack of it) is known to profoundly affect visual experience [4–8] and brain activity [9–21]. Here, to investigate how top-down attention affects the neural representation of visual images and their reconstructions, we use a state-of-the-art method (deep image reconstruction [3]) to reconstruct visual images from fMRI activity measured while subjects attend to one of two images superimposed with equally weighted contrasts. Deep image reconstruction exploits the hierarchical correspondence between the brain and a deep neural network (DNN) to translate (decode) brain activity into DNN features at multiple layers, and then creates images that are consistent with the decoded DNN features [3, 22, 23]. Using a deep image reconstruction model trained on fMRI responses to single natural images, we decode brain activity during the attention trials. Behavioral evaluations show that the reconstructions resemble the attended rather than the unattended images. The reconstructions can be modeled as superimposed images with contrast biased toward the attended one, comparable to the appearance of the stimuli under attention measured in a separate session. Attentional modulations are found across a broad range of hierarchical visual representations and mirror the brain–DNN correspondence. Our results demonstrate that top-down attention counters stimulus-induced responses and modulates neural representations to render reconstructions in accordance with subjective appearance. The reconstructions appear to reflect the content of visual experience and volitional control, opening a new possibility of brain-based communication and creation.
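To make the two-stage pipeline concrete, the sketch below (Python, with PyTorch and scikit-learn) shows the structure the abstract describes: ridge decoders that translate fMRI voxel patterns into DNN features, followed by pixel-space optimization of an image whose DNN features match the decoded ones. This is a minimal illustration under stated assumptions, not the authors' implementation: the VGG19 backbone, the single-layer loss, and all variable names are illustrative, and the published method additionally combines multiple layers and a deep generator prior.

    import torch
    import torchvision.models as models
    from sklearn.linear_model import Ridge

    # --- Stage 1: decode DNN features from fMRI activity ---
    # fmri_train: (n_trials, n_voxels) voxel patterns; feat_train:
    # (n_trials, n_features) flattened DNN features of the training
    # images, computed from the same layer used below.
    def train_feature_decoder(fmri_train, feat_train, alpha=100.0):
        decoder = Ridge(alpha=alpha)  # one linear decoder per DNN layer
        decoder.fit(fmri_train, feat_train)
        return decoder

    # --- Stage 2: optimize an image to match the decoded features ---
    vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features.eval()
    for p in vgg.parameters():
        p.requires_grad_(False)  # only the image is optimized
    LAYER = 20  # an intermediate conv layer, chosen arbitrarily here

    def layer_features(img):
        x = img
        for i, module in enumerate(vgg):
            x = module(x)
            if i == LAYER:
                return x
        raise IndexError("LAYER out of range")

    def reconstruct(decoded_feat, n_steps=200, lr=0.05):
        """Gradient-descend a pixel image toward the decoded feature target.

        decoded_feat: decoder output reshaped to the layer's (1, C, H, W) shape.
        """
        img = torch.rand(1, 3, 224, 224, requires_grad=True)
        target = decoded_feat.detach()
        opt = torch.optim.Adam([img], lr=lr)
        for _ in range(n_steps):
            opt.zero_grad()
            loss = torch.nn.functional.mse_loss(layer_features(img), target)
            loss.backward()
            opt.step()
            img.data.clamp_(0, 1)  # keep pixels in a displayable range
        return img.detach()

In this setup, decoding brain activity from an attention trial and feeding the result to reconstruct() would yield an image biased toward whichever superimposed stimulus the decoded features favor, which is the comparison the behavioral evaluations above rely on.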


2022
Author(s): Jun Kai Ho, Tomoyasu Horikawa, Kei Majima, Yukiyasu Kamitani

The sensory cortex is characterized by general organizational principles such as topography and hierarchy. However, brain activity measured for identical input exhibits substantially different patterns across individuals. While anatomical and functional alignment methods have been proposed in functional magnetic resonance imaging (fMRI) studies, it remains unclear whether and how hierarchical and fine-grained representations can be converted between individuals while preserving the encoded perceptual contents. In this study, we evaluated machine learning models called neural code converters, which predict one individual's brain activity pattern (target) from another's (source) given the same stimulus, by decoding hierarchical visual features and reconstructing perceived images. The training data for the converters consisted of fMRI data obtained while identical sets of natural images were presented to pairs of individuals. Converters were trained using whole visual cortical voxels from V1 through the ventral object areas, without explicit labels of the visual areas. We decoded the converted brain activity patterns into the hierarchical visual features of a deep neural network (DNN) using decoders pre-trained on the target brain, and then reconstructed images from the decoded features. Without explicit information about the visual cortical hierarchy, the converters automatically learned the correspondence between visual areas of the same levels. DNN feature decoding at each layer showed higher decoding accuracy from the corresponding level of visual areas, indicating that hierarchical representations were preserved after conversion. Viewed images were faithfully reconstructed, with recognizable silhouettes of objects, even with relatively small amounts of data for converter training. The conversion also allows data to be pooled across multiple individuals, leading to stably high reconstruction accuracy compared with conversion between single individuals. These results demonstrate that the conversion learns hierarchical correspondence and preserves fine-grained representations of visual features, enabling visual image reconstruction using decoders trained on other individuals.
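As a rough illustration of the converter itself, the sketch below (Python, scikit-learn) trains a regression from source-brain voxels to target-brain voxels on paired responses to shared images, then applies it to held-out data. The ridge model, the data shapes, and `target_decoder` (a hypothetical stand-in for the per-layer DNN feature decoders pre-trained on the target brain) are assumptions for illustration, not the paper's exact implementation.

    import numpy as np
    from sklearn.linear_model import Ridge

    def train_converter(src_fmri, tgt_fmri, alpha=1000.0):
        """src_fmri, tgt_fmri: (n_trials, n_voxels) responses of two
        subjects to the SAME image sequence; no visual-area labels."""
        converter = Ridge(alpha=alpha)
        converter.fit(src_fmri, tgt_fmri)
        return converter

    # Dummy paired data for illustration (in practice: measured fMRI).
    rng = np.random.default_rng(0)
    src_train = rng.standard_normal((1200, 5000))  # source-subject voxels
    tgt_train = rng.standard_normal((1200, 4500))  # target-subject voxels
    src_test = rng.standard_normal((50, 5000))

    converter = train_converter(src_train, tgt_train)
    converted = converter.predict(src_test)  # (50, 4500), target voxel space
    # A decoder pre-trained on the target brain (hypothetical object
    # `target_decoder`) would then map `converted` to DNN features:
    # dnn_features = target_decoder.predict(converted)

Because the converter is a single linear map over all visual cortical voxels, any area-to-area correspondence it learns, such as the same-level matching reported above, emerges from the data rather than from anatomical labels.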


2018, Vol. 316, pp. 202-209
Author(s): Wei Huang, Hongmei Yan, Ran Liu, Lixia Zhu, Huangbin Zhang, ...
