Inter-individual deep image reconstruction

2022 ◽  
Author(s):  
Jun Kai Ho ◽  
Tomoyasu Horikawa ◽  
Kei Majima ◽  
Yukiyasu Kamitani

The sensory cortex is characterized by general organizational principles such as topography and hierarchy. However, measured brain activity given identical input exhibits substantially different patterns across individuals. While anatomical and functional alignment methods have been proposed in functional magnetic resonance imaging (fMRI) studies, it remains unclear whether and how hierarchical and fine-grained representations can be converted between individuals while preserving the encoded perceptual contents. In this study, we evaluated machine learning models called neural code converters that predict one individual's brain activity pattern (target) from another's (source) given the same stimulus, by the decoding of hierarchical visual features and the reconstruction of perceived images. The training data for converters consisted of fMRI data obtained with identical sets of natural images presented to pairs of individuals. Converters were trained using the whole visual cortical voxels from V1 through the ventral object areas, without explicit labels of visual areas. We decoded the converted brain activity patterns into hierarchical visual features of a deep neural network (DNN) using decoders pre-trained on the target brain and then reconstructed images via the decoded features. Without explicit information about visual cortical hierarchy, the converters automatically learned the correspondence between the visual areas of the same levels. DNN feature decoding at each layer showed higher decoding accuracies from corresponding levels of visual areas, indicating that hierarchical representations were preserved after conversion. The viewed images were faithfully reconstructed with recognizable silhouettes of objects even with relatively small amounts of data for converter training. The conversion also allows pooling data across multiple individuals, yielding stably high reconstruction accuracy compared to conversion between single pairs of individuals.
These results demonstrate that the conversion learns hierarchical correspondence and preserves the fine-grained representations of visual features, enabling visual image reconstruction using decoders trained on other individuals.

2021 ◽  
Vol 15 ◽  
Author(s):  
Trung Quang Pham ◽  
Shota Nishiyama ◽  
Norihiro Sadato ◽  
Junichi Chikazoe

Multivoxel pattern analysis (MVPA) has become a standard tool for decoding mental states from brain activity patterns. Recent studies have demonstrated that MVPA can be applied to decode the activity patterns of one region from those of other regions. By applying a similar region-to-region decoding technique, we examined whether the information represented in a given visual area can be explained by the information represented in other visual areas. We first predicted the brain activity patterns of an area on the visual pathway from the others, then subtracted the predicted patterns from the originals. Subsequently, visual features were derived from these residuals. During the visual perception task, eliminating the top-down signals enhanced the simple visual features represented in the early visual cortices. By contrast, eliminating the bottom-up signals enhanced the complex visual features represented in the higher visual cortices. The direction of such modulation effects varied across visual perception/imagery tasks, indicating that the information flow across the visual cortices is dynamically altered, reflecting the contents of visual processing. These results demonstrate that the distillation approach is a useful tool for estimating the hidden content of information conveyed across brain regions.
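The predict-then-subtract step can be sketched on toy data (a rough illustration under my own assumptions, not the study's pipeline: least-squares region-to-region prediction, with "top-down" influence simulated as a linear projection from a higher area):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy patterns: an "early" area whose activity mixes a bottom-up component
# with a top-down component driven by a "higher" area (shapes arbitrary).
n_trials, n_vox_early, n_vox_high = 100, 40, 30
X_high = rng.standard_normal((n_trials, n_vox_high))
top_down = X_high @ rng.standard_normal((n_vox_high, n_vox_early))
bottom_up = rng.standard_normal((n_trials, n_vox_early))
X_early = bottom_up + top_down

# Region-to-region decoding: least-squares prediction of the early area
# from the higher area, then subtraction to obtain residual patterns.
B, *_ = np.linalg.lstsq(X_high, X_early, rcond=None)
residual = X_early - X_high @ B

# The residuals should track the bottom-up component that the higher
# area cannot explain.
r_bu = np.corrcoef(residual.ravel(), bottom_up.ravel())[0, 1]
print(round(float(r_bu), 2))
```

Feature decoding would then be run on `residual` instead of `X_early`, which is the "distillation" step the abstract describes.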


Information ◽  
2021 ◽  
Vol 12 (6) ◽  
pp. 226 ◽  
Author(s):  
Lisa-Marie Vortmann ◽  
Leonid Schwenke ◽  
Felix Putze

Augmented reality is the fusion of virtual components and our real surroundings. The simultaneous visibility of generated and natural objects often requires users to direct their selective attention to a specific target that is either real or virtual. In this study, we investigated whether the real or virtual nature of an attended target can be inferred by using machine learning techniques to classify electroencephalographic (EEG) and eye-tracking data collected in augmented reality scenarios. A shallow convolutional neural net classified 3-second EEG data windows from 20 participants in a person-dependent manner with an average accuracy above 70% when the testing data and training data came from different trials. This accuracy could be significantly increased to 77% using a multimodal late-fusion approach that included the recorded eye-tracking data. Person-independent EEG classification was possible above chance level for 6 out of 20 participants. Thus, the reliability of such a brain–computer interface is high enough for it to be treated as a useful input mechanism for augmented reality applications.
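Late fusion in this setting typically means combining per-modality classifier outputs rather than raw signals. A minimal sketch (the equal weighting and all probability values below are made-up assumptions, not the study's fitted model):

```python
import numpy as np

# Hypothetical per-trial class probabilities from two independent
# classifiers (an EEG CNN and an eye-tracking model); values are invented.
p_eeg = np.array([[0.55, 0.45],   # trial 1: P(real), P(virtual)
                  [0.40, 0.60],
                  [0.48, 0.52]])
p_eye = np.array([[0.80, 0.20],
                  [0.45, 0.55],
                  [0.30, 0.70]])

def late_fusion(p_a, p_b, w=0.5):
    """Weighted average of per-modality class probabilities."""
    fused = w * p_a + (1.0 - w) * p_b
    return fused.argmax(axis=1)  # 0 = real target, 1 = virtual target

print(late_fusion(p_eeg, p_eye))  # combined decision per trial
```

The fusion weight `w` could itself be tuned on validation data, which is one common way such multimodal combinations outperform either modality alone.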


2009 ◽  
Vol 197 ◽  
pp. 012021 ◽  
Author(s):  
Yoichi Miyawaki ◽  
Hajime Uchida ◽  
Okito Yamashita ◽  
Masa-aki Sato ◽  
Yusuke Morito ◽  
...  

F1000Research ◽  
2018 ◽  
Vol 7 ◽  
pp. 142 ◽  
Author(s):  
Ayan Sengupta ◽  
Stefan Pollmann ◽  
Michael Hanke

Spatial filtering strategies, combined with multivariate decoding analysis of BOLD images, have been used to investigate the nature of the neural signal underlying the discriminability of brain activity patterns evoked by sensory stimulation, primarily in the visual cortex. Reported evidence indicates that such signals are spatially broadband in nature and are not primarily comprised of fine-grained activation patterns. However, it is unclear whether this is a general property of the BOLD signal or whether it is specific to the details of the employed analyses and stimuli. Here we performed an analysis of publicly available, high-resolution 7T fMRI data on the BOLD response to musical genres in the primary auditory cortex that matches a previously conducted study on decoding visual orientation from V1. The results show that the pattern of decoding accuracies with respect to different types and levels of spatial filtering is comparable to that obtained from V1, despite considerable differences in the respective cortical circuitry.
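The logic of comparing decoding accuracy across spatial-filtering levels can be illustrated on toy 1-D patterns (a sketch under my own assumptions: a moving-average low-pass filter and a template-correlation classifier, neither of which is claimed to match the study's methods):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 1-D voxel patterns for two stimulus classes whose difference lives
# at a coarse spatial scale (purely illustrative, not real fMRI data).
n_vox = 64
x = np.linspace(0, 2 * np.pi, n_vox)
templates = np.stack([np.sin(x), np.cos(x)])

def smooth(p, width):
    """Spatial low-pass filtering with a moving-average kernel."""
    k = np.ones(width) / width
    return np.convolve(p, k, mode="same")

def classify(trial, width=1):
    """Correlate a (filtered) trial with each (filtered) class template."""
    t = smooth(trial, width)
    scores = [np.corrcoef(t, smooth(tpl, width))[0, 1] for tpl in templates]
    return int(np.argmax(scores))

# Decoding accuracy without (width 1) and with (width 9) spatial filtering.
trials = [(templates[c] + rng.standard_normal(n_vox), c)
          for c in (0, 1) for _ in range(25)]
accs = {}
for width in (1, 9):
    accs[width] = float(np.mean([classify(tr, width) == c
                                 for tr, c in trials]))
    print(width, accs[width])
```

When the discriminative signal is coarse-scale, low-pass filtering preserves (or even helps) decoding; a steep accuracy drop under smoothing would instead point to fine-grained pattern information.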


2016 ◽  
Vol 113 (5) ◽  
pp. E606-E615 ◽  
Author(s):  
Christopher M. Lewis ◽  
Conrado A. Bosman ◽  
Thilo Womelsdorf ◽  
Pascal Fries

Intrinsic covariation of brain activity has been studied across many levels of brain organization. Between visual areas, neuronal activity covaries primarily among portions with similar retinotopic selectivity. We hypothesized that spontaneous interareal coactivation is subserved by neuronal synchronization. We performed simultaneous high-density electrocorticographic recordings across the dorsal aspect of several visual areas in one hemisphere in each of two awake monkeys to investigate spatial patterns of local and interareal synchronization. We show that stimulation-induced patterns of interareal coactivation were reactivated in the absence of stimulation for the visual quadrant covered. Reactivation occurred through both interareal cofluctuation of local activity and interareal phase synchronization. Furthermore, the trial-by-trial covariance of the induced responses recapitulated the pattern of interareal coupling observed during stimulation, i.e., the signal correlation. Reactivation-related synchronization showed distinct peaks in the theta, alpha, and gamma frequency bands. During passive states, this rhythmic reactivation was augmented by specific patterns of arrhythmic correspondence. These results suggest that networks of intrinsic covariation observed at multiple levels and with several recording techniques are related to synchronization and that behavioral state may affect the structure of intrinsic dynamics.
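Interareal phase synchronization of the kind reported here is commonly quantified by the phase-locking value, the mean resultant length of the phase differences (this particular measure and the simulated phases below are my assumptions for illustration, not necessarily the study's metric):

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated instantaneous phases for two "areas": one consistently
# phase-lagged relative to the other, one unrelated.
n_obs = 500
phase_a = rng.uniform(0, 2 * np.pi, n_obs)
phase_b_locked = phase_a + 0.3 * rng.standard_normal(n_obs)
phase_b_random = rng.uniform(0, 2 * np.pi, n_obs)

def plv(ph1, ph2):
    """Phase-locking value: mean resultant length of the phase difference
    (1 = perfect locking, ~0 = no consistent phase relation)."""
    return float(np.abs(np.mean(np.exp(1j * (ph1 - ph2)))))

print(round(plv(phase_a, phase_b_locked), 2))
print(round(plv(phase_a, phase_b_random), 2))
```

Computed per frequency band (e.g., from band-pass-filtered signals), such a measure would reveal the theta, alpha, and gamma peaks the abstract describes.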


2020 ◽  
Author(s):  
Tomoyasu Horikawa ◽  
Yukiyasu Kamitani

Summary
Visual image reconstruction from brain activity produces images whose features are consistent with the neural representations in the visual cortex given arbitrary visual instances [1–3], presumably reflecting the person’s visual experience. Previous reconstruction studies have been concerned either with how stimulus images are faithfully reconstructed or with whether mentally imagined contents can be reconstructed in the absence of external stimuli. However, many lines of vision research have demonstrated that even stimulus perception is shaped both by stimulus-induced processes and top-down processes. In particular, attention (or the lack of it) is known to profoundly affect visual experience [4–8] and brain activity [9–21]. Here, to investigate how top-down attention impacts the neural representation of visual images and the reconstructions, we use a state-of-the-art method (deep image reconstruction [3]) to reconstruct visual images from fMRI activity measured while subjects attend to one of two images superimposed with equally weighted contrasts. Deep image reconstruction exploits the hierarchical correspondence between the brain and a deep neural network (DNN) to translate (decode) brain activity into DNN features of multiple layers, and then create images that are consistent with the decoded DNN features [3, 22, 23]. Using the deep image reconstruction model trained on fMRI responses to single natural images, we decode brain activity during the attention trials. Behavioral evaluations show that the reconstructions resemble the attended rather than the unattended images. The reconstructions can be modeled by superimposed images with contrasts biased to the attended one, which are comparable to the appearance of the stimuli under attention measured in a separate session. Attentional modulations are found in a broad range of hierarchical visual representations and mirror the brain–DNN correspondence.
Our results demonstrate that top-down attention counters stimulus-induced responses and modulates neural representations to render reconstructions in accordance with subjective appearance. The reconstructions appear to reflect the content of visual experience and volitional control, opening a new possibility for brain-based communication and creation.
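The "superimposed images with contrasts biased to the attended one" model can be sketched directly (the mixing weights and random images below are made-up for illustration; the study fit such weights to behavioral and reconstruction data):

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in images (random pixel values; real stimuli were natural images).
img_attended = rng.random((8, 8))
img_unattended = rng.random((8, 8))

def superimpose(a, b, w_attended=0.5):
    """Contrast-weighted superposition of two images."""
    return w_attended * a + (1.0 - w_attended) * b

stimulus = superimpose(img_attended, img_unattended, 0.5)  # equal contrast
modeled = superimpose(img_attended, img_unattended, 0.8)   # attention-biased

# The biased mixture resembles the attended image more than the other.
r_att = np.corrcoef(modeled.ravel(), img_attended.ravel())[0, 1]
r_un = np.corrcoef(modeled.ravel(), img_unattended.ravel())[0, 1]
print(r_att > r_un)
```

The physical stimulus corresponds to `w_attended = 0.5`; the reconstructions behaving like `w_attended > 0.5` is what indicates that attention biased the neural representation toward the attended image.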


2019 ◽  
Author(s):  
Robert Chavez ◽  
Dylan D. Wagner

Humans continually form and update impressions of each other’s identities based on the disclosure of thoughts, feelings, and beliefs. At the same time, individuals also have specific beliefs and knowledge about their own self-concept. Over a decade of social neuroscience research has shown that retrieving information about the self and about other persons recruits similar areas of the medial prefrontal cortex (MPFC); however, it remains unclear whether an individual’s neural representation of self is reflected in the brains of well-known others or whether the two representations share no common relationship. Here we examined this question in a tight-knit network of friends as they engaged in a round-robin trait evaluation task in which each participant was both perceiver and target for every other participant and, in addition, also evaluated themselves. Using functional magnetic resonance imaging and a multilevel modeling approach, we show that multivoxel brain activity patterns in the MPFC during a person’s self-referential thought are correlated with those of friends when thinking of that same person. Moreover, the similarity of neural self/other patterns was itself positively associated with the similarity of self/other trait ratings as measured behaviorally in a separate session. These findings suggest that accuracy in person perception may be predicated on the degree to which the brain activity pattern associated with an individual thinking about their own self-concept is similarly reflected in the brains of others.
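The core similarity measure can be sketched on simulated patterns (everything below is invented data for illustration; the study additionally used multilevel models to relate such similarities to behavioral ratings):

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulated MPFC multivoxel patterns: one target's "self" pattern, and each
# friend's pattern while judging that same target (shared signal + noise).
n_vox, n_friends = 100, 8
self_pattern = rng.standard_normal(n_vox)
friend_patterns = [self_pattern + 1.5 * rng.standard_normal(n_vox)
                   for _ in range(n_friends)]

# Self/other neural similarity: pattern correlation per friend.
self_other_similarity = np.array([np.corrcoef(self_pattern, p)[0, 1]
                                  for p in friend_patterns])
print(self_other_similarity.round(2))
```

In the actual analysis, these per-dyad similarity scores (not their raw values here) would be entered into a multilevel model alongside the behavioral self/other rating similarities.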


2021 ◽  
Author(s):  
Shinji Nishimoto

Summary
In this paper, the process of building a model for predicting human brain activity under video viewing conditions is described as part of an entry into the Algonauts Project 2021 Challenge. The model was designed to predict brain activity measured using functional MRI (fMRI) by weighted linear summations of the spatiotemporal visual features that appear in the video stimuli (video features). Two types of video features were used: (1) motion-energy features designed based on neurophysiological findings, and (2) features derived from a space-time vision transformer (TimeSformer). To utilize the features of various video domains, the features of TimeSformer models pre-trained on several different movie sets were combined. Through these model building and validation processes, results showed that there is a certain correspondence between the hierarchical representation of the TimeSformer model and the hierarchical representation of the visual system in the brain. The motion-energy features are effective in predicting brain activity in the early visual areas, while TimeSformer-derived features are effective in higher-order visual areas, and a hybrid model that uses both motion-energy and TimeSformer features is effective for predicting whole-brain activity.
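The "weighted linear summation of video features" is a standard linear encoding model; a minimal sketch (random features stand in for motion-energy or TimeSformer features, and the ridge estimator is an assumption, not necessarily the challenge entry's exact regression):

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy encoding model: voxel responses as weighted linear sums of stimulus
# features (random stand-ins for motion-energy / TimeSformer features).
n_clips, n_feat, n_vox = 120, 30, 10
F_train = rng.standard_normal((n_clips, n_feat))
W_true = rng.standard_normal((n_feat, n_vox))
Y_train = F_train @ W_true + 0.2 * rng.standard_normal((n_clips, n_vox))

# Ridge-regularized weight estimation, one weight vector per voxel.
alpha = 1.0
W = np.linalg.solve(F_train.T @ F_train + alpha * np.eye(n_feat),
                    F_train.T @ Y_train)

# Held-out prediction accuracy (per-voxel correlation).
F_test = rng.standard_normal((40, n_feat))
Y_test = F_test @ W_true + 0.2 * rng.standard_normal((40, n_vox))
Y_pred = F_test @ W
r = [np.corrcoef(Y_pred[:, v], Y_test[:, v])[0, 1] for v in range(n_vox)]
print(round(float(np.mean(r)), 2))
```

A hybrid model of the kind described would simply concatenate both feature sets along the feature axis before fitting `W`, letting the regression weight each feature type per voxel.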


2015 ◽  
Author(s):  
Christopher Lewis ◽  
Conrado Bosman ◽  
Thilo Womelsdorf ◽  
Pascal Fries

Intrinsic covariation of brain activity has been studied across many levels of brain organization. Between visual areas, neuronal activity covaries primarily among portions with similar retinotopic selectivity. We hypothesized that spontaneous inter-areal co-activation is subserved by neuronal synchronization. We performed simultaneous high-density electrocorticographic recordings across several visual areas in awake monkeys to investigate spatial patterns of local and inter-areal synchronization. We show that stimulation-induced patterns of inter-areal co-activation were reactivated in the absence of stimulation. Reactivation occurred through both inter-areal co-fluctuation of local activity and inter-areal phase synchronization. Furthermore, the trial-by-trial covariance of the induced responses recapitulated the pattern of inter-areal coupling observed during stimulation, i.e., the signal correlation. Reactivation-related synchronization showed distinct peaks in the theta, alpha, and gamma frequency bands. During passive states, this rhythmic reactivation was augmented by specific patterns of arrhythmic correspondence. These results suggest that networks of intrinsic covariation observed at multiple levels and with several recording techniques are related to synchronization and that behavioral state may affect the structure of intrinsic dynamics.

