Category Decoding of Visual Stimuli From Human Brain Activity Using a Bidirectional Recurrent Neural Network to Simulate Bidirectional Information Flows in Human Visual Cortices

2019 ◽  
Vol 13 ◽  
Author(s):  
Kai Qiao ◽  
Jian Chen ◽  
Linyuan Wang ◽  
Chi Zhang ◽  
Lei Zeng ◽  
...  
Fractals ◽  
2018 ◽  
Vol 26 (05) ◽  
pp. 1850069 ◽  
Author(s):  
MOHAMMAD ALI AHMADI-PAJOUH ◽  
TIRDAD SEIFI ALA ◽  
FATEMEH ZAMANIAN ◽  
HAMIDREZA NAMAZI ◽  
SAJAD JAFARI

Analysis of human behavior is one of the major research topics in neuroscience. Human behavior is closely related to brain activity, so analyzing brain activity is fundamental to analyzing behavior. Electroencephalography (EEG), one of the most widely used methods for measuring brain activity, produces a chaotic signal with fractal characteristics. This study examines the relation between the fractal structure (complexity) of the human EEG signal and the applied visual stimulus. For this purpose, we chose two types of visual stimuli, namely living and non-living stimuli. We demonstrate that the fractal structure of the human EEG signal differs significantly between living and non-living visual stimuli. The approach developed here can be applied to other kinds of stimuli in order to classify the brain response by stimulus type.
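To make the fractal analysis concrete, the sketch below estimates a fractal (complexity) measure for EEG epochs from the two stimulus categories. The abstract does not name the estimator used, so Higuchi's fractal dimension is assumed here, and the epoch arrays are synthetic placeholders standing in for real EEG recordings.

```python
import numpy as np

def higuchi_fd(signal, k_max=10):
    """Estimate Higuchi's fractal dimension of a 1-D signal."""
    x = np.asarray(signal, dtype=float)
    n = x.size
    log_lk, log_inv_k = [], []
    for k in range(1, k_max + 1):
        lengths = []
        for m in range(k):
            idx = np.arange(m, n, k)
            if idx.size < 2:
                continue
            # Normalized curve length for offset m and scale k (Higuchi, 1988)
            length = np.abs(np.diff(x[idx])).sum() * (n - 1) / ((idx.size - 1) * k * k)
            lengths.append(length)
        if lengths:
            log_lk.append(np.log(np.mean(lengths)))
            log_inv_k.append(np.log(1.0 / k))
    # The fractal dimension is the slope of log L(k) versus log(1/k)
    slope, _ = np.polyfit(log_inv_k, log_lk, 1)
    return slope

# Hypothetical comparison: in practice these arrays would hold EEG epochs
# (trials x samples) recorded during living vs. non-living visual stimuli.
rng = np.random.default_rng(0)
epochs_living = rng.standard_normal((20, 1000))                      # white-noise-like
epochs_nonliving = np.cumsum(rng.standard_normal((20, 1000)), axis=1)  # Brownian-like
fd_living = [higuchi_fd(ep) for ep in epochs_living]
fd_nonliving = [higuchi_fd(ep) for ep in epochs_nonliving]
print(np.mean(fd_living), np.mean(fd_nonliving))  # ~2 for white noise, ~1.5 for Brownian
```

Comparing the two sets of fractal-dimension values (e.g., with a t-test across trials) is one way to quantify the significant difference between stimulus categories reported in the abstract.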


2017 ◽  
Author(s):  
Guohua Shen ◽  
Tomoyasu Horikawa ◽  
Kei Majima ◽  
Yukiyasu Kamitani

Machine learning-based analysis of human functional magnetic resonance imaging (fMRI) patterns has enabled the visualization of perceptual content. However, it has been limited to reconstruction with low-level image bases (Miyawaki et al., 2008; Wen et al., 2016) or to matching against exemplars (Naselaris et al., 2009; Nishimoto et al., 2011). Recent work showed that visual cortical activity can be decoded (translated) into hierarchical features of a deep neural network (DNN) for the same input image, providing a way to make use of the information contained in hierarchical visual features (Horikawa & Kamitani, 2017). Here, we present a novel image reconstruction method in which the pixel values of an image are optimized to make its DNN features similar to those decoded from human brain activity at multiple layers. We found that the generated images resembled the stimulus images (both natural images and artificial shapes) and the subjective visual content during imagery. While our model was trained solely with natural images, our method successfully generalized the reconstruction to artificial shapes, indicating that our model indeed 'reconstructs' or 'generates' images from brain activity rather than simply matching them to exemplars. A natural image prior introduced by another deep neural network added semantically meaningful detail to the reconstructions by constraining the reconstructed images to be similar to natural images. Furthermore, human judgments of the reconstructions indicate that combining multiple DNN layers enhances the visual quality of the generated images. The results suggest that hierarchical visual information in the brain can be effectively combined to reconstruct perceptual and subjective images.
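The core optimization step described above can be sketched as gradient descent on the pixels of a candidate image so that its DNN features approach the decoded ones. The snippet below is a minimal illustration only: it uses torchvision's pretrained VGG19 as a stand-in feature extractor, an arbitrary choice of layers, and random tensors in place of features decoded from fMRI; the published method additionally uses a deep-generator-network image prior and further loss terms.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Stand-in feature extractor (the paper uses a VGG-type DNN); gradients are
# taken only with respect to the image, so the network weights are frozen.
vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)
layer_ids = [4, 9, 18, 27]  # hypothetical set of layers whose features are matched

def extract_features(img):
    feats, x = {}, img
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in layer_ids:
            feats[i] = x
    return feats

# Placeholder "decoded" features; in the real method these come from decoders
# trained to translate fMRI activity into the DNN features of the viewed image.
with torch.no_grad():
    target = extract_features(torch.rand(1, 3, 224, 224))

img = torch.rand(1, 3, 224, 224, requires_grad=True)   # start from noise
opt = torch.optim.Adam([img], lr=0.05)
for step in range(200):
    opt.zero_grad()
    feats = extract_features(img)
    # Match features at multiple layers simultaneously
    loss = sum(F.mse_loss(feats[i], target[i]) for i in layer_ids)
    loss.backward()
    opt.step()
    img.data.clamp_(0, 1)   # keep pixel values in a displayable range
```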


Author(s):  
Н.С. Фролов ◽  
А.Н. Писарчик

We propose a method for diagnosing human brain states from MEG recordings using an artificial neural network. We show that this approach can classify different states of the human brain associated with decision-making during the perception of visual stimuli.
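As an illustration of the classification step, the sketch below trains a small multilayer perceptron on MEG-derived feature vectors. The feature extraction, network size, and data are placeholders chosen for the example, not details taken from the paper.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical data: 200 trials x 64 MEG-derived features (e.g., band power
# per sensor), with binary labels for two perceptual/decision states.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 64))
y = rng.integers(0, 2, size=200)

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)   # ~0.5 here, since the labels are random
print("mean cross-validated accuracy:", scores.mean())
```

With real MEG features and labels, above-chance cross-validated accuracy is what would indicate that the network separates the brain states of interest.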


2019 ◽  
Vol 6 (1) ◽  
Author(s):  
Tomoyasu Horikawa ◽  
Shuntaro C. Aoki ◽  
Mitsuaki Tsukamoto ◽  
Yukiyasu Kamitani

2018 ◽  
Vol 30 (2) ◽  
pp. 378-396 ◽  
Author(s):  
N. F. Hardy ◽  
Dean V. Buonomano

Brain activity evolves through time, creating trajectories of activity that underlie sensorimotor processing, behavior, and learning and memory. Therefore, understanding the temporal nature of neural dynamics is essential to understanding brain function and behavior. In vivo studies have demonstrated that sequential transient activation of neurons can encode time. However, it remains unclear whether these patterns emerge from feedforward network architectures or from recurrent networks and, furthermore, what role network structure plays in timing. We address these issues using a recurrent neural network (RNN) model with distinct populations of excitatory and inhibitory units. Consistent with experimental data, a single RNN could autonomously produce multiple functionally feedforward trajectories, thus potentially encoding multiple timed motor patterns lasting up to several seconds. Importantly, the model accounted for Weber's law, a hallmark of timing behavior. Analysis of network connectivity revealed that efficiency—a measure of network interconnectedness—decreased as the number of stored trajectories increased. Additionally, the balance of excitation (E) and inhibition (I) shifted toward excitation during each unit's activation time, generating the prediction that observed sequential activity relies on dynamic control of the E/I balance. Our results establish for the first time that the same RNN can generate multiple functionally feedforward patterns of activity as a result of dynamic shifts in the E/I balance imposed by the connectome of the RNN. We conclude that recurrent network architectures account for sequential neural activity, as well as for a fundamental signature of timing behavior: Weber's law.
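The forward dynamics of such a network can be sketched as a rate-based RNN with separate excitatory and inhibitory populations obeying Dale's law. The snippet below shows only these dynamics with random (untrained) weights; the trajectory-training procedure and the parameter values used in the paper are omitted, and all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_e, n_i = 160, 40                 # excitatory and inhibitory unit counts (illustrative)
n = n_e + n_i
tau, dt, g = 50.0, 1.0, 1.5        # time constant (ms), step (ms), coupling gain

# Dale's law: each unit's outgoing weights share one sign
# (columns from E units are positive, columns from I units negative).
w = np.abs(rng.standard_normal((n, n))) * g / np.sqrt(n)
sign = np.ones(n)
sign[n_e:] = -1.0
w *= sign[None, :]
np.fill_diagonal(w, 0.0)

x = 0.1 * rng.standard_normal(n)
rates = np.empty((2000, n))
for t in range(2000):
    inp = 1.0 if t < 50 else 0.0           # brief pulse that launches the trajectory
    r = np.maximum(np.tanh(x), 0.0)        # nonnegative firing rates
    x = x + dt / tau * (-x + w @ r + inp)
    rates[t] = r

# Sorting units by the time of their peak rate in 'rates' would expose the
# sequential, functionally feedforward structure discussed in the paper.
```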


2018 ◽  
Author(s):  
Amir Dezfouli ◽  
Richard Morris ◽  
Fabio Ramos ◽  
Peter Dayan ◽  
Bernard W. Balleine

Neuroscience studies of human decision-making abilities commonly involve subjects completing a decision-making task while BOLD signals are recorded using fMRI. Hypotheses are tested about which brain regions mediate the effect of past experience, such as rewards, on future actions. One standard approach to this is model-based fMRI data analysis, in which a model is fitted to the behavioral data, i.e., a subject’s choices, and then the neural data are parsed to find brain regions whose BOLD signals are related to the model’s internal signals. However, the internal mechanics of such purely behavioral models are not constrained by the neural data, and therefore might miss or mischaracterize aspects of the brain. To address this limitation, we introduce a new method using recurrent neural network models that are flexible enough to be jointly fitted to the behavioral and neural data. We trained a model so that its internal states were suitably related to neural activity during the task, while at the same time its output predicted the next action a subject would execute. We then used the fitted model to create a novel visualization of the relationship between the activity in brain regions at different times following a reward and the choices the subject subsequently made. Finally, we validated our method using a previously published dataset. We found that the model was able to recover the underlying neural substrates that were discovered by explicit model engineering in the previous work, and also derived new results regarding the temporal pattern of brain activity.
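The joint fitting idea can be sketched as an RNN whose hidden state both predicts the subject's next choice (cross-entropy loss) and is linearly read out as ROI activity (mean-squared-error loss), with the two losses combined. The architecture, loss weighting, and data below are illustrative assumptions, not Dezfouli et al.'s exact model.

```python
import torch
import torch.nn as nn

class JointRNN(nn.Module):
    """Hidden state drives both an action readout and a BOLD (ROI) readout."""
    def __init__(self, n_inputs=3, n_hidden=16, n_actions=2, n_rois=4):
        super().__init__()
        self.rnn = nn.GRU(n_inputs, n_hidden, batch_first=True)
        self.to_action = nn.Linear(n_hidden, n_actions)
        self.to_bold = nn.Linear(n_hidden, n_rois)

    def forward(self, x):
        h, _ = self.rnn(x)                       # (batch, time, hidden)
        return self.to_action(h), self.to_bold(h)

# Hypothetical data: per-trial inputs (e.g., previous action/reward codes),
# observed choices, and ROI-averaged BOLD signals.
batch, T = 8, 50
x = torch.randn(batch, T, 3)
actions = torch.randint(0, 2, (batch, T))
bold = torch.randn(batch, T, 4)

model = JointRNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
ce, mse, lam = nn.CrossEntropyLoss(), nn.MSELoss(), 0.5   # lam weights the neural loss

for epoch in range(100):
    opt.zero_grad()
    logits, bold_hat = model(x)
    # Behavioral term (predict next action) plus neural term (match BOLD)
    loss = ce(logits.reshape(-1, 2), actions.reshape(-1)) + lam * mse(bold_hat, bold)
    loss.backward()
    opt.step()
```

In this setup the hidden states are constrained by both data sources, which is the property that lets the fitted model relate post-reward brain activity to subsequent choices.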

