Brain hierarchy score: Which deep neural networks are hierarchically brain-like?

2020 ◽  
Author(s):  
Soma Nonaka ◽  
Kei Majima ◽  
Shuntaro C. Aoki ◽  
Yukiyasu Kamitani

Summary Achievement of human-level image recognition by deep neural networks (DNNs) has spurred interest in whether and how DNNs are brain-like. Both DNNs and the visual cortex perform hierarchical processing, and correspondence has been shown between hierarchical visual areas and DNN layers in representing visual features. Here, we propose the brain hierarchy (BH) score as a metric to quantify the degree of hierarchical correspondence, based on the decoding of individual DNN unit activations from human brain activity. We find that BH scores for 29 pretrained DNNs with varying architectures are negatively correlated with image recognition performance, indicating that recently developed high-performance DNNs are not necessarily brain-like. Experimental manipulations of the DNN models suggest that a relatively simple feedforward architecture with broad spatial integration is critical to a brain-like hierarchy. Our method offers new ways to design DNNs and to understand the brain in light of their representational homology.
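The core idea of a hierarchy-correspondence metric can be sketched as a rank correlation between each decodable DNN unit's layer index and the rank of the brain area that decodes it best. The sketch below is a toy illustration in that spirit, not the paper's actual BH score; the unit-to-area assignments and the area labels (including "HVC" for higher visual cortex) are made up for illustration.

```python
# Toy hierarchy-correspondence score: Spearman correlation between DNN
# layer index and the ventral-stream rank of each unit's best-decoding
# brain area. All assignments below are hypothetical.

def rank(xs):
    # Ranks with average ranks assigned to ties.
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    # Spearman rho = Pearson correlation of the rank vectors.
    rx, ry = rank(x), rank(y)
    mx = sum(rx) / len(rx)
    my = sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

# Hypothetical units: (DNN layer index, brain area with best decoding).
area_rank = {"V1": 1, "V2": 2, "V3": 3, "V4": 4, "HVC": 5}
units = [(1, "V1"), (2, "V1"), (3, "V2"), (4, "V3"), (5, "V4"), (6, "HVC")]

layers = [layer for layer, _ in units]
areas = [area_rank[a] for _, a in units]
bh_like = spearman(layers, areas)
print(round(bh_like, 3))
```

A score near 1 would indicate that deeper layers map onto higher visual areas; a score near 0 or below would indicate no such hierarchical correspondence.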

2021 ◽  
Author(s):  
Huawei Xu ◽  
Ming Liu ◽  
Delong Zhang

Using deep neural networks (DNNs) as models to explore the biological brain is controversial, mainly because of the impenetrability of DNNs. Inspired by neural style transfer, we circumvented this problem by using deep features that were given a clear meaning: the representation of the semantic content of an image. Using encoding models and representational similarity analysis, we showed quantitatively that the deep features representing the semantic content of an image mainly modulated the activity of voxels in the early visual areas (V1, V2, and V3), and that these features were essentially depictive but also propositional. This result is broadly in line with the core viewpoint of grounded cognition, which holds that the representation of information in our brain is essentially depictive and can naturally implement symbolic functions.
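Representational similarity analysis compares two systems by correlating their representational dissimilarity matrices (RDMs) over the same stimuli. A minimal sketch, using random data and Pearson correlations throughout (published RSA pipelines often use Spearman correlation and other dissimilarity measures):

```python
# Minimal RSA sketch: build an RDM (1 - Pearson correlation between
# stimulus patterns) for each system, then correlate the two RDMs'
# upper triangles. Data are synthetic, for illustration only.
import numpy as np

def rdm(X):
    # X: (n_stimuli, n_features); entry (i, j) = 1 - corr(pattern_i, pattern_j)
    return 1.0 - np.corrcoef(X)

def rsa(X, Y):
    iu = np.triu_indices(X.shape[0], k=1)  # off-diagonal upper triangle
    return np.corrcoef(rdm(X)[iu], rdm(Y)[iu])[0, 1]

rng = np.random.default_rng(0)
feats = rng.normal(size=(10, 50))            # toy "deep semantic features"
voxels = feats @ rng.normal(size=(50, 80))   # toy voxel responses driven by them
print(round(rsa(feats, voxels), 2))
```

Because the toy voxel responses are a linear function of the features, the two RDMs correlate strongly; unrelated representations would give a correlation near zero.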


Author(s):  
A. Shestak ◽  
N. Filimonova

In a study of 20 participants aged 18-23 years, men exposed to 10 Hz binaural beats, compared with a binaural-sound control, showed greater activity during a simple sensorimotor reaction task in the frontal, central, and occipital areas of both hemispheres and in the right temporal and parietal areas. This may indicate activation of the system for imaginative and creative thinking, which the simple sensorimotor task itself did not require. No differences were observed in the times of either the simple sensorimotor reaction or the choice reaction. During choice-reaction testing, an influence of 10 Hz binaural beats on the brain activity of men was also detected. In women, 10 Hz binaural beats produced significantly faster simple sensorimotor and choice reactions and a significantly smaller spread of latent periods of the simple sensorimotor reaction. This was accompanied by heightened interhemispheric interaction, suppression of task-irrelevant zones, and high activity of the ascending attention process, which provided highly specific information processing and better task performance compared with the binaural-sound control.


2020 ◽  
Author(s):  
Daniele Grattarola ◽  
Lorenzo Livi ◽  
Cesare Alippi ◽  
Richard Wennberg ◽  
Taufik Valiante

Abstract Graph neural networks (GNNs) and the attention mechanism are two of the most significant advances in artificial intelligence methods over the past few years. The former are neural networks able to process graph-structured data, while the latter learns to selectively focus on those parts of the input that are more relevant for the task at hand. In this paper, we propose a methodology for seizure localisation which combines the two approaches. Our method is composed of several blocks. First, we represent brain states in a compact way by computing functional networks from intracranial electroencephalography recordings, using metrics to quantify the coupling between the activity of different brain areas. Then, we train a GNN to correctly distinguish between functional networks associated with interictal and ictal phases. The GNN is equipped with an attention-based layer which automatically learns to identify those regions of the brain (associated with individual electrodes) that are most important for a correct classification. The localisation of these regions is fully unsupervised, meaning that it does not use any prior information regarding the seizure onset zone. We report results both for human patients and for simulators of brain activity. We show that the regions of interest identified by the GNN strongly correlate with the localisation of the seizure onset zone reported by electroencephalographers. We also show that our GNN exhibits uncertainty on those patients for which the clinical localisation was also unsuccessful, highlighting the robustness of the proposed approach.
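The first block of the pipeline described above builds a functional network by quantifying coupling between electrode signals and thresholding the result into a graph. A minimal sketch with synthetic signals, using plain Pearson correlation as the coupling metric (the paper considers several metrics), is:

```python
# Toy functional-network construction: pairwise |Pearson correlation|
# between electrode signals, thresholded into a binary adjacency matrix.
# Signals are synthetic; electrodes 0 and 1 share an 8 Hz rhythm.
import numpy as np

def functional_network(signals, threshold=0.5):
    # signals: (n_electrodes, n_samples) -> binary adjacency, no self-loops
    coupling = np.abs(np.corrcoef(signals))
    adj = (coupling > threshold).astype(int)
    np.fill_diagonal(adj, 0)
    return adj

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 512)
base = np.sin(2 * np.pi * 8 * t)
signals = np.stack([
    base + 0.1 * rng.normal(size=t.size),  # electrode 0: rhythm + noise
    base + 0.1 * rng.normal(size=t.size),  # electrode 1: same rhythm -> coupled
    rng.normal(size=t.size),               # electrode 2: independent noise
])
print(functional_network(signals))
```

In the full method, one such graph per time window (interictal vs. ictal) would then be fed to the GNN classifier, with the attention layer weighting individual electrodes.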


2018 ◽  
Vol 28 (4) ◽  
pp. 735-744 ◽  
Author(s):  
Michał Koziarski ◽  
Bogusław Cyganek

Abstract Due to the advances made in recent years, methods based on deep neural networks have been able to achieve a state-of-the-art performance in various computer vision problems. In some tasks, such as image recognition, neural-based approaches have even been able to surpass human performance. However, the benchmarks on which neural networks achieve these impressive results usually consist of fairly high quality data. On the other hand, in practical applications we are often faced with images of low quality, affected by factors such as low resolution, presence of noise or a small dynamic range. It is unclear how resilient deep neural networks are to the presence of such factors. In this paper we experimentally evaluate the impact of low resolution on the classification accuracy of several notable neural architectures of recent years. Furthermore, we examine the possibility of improving neural networks’ performance in the task of low resolution image recognition by applying super-resolution prior to classification. The results of our experiments indicate that contemporary neural architectures remain significantly affected by low image resolution. By applying super-resolution prior to classification we were able to alleviate this issue to a large extent as long as the resolution of the images did not decrease too severely. However, in the case of very low resolution images the classification accuracy remained considerably affected.
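The evaluation pipeline above has three stages: degrade an image to low resolution, restore it before classification, and compare classifier accuracy with and without restoration. A structural sketch, where nearest-neighbour upsampling stands in for a learned super-resolution model and no actual classifier is included:

```python
# Pipeline sketch: downsample -> upscale -> (classifier). The upscale
# step is a stand-in for super-resolution; a real evaluation would use
# a trained SR network and a trained classifier.
import numpy as np

def downsample(img, factor):
    # Crude decimation: keep every factor-th pixel in each dimension.
    return img[::factor, ::factor]

def upscale(img, factor):
    # Nearest-neighbour upsampling back to the classifier's input size.
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

img = np.arange(64, dtype=float).reshape(8, 8)  # toy 8x8 "image"
low = downsample(img, 2)       # (4, 4) low-resolution input
restored = upscale(low, 2)     # (8, 8) again, ready for classification
print(low.shape, restored.shape)
```

The paper's finding maps onto this sketch directly: accuracy on `restored` stays close to accuracy on `img` for moderate factors, but collapses when the factor is large enough that `low` no longer retains the discriminative detail.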


2021 ◽  
Author(s):  
Mohd Saqib Akhoon ◽  
Shahrel A. Suandi ◽  
Abdullah Alshahrani ◽  
Abdul‐Malik H. Y. Saad ◽  
Fahad R. Albogamy ◽  
...  

Author(s):  
Yoshihiro Hayakawa ◽  
Takanori Oonuma ◽  
Hideyuki Kobayashi ◽  
Akiko Takahashi ◽  
Shinji Chiba ◽  
...  

In deep neural networks, which have been gaining attention in recent years, the features of input images are expressed in a middle layer. Using the information in this feature layer, high performance can be achieved in image recognition. In the present study, we achieve image recognition, without using convolutional neural networks or sparse coding, through an image feature extraction function obtained when identity-mapping learning is applied to sandglass-style feed-forward neural networks. In sports form analysis, for example, a state trajectory is mapped into a low-dimensional feature space based on a consecutive series of actions. Here, we discuss ideas related to image analysis by applying the above method.
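Identity-mapping learning with a sandglass (bottleneck) network can be illustrated with a tiny linear autoencoder: the network is trained to reproduce its input, and the narrow middle layer then serves as a low-dimensional feature space. This is only a minimal sketch with random data and a linear model; the paper's networks are nonlinear and trained on images.

```python
# Toy sandglass identity mapping: 16 -> 3 -> 16 linear autoencoder
# trained by gradient descent on reconstruction error. The 3-unit
# "waist" plays the role of the feature layer. Data are random.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 16))             # 100 toy "images", 16 pixels each
W1 = rng.normal(scale=0.1, size=(16, 3))   # encoder: input -> waist
W2 = rng.normal(scale=0.1, size=(3, 16))   # decoder: waist -> reconstruction

loss0 = float(np.mean((X @ W1 @ W2 - X) ** 2))  # error before training

lr = 0.01
for _ in range(500):
    H = X @ W1                  # middle-layer features
    err = H @ W2 - X            # identity-mapping target is X itself
    g2 = H.T @ err / len(X)     # gradient w.r.t. decoder weights
    g1 = X.T @ (err @ W2.T) / len(X)  # gradient w.r.t. encoder weights
    W2 -= lr * g2
    W1 -= lr * g1

loss = float(np.mean((X @ W1 @ W2 - X) ** 2))
print(loss < loss0)             # reconstruction error decreased
```

After training, `X @ W1` gives the 3-dimensional feature representation; for sequential data such as sports motion, the trajectory of this vector over consecutive frames would trace the low-dimensional state trajectory the abstract mentions.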


2019 ◽  
Vol 14 (09) ◽  
pp. P09014-P09014 ◽  
Author(s):  
N. Nottbeck ◽  
C. Schmitt ◽  
V. Büscher

2021 ◽  
Vol 14 ◽  
Author(s):  
Hyojin Bae ◽  
Sang Jeong Kim ◽  
Chang-Eop Kim

One of the central goals in systems neuroscience is to understand how information is encoded in the brain, and the standard approach is to identify the relation between a stimulus and a neural response. However, the features of a stimulus are typically defined by the researcher's hypothesis, which may bias the research conclusion. To demonstrate potential biases, we simulate four likely scenarios using deep neural networks trained on the image classification dataset CIFAR-10 and show the possibility of selecting suboptimal or irrelevant features, or of overestimating the network's feature representation or noise correlation. Additionally, we present studies investigating neural coding principles in biological neural networks to which our points can be applied. This study aims not only to highlight the importance of careful assumptions and interpretations regarding the neural response to stimulus features but also to suggest that the comparative study of deep and biological neural networks from the perspective of machine learning can be an effective strategy for understanding the coding principles of the brain.
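The hypothesis-bias point can be shown with a toy simulation, unrelated to the paper's actual CIFAR-10 scenarios: a unit driven entirely by feature A still correlates strongly with a candidate feature B that happens to covary with A, so a researcher who only tests B would conclude the unit encodes B.

```python
# Toy feature-hypothesis bias: the "neuron" encodes feature A only,
# but a correlated candidate feature B also predicts its response.
# All variables are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
feat_a = rng.normal(size=n)                        # the feature actually encoded
feat_b = 0.8 * feat_a + 0.6 * rng.normal(size=n)   # correlated alternative
response = feat_a + 0.3 * rng.normal(size=n)       # noisy "neural" response

r_a = np.corrcoef(response, feat_a)[0, 1]
r_b = np.corrcoef(response, feat_b)[0, 1]
print(round(r_a, 2), round(r_b, 2))  # both correlations are high
```

Testing B alone would yield a strong tuning curve despite B being causally irrelevant, which is exactly the kind of suboptimal-feature conclusion the abstract warns about.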


2021 ◽  
Vol 15 ◽  
Author(s):  
Chi Zhang ◽  
Xiao-Han Duan ◽  
Lin-Yuan Wang ◽  
Yong-Li Li ◽  
Bin Yan ◽  
...  

Despite the remarkable similarities between convolutional neural networks (CNNs) and the human brain, CNNs still fall behind humans in many visual tasks, indicating that considerable differences remain between the two systems. Here, we leverage adversarial noise (AN) and adversarial interference (AI) images to quantify the consistency between neural representations and perceptual outcomes in the two systems. Humans successfully recognize AI images as the same categories as their corresponding regular images but perceive AN images as meaningless noise. In contrast, CNNs recognize AN images in the same way as the corresponding regular images but classify AI images into wrong categories with surprisingly high confidence. We use functional magnetic resonance imaging to measure brain activity evoked by regular and adversarial images in the human brain and compare it to the activity of artificial neurons in a prototypical CNN (AlexNet). In the human brain, we find that the representational similarity between regular and adversarial images largely echoes their perceptual similarity in all early visual areas. In AlexNet, however, the neural representations of adversarial images are inconsistent with the network outputs in all intermediate processing layers, providing no neural foundation for the similarities at the perceptual level. Furthermore, we show that voxel-encoding models trained on regular images successfully generalize to the neural responses to AI images but not to AN images. These remarkable differences between the human brain and AlexNet in the representation-perception association suggest that future CNNs should emulate both the behavior and the internal neural representations of the human brain.
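The mechanism behind AI-style images can be illustrated with the classic fast-gradient-sign step on a toy linear classifier: a perturbation aligned against the score gradient flips the predicted label. This is a generic sketch of the adversarial-attack idea, not the paper's image-generation procedure, and with a real deep network on natural images the label-flipping perturbation can be far smaller relative to the signal than here.

```python
# Toy fast-gradient-sign attack on a linear classifier sign(w . x):
# step each input dimension by eps against the score gradient (which
# for this model is just w) until the predicted label flips.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=64)              # toy linear classifier weights
x = w / np.linalg.norm(w)            # an input classified positive

def fgsm(x, w, eps):
    # Move every dimension eps in the direction that lowers the score.
    return x - eps * np.sign(w)

adv = fgsm(x, w, eps=0.3)
print(np.sign(w @ x), np.sign(w @ adv))  # label flips from +1 to -1
```

The paper's contrast is that for such label-flipping (AI-like) images the human brain's representations track perception while AlexNet's do not, and the reverse dissociation holds for AN images.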

