Shared Spatiotemporal Category Representations in Biological and Artificial Deep Neural Networks

2017 ◽  
Author(s):  
Michelle R. Greene ◽  
Bruce C. Hansen

Abstract
Understanding the computational transformations that enable invariant visual categorization is a fundamental challenge in both systems and cognitive neuroscience. Recently developed deep convolutional neural networks (CNNs) perform visual categorization at accuracies that rival humans, providing neuroscientists with the opportunity to interrogate the series of representational transformations that enable categorization in silico. The goal of the current study is to assess the extent to which sequential visual representations built by a CNN map onto those built in the human brain as assessed by high-density, time-resolved event-related potentials (ERPs). We found correspondence both over time and across the scalp: earlier ERP activity was best explained by early CNN layers at all electrodes. Later neural activity was best explained by the later, conceptual layers of the CNN. This effect was especially pronounced at frontal and right occipital sites. Together, we conclude that deep artificial neural networks trained to perform scene categorization traverse similar representational stages as the human brain. Thus, examining these networks will allow neuroscientists to better understand the transformations that enable invariant visual categorization.
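The layer-to-time-point mapping described above is typically computed with representational similarity analysis: build a dissimilarity matrix (RDM) per CNN layer and per ERP time point, then ask which layer's RDM best matches each time point. A minimal numpy sketch of that logic, using random placeholder data in place of real CNN activations and ERP recordings (all sizes and layer names here are illustrative, not from the study):

```python
import numpy as np

rng = np.random.default_rng(0)
n_images, n_units, n_times, n_sensors = 20, 50, 30, 10

# Placeholder data standing in for CNN unit activations and ERP amplitudes.
layer_acts = {f"layer{i}": rng.standard_normal((n_images, n_units)) for i in (1, 2, 3)}
erp = rng.standard_normal((n_times, n_images, n_sensors))

def rdm(patterns):
    # Representational dissimilarity: 1 - Pearson r between image response patterns.
    return 1.0 - np.corrcoef(patterns)

def spearman(a, b):
    # Spearman rho = Pearson correlation of the rank-transformed vectors.
    ra, rb = a.argsort().argsort(), b.argsort().argsort()
    return np.corrcoef(ra, rb)[0, 1]

iu = np.triu_indices(n_images, k=1)                  # unique image pairs only
layer_rdms = {name: rdm(acts)[iu] for name, acts in layer_acts.items()}

# For each ERP time point, find the CNN layer whose RDM correlates best.
best_layer = []
for t in range(n_times):
    erp_vec = rdm(erp[t])[iu]
    rhos = {name: spearman(erp_vec, vec) for name, vec in layer_rdms.items()}
    best_layer.append(max(rhos, key=rhos.get))
```

On real data, plotting `best_layer` against time would reproduce the paper's key figure: early time points dominated by early layers, later time points by later layers.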

2018 ◽  
Author(s):  
Chi Zhang ◽  
Xiaohan Duan ◽  
Ruyuan Zhang ◽  
Li Tong

2019 ◽  
Vol 11 (4) ◽  
pp. 1 ◽  
Author(s):  
Tobias de Taillez ◽  
Florian Denk ◽  
Bojana Mirkovic ◽  
Birger Kollmeier ◽  
Bernd T. Meyer

Different linear models have been proposed to establish a link between an auditory stimulus and the neurophysiological response obtained through electroencephalography (EEG). We investigate if non-linear mappings can be modeled with deep neural networks trained on continuous speech envelopes and EEG data obtained in an auditory attention two-speaker scenario. An artificial neural network was trained to predict the EEG response related to the attended and unattended speech envelopes. After training, the properties of the DNN-based model are analyzed by measuring the transfer function between input envelopes and predicted EEG signals by using click-like stimuli and frequency sweeps as input patterns. Using sweep responses allows us to separate the linear and nonlinear response components, also with respect to attention. The responses from the model trained on normal speech resemble event-related potentials despite the fact that the DNN was not trained to reproduce such patterns. These responses are modulated by attention, since we obtain significantly lower amplitudes at latencies of 110 ms, 170 ms and 300 ms after stimulus presentation for unattended in contrast to attended processing. The comparison of linear and nonlinear components indicates that the largest contribution arises from linear processing (75%), while the remaining 25% is attributed to nonlinear processes in the model. Further, a spectral analysis showed a stronger 5 Hz component in modeled EEG for attended in contrast to unattended predictions. The results indicate that the artificial neural network produces responses consistent with recent findings and presents a new approach for quantifying the model properties.
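The linear baseline the abstract refers to is a lagged forward model (temporal response function, TRF): the EEG at time t is modeled as a weighted sum of recent envelope samples, fit by regularised regression. A minimal numpy sketch on synthetic data (sampling rate, window length, noise level, and the exponential "neural" filter are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2048                                   # samples of a synthetic recording
lags = np.arange(16)                       # context window of the forward model

envelope = np.abs(rng.standard_normal(n))              # stand-in speech envelope
true_trf = np.exp(-lags / 4.0)                         # synthetic "neural" filter
eeg = np.convolve(envelope, true_trf)[:n] + 0.1 * rng.standard_normal(n)

# Lagged design matrix: row t holds the envelope at t, t-1, ..., t-15.
X = np.column_stack([np.roll(envelope, k) for k in lags])
X[: lags.max()] = 0.0                                  # drop wrapped-around edge rows

# Ridge-regularised least squares recovers the temporal response function.
lam = 1e-2
trf = np.linalg.solve(X.T @ X + lam * np.eye(len(lags)), X.T @ eeg)
```

The DNN approach in the paper replaces this single linear filter with a trained network, which is exactly why the authors then need click and sweep probes to read the learned (possibly nonlinear) stimulus-response mapping back out.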


2017 ◽  
Author(s):  
Stefania Bracci ◽  
Ioannis Kalfas ◽  
Hans Op de Beeck

Abstract
Recent studies showed agreement between how the human brain and neural networks represent objects, suggesting that we might start to understand the underlying computations. However, we know that the human brain is prone to biases at many perceptual and cognitive levels, often shaped by learning history and evolutionary constraints. Here we explore one such bias, namely the bias to perceive animacy, and used the performance of neural networks as a benchmark. We performed an fMRI study that dissociated object appearance (how an object looks) from object category (animate or inanimate) by constructing a stimulus set that includes animate objects (e.g., a cow), typical inanimate objects (e.g., a mug), and, crucially, inanimate objects that look like the animate objects (e.g., a cow-mug). Behavioral judgments and deep neural networks categorized images mainly by animacy, setting all objects (lookalike and inanimate) apart from the animate ones. In contrast, activity patterns in ventral occipitotemporal cortex (VTC) were strongly biased towards object appearance: animals and lookalikes were similarly represented and separated from the inanimate objects. Furthermore, this bias interfered with proper object identification, such as failing to signal that a cow-mug is a mug. The bias in VTC to represent a lookalike as animate was even present when participants performed a task requiring them to report the lookalikes as inanimate. In conclusion, VTC representations, in contrast to neural networks, fail to veridically represent objects when visual appearance is dissociated from animacy, probably due to a biased processing of visual features typical of animate objects.


2021 ◽  
Vol 19 (1) ◽  
pp. 1-9
Author(s):  
Ewa Wilczek-Rużyczka ◽  
Andrzej Mirski ◽  
Maciej Korab ◽  
Mariusz Trystuła

The search for neuromarkers is a very promising way to improve psychiatric and psychological care. They are now considered an innovative diagnostic tool in psychiatry and neuropsychology, and more broadly in all human health sciences. The aim of our study was to find a neuromarker of anxiety in a patient who had experienced a Transient Ischemic Attack (TIA) of the left brain hemisphere as a result of a critical stenosis of the Internal Carotid Artery (ICA), operated on by carotid endarterectomy (CEA). We present the case of a 54-year-old man, an architect, who experienced a TIA of the left brain hemisphere caused by a critical stenosis of the ICA and was treated successfully with surgical endarterectomy. One year after the surgery, the patient developed severe postoperative anxiety, headaches, and difficulty in sleeping, as well as the inability to continue working in his profession. Strong anxiety was noted on the adapted 100-millimeter Visual Analogue Anxiety Scale (VAAS). The patient was assessed using the Human Brain Index (HBI) methodology (Kropotov 2009; 2016; 2017; Pąchalska, Kaczmarek & Kropotov 2014), which consisted of recording 19-channel EEG in resting-state conditions and during the cued GO/NOGO task, and comparing the parameters of EEG spectra and Event-Related Potentials (ERPs) with the normative and patient databases of the HBI. No signs of cognitive dysfunction were found; however, an excessive Rolandic beta was observed. In line with the working hypothesis as to the presence of an anxiety neuromarker, the patient's examinations confirmed an increased P1 wave in the left hemisphere of the brain in ERPs in response to visual stimuli, i.e. an anxiety neuromarker. Following the detection of this neuromarker, a specific anodal Transcranial Direct Current Stimulation (tDCS) protocol was proposed (see: Kropotov 2016; Pąchalska, Kaczmarek & Kropotov 2020).
Ten tDCS sessions were performed and the postoperative anxiety was found to be resolved. The patient returned to work. The use of the HBI methodology, enabling the isolation of ERP patterns, revealed the presence of a distinct anxiety neuromarker. Neurotherapy with the use of tDCS allowed the reduction of anxiety symptoms and the patient's return to work. The above case study indicates the necessity of using new neurotechnologies in the diagnosis of mental disorders, with particular emphasis on postoperative anxiety.


Author(s):  
Siyu Liao ◽  
Bo Yuan

Deep neural networks (DNNs), especially deep convolutional neural networks (CNNs), have emerged as a powerful technique in various machine learning applications. However, the large model sizes of DNNs yield high demands on computation resources and weight storage, thereby limiting the practical deployment of DNNs. To overcome these limitations, this paper proposes to impose circulant structure on the construction of convolutional layers, which leads to circulant convolutional layers (CircConvs) and circulant CNNs. The circulant structure and models can be either trained from scratch or re-trained from a pre-trained non-circulant model, making the approach very flexible for different training environments. Through extensive experiments, this strong structure-imposing approach is shown to substantially reduce the number of parameters of convolutional layers and to enable significant savings in computational cost by using fast multiplication of the circulant tensor.
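The savings come from a classical property of circulant matrices: they are diagonalised by the discrete Fourier transform, so a matrix-vector product costs O(n log n) via the FFT instead of O(n²), and the whole matrix is described by its first column (n parameters instead of n²). A minimal numpy sketch of that building block, checked against a dense reference (this illustrates the underlying math, not the paper's actual CircConv layer code):

```python
import numpy as np

def circulant_matvec(c, x):
    # Circulant matvec = circular convolution; compute it in O(n log n) via the FFT.
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

rng = np.random.default_rng(0)
n = 8
c, x = rng.standard_normal(n), rng.standard_normal(n)

# Dense reference: column j of a circulant matrix is the first column rolled by j.
C = np.column_stack([np.roll(c, j) for j in range(n)])
dense = C @ x
fast = circulant_matvec(c, x)
```

In a circulant convolutional layer this replaces the unstructured weight matrix along one tensor dimension, which is where both the parameter reduction and the fast-multiplication speedup originate.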


1989 ◽  
Vol 98 (2) ◽  
pp. 217-221 ◽  
Author(s):  
Risto Näätänen ◽  
Petri Paavilainen ◽  
Kimmo Alho ◽  
Kalevi Reinikainen ◽  
Mikko Sams

Author(s):  
Xiayu Chen ◽  
Ming Zhou ◽  
Zhengxin Gong ◽  
Wei Xu ◽  
Xingyu Liu ◽  
...  

Deep neural networks (DNNs) have attained human-level performance on dozens of challenging tasks via an end-to-end deep learning strategy. Deep learning allows data representations that have multiple levels of abstraction; however, it does not explicitly provide any insights into the internal operations of DNNs. Deep learning's success is appealing to neuroscientists not only as a method for applying DNNs to model biological neural systems but also as a means of adopting concepts and methods from cognitive neuroscience to understand the internal representations of DNNs. Although general deep learning frameworks, such as PyTorch and TensorFlow, could be used to allow such cross-disciplinary investigations, the use of these frameworks typically requires high-level programming expertise and comprehensive mathematical knowledge. A toolbox specifically designed as a mechanism for cognitive neuroscientists to map both DNNs and brains is urgently needed. Here, we present DNNBrain, a Python-based toolbox designed for exploring the internal representations of DNNs as well as brains. Through the integration of DNN software packages and well-established brain imaging tools, DNNBrain provides application programming and command line interfaces for a variety of research scenarios. These include extracting DNN activation, probing and visualizing DNN representations, and mapping DNN representations onto the brain. We expect that our toolbox will accelerate scientific research by both applying DNNs to model biological neural systems and utilizing paradigms of cognitive neuroscience to unveil the black box of DNNs.
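The core operation such a toolbox supports is capturing intermediate-layer activations during a forward pass, so they can be compared with brain data. The sketch below shows that pattern on a toy numpy MLP; it is purely illustrative and is not DNNBrain's API (with a real network one would use DNNBrain itself or PyTorch forward hooks):

```python
import numpy as np

rng = np.random.default_rng(0)

class TinyMLP:
    # A toy stand-in for a DNN whose per-layer activations we want to inspect.
    def __init__(self, sizes):
        self.weights = [rng.standard_normal((a, b)) / np.sqrt(a)
                        for a, b in zip(sizes[:-1], sizes[1:])]

    def forward(self, x, record=None):
        for i, w in enumerate(self.weights):
            x = np.maximum(x @ w, 0.0)          # ReLU layer
            if record is not None:
                record[f"fc{i}"] = x            # hook-style activation capture
        return x

net = TinyMLP([10, 32, 16, 4])
acts = {}
out = net.forward(rng.standard_normal((5, 10)), record=acts)
```

The captured `acts` dictionary (stimuli × units per layer) is exactly the kind of object that then feeds representational analyses or encoding models of brain activity.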


2019 ◽  
Author(s):  
Ronja Demel ◽  
Michael Waldmann ◽  
Annekathrin Schacht

Abstract
The influence of emotion on moral judgments has become increasingly prominent in recent years. While explicit normative measures are widely used to investigate this relationship, event-related potentials (ERPs) offer the advantage of a preconscious method to visualize the modulation of moral judgments. Based on Gray and Wegner’s (2009) Dimensional Moral Model, the present study investigated whether the processing of neutral faces is modulated by moral context information. We hypothesized that neutral faces gain emotional valence when presented in a moral context and thus elicit ERP responses comparable to those established for the processing of emotional faces. Participants (N = 26, 13 female) were tested with regard to their implicit (ERPs) and explicit (morality rating) responses to neutral faces, shown in either a morally positive, negative, or neutral context. Higher ERP amplitudes in early (P100, N170) and later (EPN, LPC) processing stages were expected for harmful/helpful scenarios compared to neutral scenarios. Agents and patients were expected to differ for moral compared to neutral scenarios. In the explicit ratings neutral scenarios were expected to differ from moral scenarios. In ERPs, we found indications for an early modulation of moral valence (harmful/helpful) and an interaction of agency and moral valence after 80-120 ms. Later time sequences showed no significant differences. Morally positive and negative scenarios were rated as significantly different from neutral scenarios. Overall, the results indicate that the relationship of emotion and moral judgments can be observed on a preconscious neural level at an early processing stage as well as in explicit judgments.


2018 ◽  
Author(s):  
Kyveli Kompatsiari ◽  
Jairo Pérez-Osorio ◽  
Davide De Tommaso ◽  
Giorgio Metta ◽  
Agnieszka Wykowska

The present study highlights the benefits of using well-controlled experimental designs, grounded in experimental psychology research and objective neuroscientific methods, for generating progress in human-robot interaction (HRI) research. In this study, we implemented a well-studied paradigm of attentional cueing through gaze (the so-called “joint attention” or “gaze cueing”) in an HRI protocol involving the iCub robot. We replicated the standard phenomenon of joint attention both in terms of behavioral measures and event-related potentials of the EEG signal. Our methodology of combining neuroscience methods with an HRI protocol opens promising avenues both for a better design of robots which are to interact with humans, and also for increasing the ecological validity of research in social and cognitive neuroscience.

