Lecture Notes in Computer Science - Brain-Inspired Computing
Latest Publications


Total documents: 10 (five years: 10)
H-index: 0 (five years: 0)
Published by: Springer International Publishing
ISBN: 9783030824266, 9783030824273

Author(s): Andrea Brandstetter, Najoua Bolakhrif, Christian Schiffer, Timo Dickscheid, Hartmut Mohlberg, ...

Abstract: The human lateral geniculate body (LGB), with its six sickle-shaped layers (lam), represents the principal thalamic relay nucleus for the visual system. Cytoarchitectonic analysis serves as the ground truth for multimodal approaches and for studies exploring its function. This technique, however, requires expert knowledge of human neuroanatomy and is costly in terms of time. Here we mapped the six layers of the LGB manually in serial histological sections of the BigBrain, a high-resolution model of the human brain, labeling their extent in every 30th section of both hemispheres. These maps were then used to train a deep learning algorithm to predict the borders on the sections in between. These delineations had to be performed on 1 µm scans of the tissue sections, for which no exact cross-section alignment is available. Owing to the size and number of analyzed sections, this required high-performance computing. Based on the serial section delineations, a high-resolution 3D reconstruction of the BigBrain model was performed at 20 µm isotropic resolution. The 3D reconstruction shows the shape of the human LGB and its sublayers for the first time at cellular precision. It represents a use case for studying other complex structures, and for visualizing their shape and their relationship to neighboring structures. Finally, our results could provide reference data of the LGB for modeling and simulation to investigate the dynamics of signal transduction in the visual system.
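The annotate-sparsely-then-predict workflow described above can be sketched in miniature. The snippet below is purely illustrative: a trivial intensity-threshold classifier stands in for the deep learning model, and the synthetic "sections" are random images in which a bright blob plays the role of the LGB; all sizes and names are assumptions, not the chapter's actual parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_section():
    """Synthetic section: a bright 'LGB' blob on a darker background."""
    img = rng.normal(0.2, 0.05, (32, 32))
    mask = np.zeros((32, 32), bool)
    mask[8:24, 8:24] = True
    img[mask] += 0.6
    return img, mask

sections = [make_section() for _ in range(31)]
annotated = {0: sections[0], 30: sections[30]}   # only every 30th section is labeled

# "Training": fit a single intensity threshold on the annotated sections
vals = np.concatenate([img.ravel() for img, _ in annotated.values()])
labs = np.concatenate([m.ravel() for _, m in annotated.values()])
thr = 0.5 * (vals[labs].mean() + vals[~labs].mean())

# "Prediction": segment an unannotated in-between section and score it
img15, true15 = sections[15]
pred15 = img15 > thr
dice = 2 * (pred15 & true15).sum() / (pred15.sum() + true15.sum())
```

The real pipeline replaces the threshold with a convolutional network and runs over thousands of 1 µm scans, which is what makes high-performance computing necessary.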


Author(s): Sacha J. van Albada, Jari Pronold, Alexander van Meegen, Markus Diesmann

Abstract: We are entering an age of ‘big’ computational neuroscience, in which neural network models are growing in size and in the number of underlying data sets. Consolidating the zoo of models into large-scale models that are simultaneously consistent with a wide range of data is only possible through the effort of large teams, which may be spread across multiple research institutions. To ensure that computational neuroscientists can build on each other’s work, it is important to make models publicly available as well-documented code. This chapter describes such an open-source model, which relates the connectivity structure of all vision-related cortical areas of the macaque monkey to their resting-state dynamics. We give a brief overview of how to use the executable model specification, which employs NEST as its simulation engine, and show its runtime scaling. The solutions found serve as an example for organizing the workflow of future models from the raw experimental data to the visualization of the results, expose the challenges, and give guidance for the construction of an ICT infrastructure for neuroscience.
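The idea of an executable model specification can be illustrated with a toy builder: a declarative description of areas and connection probabilities that code turns into a concrete network. None of the names or numbers below come from the actual multi-area model (which covers all vision-related macaque areas, layer-specific populations, and uses NEST); this is a self-contained sketch of the pattern.

```python
import numpy as np

# Hypothetical declarative specification (not the real model's format)
spec = {
    "areas": ["V1", "V2", "MT"],          # illustrative subset of areas
    "neurons_per_area": 100,
    "connections": {                       # (source, target): connection probability
        ("V1", "V2"): 0.15,
        ("V2", "MT"): 0.10,
        ("MT", "V1"): 0.05,
    },
}

def build_connectivity(spec, seed=0):
    """Turn the declarative spec into one weight matrix W[target, source]."""
    rng = np.random.default_rng(seed)
    n, areas = spec["neurons_per_area"], spec["areas"]
    N = n * len(areas)
    W = np.zeros((N, N))
    for (src, tgt), p in spec["connections"].items():
        i, j = areas.index(src), areas.index(tgt)
        block = rng.random((n, n)) < p             # Bernoulli connectivity
        W[j*n:(j+1)*n, i*n:(i+1)*n] = block * 0.1  # uniform weight, illustrative
    return W

W = build_connectivity(spec)
```

Keeping the specification as data rather than code is what makes such a model reusable: the same spec can drive a NEST simulation, an analysis script, or a visualization.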


Author(s): Michael Biehl

Abstract: The exchange of ideas between computer science and statistical physics has significantly advanced the understanding of machine learning and inference. This interdisciplinary approach is currently regaining momentum due to the revived interest in neural networks and deep learning. Methods borrowed from statistical mechanics complement other approaches to the theory of computational and statistical learning. In this brief review, we outline and illustrate some of the basic concepts. We exemplify the role of the statistical physics approach in terms of a particularly important contribution: the computation of typical learning curves in student-teacher scenarios of supervised learning. Two by now classical examples from the literature illustrate the approach: the learning of a linearly separable rule by a perceptron with continuous and with discrete weights, respectively. We address these prototypical problems in the simplifying limit of stochastic training at high formal temperature and obtain the corresponding learning curves.
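The central object in such analyses is the generalization error as a function of the rescaled number of examples α = P/N. As a hedged numerical illustration (a plain Hebbian student, not the high-temperature calculation discussed in the chapter), the sketch below measures ε_g = arccos(R)/π, where R is the teacher-student overlap, and shows its decay with α:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 200
w_teacher = rng.standard_normal(N)

def generalization_error(P):
    """Train a Hebbian student on P examples of a teacher perceptron.

    For spherical perceptrons, eps_g = arccos(R)/pi with R the
    normalized overlap between student and teacher weight vectors.
    """
    X = rng.standard_normal((P, N))
    y = np.sign(X @ w_teacher)                 # labels from the teacher rule
    w_student = (y[:, None] * X).sum(axis=0)   # Hebb rule
    R = (w_student @ w_teacher) / (
        np.linalg.norm(w_student) * np.linalg.norm(w_teacher))
    return np.arccos(R) / np.pi

eps_small = generalization_error(P=20)    # alpha = 0.1: large error
eps_large = generalization_error(P=4000)  # alpha = 20:  small error
```

The statistical-physics machinery reviewed in the chapter computes such curves analytically in the thermodynamic limit, instead of by simulation.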


Author(s): Estela Suarez, Susanne Kunkel, Anne Küsters, Hans Ekkehard Plesser, Thomas Lippert

Abstract: The precise simulation of the human brain requires coupling different models in order to cover the different physiological and functional aspects of this extremely complex organ. Each of these brain models is implemented following specific mathematical and programming approaches, potentially leading to diverging computational behaviour and requirements. Such a situation is the typical use case that can benefit from the Modular Supercomputing Architecture (MSA), which organizes heterogeneous computing resources at the system level. This architecture and its corresponding software environment make it possible to run each part of an application or a workflow on the best-suited hardware. This paper presents the MSA concept, covering current hardware and software implementations, and describes how the neuroscientific workflow resulting from coupling the codes NEST and Arbor is being prepared to exploit the MSA.
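On an MSA system, running each coupled code on its best-suited module typically maps onto a Slurm heterogeneous job. The fragment below is a hypothetical sketch only: partition names, node counts, and launcher scripts are invented, and the actual NEST/Arbor coupling uses its own launch machinery.

```shell
#!/bin/bash
# Hypothetical sketch: one coupled NEST+Arbor run spanning two MSA modules
# via a Slurm heterogeneous job. All names below are illustrative.
#SBATCH --job-name=nest-arbor
#SBATCH --partition=cluster --nodes=4                 # CPU module (e.g. NEST)
#SBATCH hetjob
#SBATCH --partition=booster --nodes=2 --gres=gpu:4    # GPU module (e.g. Arbor)

# A single srun launches both components; ':' separates the het-job groups,
# which share one MPI_COMM_WORLD and can therefore exchange spikes.
srun --het-group=0 ./run_nest.sh : --het-group=1 ./run_arbor.sh
```

The key property is that both components start together and can communicate, while each lands on the hardware that suits it.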


Author(s): Alberto Antonietti, Claudia Casellato, Egidio D’Angelo, Alessandra Pedrocchi

Abstract: Nowadays, clinicians have multiple tools to stimulate the brain by means of electric or magnetic fields that can interfere with the bio-electrical behaviour of neurons. However, it is still unclear which neural mechanisms are involved and how the external stimulation changes the neural responses at the network level. In this paper, we have exploited simulations carried out with a spiking neural network model, which reconstructs the cerebellar system, to shed light on the mechanisms by which cerebellar Transcranial Magnetic Stimulation (TMS) affects specific task behaviour. Namely, two computational studies have been merged and compared. The two studies employed a very similar experimental protocol: a first session of Pavlovian associative conditioning, the administration of TMS (effective or sham), a washout period, and a second session of Pavlovian associative conditioning. In one study, the washout period between the two sessions was long (1 week), while the other study used a very short washout (15 min). The computational models suggested a mechanistic explanation for the TMS effect on the cerebellum. In this work, we have found that the duration of the washout strongly changes the modification of plasticity mechanisms in the cerebellar network, which is then reflected in the learning behaviour.
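Why washout duration matters can be shown with a deliberately oversimplified stand-in for the detailed spiking model: treat the TMS-induced plasticity change as a single variable that recovers exponentially during washout. The time constant below is made up for illustration and is not taken from either study.

```python
import numpy as np

TAU_MINUTES = 60.0   # hypothetical recovery time constant of the plasticity change

def residual_change(washout_minutes):
    """Fraction of the TMS-induced plasticity change still present
    after the washout period (simple exponential recovery)."""
    return np.exp(-washout_minutes / TAU_MINUTES)

short = residual_change(15)            # 15-minute washout: most change remains
long_ = residual_change(7 * 24 * 60)   # 1-week washout: change fully decayed
```

With a short washout the second conditioning session starts from an altered network, while after a week it starts from an effectively reset one; the spiking model captures the same dissociation with biologically grounded plasticity rules.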


Author(s): Souad Khellat-Kihel, Zhenan Sun, Massimo Tistarelli

Abstract: Recent research on face analysis has demonstrated the richness of the information embedded in feature vectors extracted from a deep convolutional neural network. Even though deep learning has achieved very high performance on several challenging visual tasks, such as determining the identity, age, gender and race of a subject, it still lacks a well-grounded theory that would allow a proper understanding of the processes taking place inside the network layers. Therefore, most of the underlying processes are unknown and not easy to control. On the other hand, the human visual system follows a well-understood process in analyzing a scene or an object, such as a face. The eye gaze is repeatedly directed, through purposively planned saccadic movements, towards salient regions to capture several details. In this paper we propose to capitalize on the knowledge of the saccadic human visual processes to design a system for predicting facial attributes that embeds a biologically inspired network architecture, the HMAX. The architecture is tailored to predict attributes with different textural information and different semantic meaning, such as attributes related and unrelated to the subject’s identity. Salient points on the face are extracted from the outputs of the S2 layer of the HMAX architecture and fed to a local texture characterization module based on Local Binary Patterns (LBP). The resulting feature vector is used to perform a binary classification on a set of pre-defined visual attributes. The devised system distills a very informative, yet robust, representation of the imaged faces, achieving high performance with a much simpler architecture than a deep convolutional neural network. Several experiments performed on publicly available, challenging, large datasets demonstrate the validity of the proposed approach.
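The LBP stage of such a pipeline is standard and easy to sketch: each pixel is encoded as an 8-bit pattern of comparisons with its eight neighbours, and a histogram of the codes around a salient point becomes the texture feature. The 9×9 window and the salient-point interface below are illustrative choices, not the paper's exact parameters.

```python
import numpy as np

def lbp_codes(img):
    """8-neighbour Local Binary Pattern codes for the interior pixels:
    bit b is set when neighbour b is >= the centre pixel."""
    c = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    return code

def lbp_histogram(img, point):
    """256-bin LBP histogram of a 9x9 window around a salient point (y, x)."""
    y, x = point
    window = img[y - 4:y + 5, x - 4:x + 5]
    return np.bincount(lbp_codes(window).ravel(), minlength=256)

# On a flat patch every neighbour ties with the centre, so all bits are set:
codes = lbp_codes(np.ones((9, 9)))
hist = lbp_histogram(np.ones((20, 20)), (10, 10))
```

Concatenating such histograms over the S2-derived salient points yields the feature vector that the binary attribute classifiers consume.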


Author(s): Kai Benning, Miriam Menzel, Jan André Reuter, Markus Axer

Abstract: In recent years, Independent Component Analysis (ICA) has successfully been applied to remove noise and artifacts from images obtained with Three-dimensional Polarized Light Imaging (3D-PLI) at the mesoscale (i.e., 64 µm). Here, we present an automatic denoising procedure for gray matter regions that makes it possible to apply ICA to microscopic images as well, with reasonable computational effort. In addition to an automatic segmentation of gray matter regions, we applied the denoising procedure to several 3D-PLI images from a rat and a vervet monkey brain section.
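The core idea of component-based denoising can be sketched without the actual ICA machinery: decompose the image series, keep the components carrying signal, and discard the rest. Below, plain PCA via SVD stands in for ICA (a deliberate simplification; the chapter's method uses Independent Component Analysis on real 3D-PLI data), applied to a synthetic rank-one signal plus noise.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "image series": 32 noisy observations of one underlying pattern
clean = np.sin(np.linspace(0, 4 * np.pi, 256))[None, :] * rng.random((32, 1))
noisy = clean + rng.normal(0, 0.3, clean.shape)

# PCA stand-in for the ICA step: keep only the strongest component
U, s, Vt = np.linalg.svd(noisy, full_matrices=False)
k = 1
denoised = (U[:, :k] * s[:k]) @ Vt[:k]

err_noisy = np.linalg.norm(noisy - clean)
err_denoised = np.linalg.norm(denoised - clean)
```

ICA differs from PCA in seeking statistically independent (rather than merely decorrelated) components, which is what lets it isolate artifact sources in 3D-PLI; the keep-or-discard logic is the same.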


Author(s): Chiara Zucco, Barbara Calabrese, Mario Cannataro

Abstract: In the last decade, Sentiment Analysis and Affective Computing have found applications in different domains. In particular, the interest in extracting emotions in healthcare is demonstrated by the various applications encompassing patient monitoring and adverse event prediction. Thanks to the availability of large datasets, most of which are extracted from social media platforms, several techniques for extracting emotion and opinion from different modalities have been proposed, using both unimodal and multimodal approaches. After introducing the basic concepts related to emotion theories, mainly borrowed from the social sciences, the present work reviews the three basic modalities used in emotion recognition, i.e. textual, audio and video, presenting for each of these (i) some basic methodologies, (ii) some of the widely used datasets for training supervised algorithms, and (iii) a brief discussion of some deep learning architectures. Furthermore, the paper outlines the challenges and existing resources for multimodal emotion recognition, which may improve performance by combining at least two unimodal approaches.
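The simplest way to combine unimodal approaches is late fusion: run each modality's classifier separately and merge the predicted class probabilities. The probability values below are made-up outputs of hypothetical text, audio, and video emotion classifiers over an illustrative label set.

```python
import numpy as np

LABELS = ["angry", "happy", "sad", "neutral"]   # illustrative emotion classes

# Made-up per-modality class probabilities from hypothetical unimodal models
p_text  = np.array([0.10, 0.70, 0.10, 0.10])
p_audio = np.array([0.20, 0.50, 0.20, 0.10])
p_video = np.array([0.05, 0.60, 0.25, 0.10])

# Late fusion by averaging (weighted averages or learned fusion also common)
p_fused = (p_text + p_audio + p_video) / 3
predicted = LABELS[int(np.argmax(p_fused))]
```

Early fusion (concatenating features before classification) and model-level fusion are the usual alternatives; late fusion is the easiest to retrofit onto existing unimodal systems.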


Author(s): Sabrina Behuet, Sebastian Bludau, Olga Kedo, Christian Schiffer, Timo Dickscheid, ...

Abstract: The ‘BigBrain’ is a high-resolution data set of the human brain that enables three-dimensional (3D) analyses with a 20 µm spatial resolution at nearly cellular level. We use this data set to explore the pre-α (cell) islands of layer 2 in the entorhinal cortex (EC), which are affected early in Alzheimer’s disease and have therefore been a focus of research for many years. They appear mostly in round and elongated shapes, as shown in microscopic studies. Some studies suggested that the islands may be interconnected, based on analyses of their shape and size in two-dimensional (2D) space. Here, we characterized the morphological features (shape, size, and distribution) of pre-α islands in the ‘BigBrain’, based on 3D reconstructions of gapless series of cell-body-stained sections. The EC was annotated manually, and a machine-learning tool was trained to identify and segment the islands, with subsequent visualization using high-performance computing (HPC). The islands were visualized as 3D surfaces and their geometry was analyzed. Their morphology was complex: they appeared to be composed of interconnected islands of the different types found in 2D histological sections of the EC, with various shapes in 3D. Differences between the rostral and caudal parts of the EC were identified through the specific distribution and size of the islands, with implications for the connectivity and function of the EC. A 3D compactness analysis found more round and complex islands than elongated ones. The present study represents a use case for studying large microscopic data sets. It provides reference data for studies, e.g. those investigating neurodegenerative diseases, in which specific alterations in layer 2 were previously reported.
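One common way to quantify 3D compactness of the kind used to separate round from elongated islands is sphericity, which compares a shape's surface area to that of a sphere of equal volume. The chapter does not state its exact compactness formula, so the index below is an illustrative standard choice.

```python
import numpy as np

def sphericity(volume, surface_area):
    """Sphericity: 1.0 for a perfect sphere, smaller for elongated or
    complex shapes (surface area of the equal-volume sphere divided by
    the shape's actual surface area)."""
    return (np.pi ** (1 / 3) * (6 * volume) ** (2 / 3)) / surface_area

r = 2.0
sphere = sphericity(4 / 3 * np.pi * r**3, 4 * np.pi * r**2)

# An elongated rod (cylinder, radius 0.5, height 20) scores much lower:
rod = sphericity(np.pi * 0.5**2 * 20, 2 * np.pi * 0.5 * 20 + 2 * np.pi * 0.5**2)
```

Applied to the segmented island surfaces, such an index turns the round-versus-elongated distinction into a single comparable number per island.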


Author(s): Nicola Strisciuglio, Nicolai Petkov

Abstract: The study of the visual system of the brain has attracted the attention and interest of many neuroscientists, who have derived computational models of some of the types of neuron that compose it. These findings inspired researchers in image processing and computer vision to deploy such models to solve problems of visual data processing. In this paper, we review approaches to image processing and computer vision whose design is based on neuroscientific findings about the functions of some neurons in the visual cortex. Furthermore, we analyze the connection between the hierarchical organization of the visual system of the brain and the structure of Convolutional Networks (ConvNets). We pay particular attention to the mechanisms of inhibition of the responses of some neurons, which provide the visual system with improved stability to changing input stimuli, and discuss their implementation in image processing operators and in ConvNets.
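One such inhibition mechanism is push-pull inhibition, where an excitatory filter response is suppressed by the rectified response of its negated counterpart. The 1-D sketch below is a minimal illustration of that idea, not the operators defined in the paper; the kernel and inhibition strength are arbitrary.

```python
import numpy as np

def push_pull(signal, kernel, alpha=0.8):
    """Push-pull inhibition sketch: the rectified 'push' (excitatory)
    response is reduced by alpha times the rectified 'pull' response
    of the negated kernel, which reacts to anti-preferred input."""
    push = np.maximum(0, np.convolve(signal, kernel, mode="same"))
    pull = np.maximum(0, np.convolve(signal, -kernel, mode="same"))
    return push - alpha * pull

# A bump matching the kernel excites the unit; its negation inhibits it.
on_response = push_pull(np.array([0., 0., 1., 0., 0.]), np.array([1., 2., 1.]))
off_response = push_pull(np.array([0., 0., -1., 0., 0.]), np.array([1., 2., 1.]))
```

Because noise tends to drive push and pull responses together, the subtraction cancels much of it, which is the stability property the review highlights and which carries over to push-pull layers in ConvNets.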

