Spiking neural network model of motor cortex with joint excitatory and inhibitory clusters reflects task uncertainty, reaction times, and variability dynamics

Author(s):  
Vahid Rostami ◽  
Thomas Rost ◽  
Alexa Riehle ◽  
Sacha J. van Albada ◽  
Martin P. Nawrot

Abstract
Both neural activity and the behavior of highly trained animals are strikingly variable across repetitions of behavioral trials. Neural variability consistently decreases during behavioral tasks, in both sensory and motor cortices. Behavioral variability, on the other hand, changes depending on the difficulty of the task and the animal's performance.

Here we study a mechanism for such variability in spiking neural network models with cluster topologies that enable multistability and attractor dynamics, features subserving functional roles such as decision-making, (working) memory, and learning. Multistable attractors have been studied in spiking neural networks through clusters of strongly interconnected excitatory neurons. However, we show that this network topology results in the loss of excitation/inhibition balance and does not confer robustness against modulation of network activity. Moreover, it leads to widely separated firing rate states of single neurons, inconsistent with experimental observations.

To overcome these problems, we propose that a combination of excitatory and inhibitory clustering restores the local excitation/inhibition balance. This network architecture is inspired by recent anatomical and physiological studies that point to increased local inhibitory connectivity and possible inhibitory clustering through connection strengths.

We find that inhibitory clustering supports realistic spiking activity in terms of biologically realistic firing rates, spiking irregularity, and trial-to-trial spike count variability. Furthermore, with appropriate stimulation of network clusters, this network topology enabled us to qualitatively and quantitatively reproduce the in vivo firing rates, variability dynamics, and behavioral reaction times observed for different task conditions in recordings from the motor cortex of behaving monkeys.
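To illustrate the kind of topology the abstract describes, here is a minimal NumPy sketch (not the authors' code) of block-clustered random connectivity in which both excitatory and inhibitory populations are partitioned into aligned clusters and within-cluster synapses are scaled up. All sizes, probabilities, and weight values are placeholder assumptions chosen for readability, not the values used in the paper.

# Illustrative sketch: block-clustered random connectivity with joint E and I clusters.
import numpy as np

rng = np.random.default_rng(0)

n_clusters = 4          # number of joint E/I clusters (assumption)
n_e, n_i = 400, 100     # E and I neurons per cluster (assumption)
p = 0.2                 # baseline connection probability (assumption)
j_plus = 2.0            # within-cluster weight amplification (assumption)

def clustered_weights(n_per_cluster, base_w):
    """Random weight matrix where within-cluster synapses are scaled by j_plus."""
    n_total = n_clusters * n_per_cluster
    w = (rng.random((n_total, n_total)) < p) * base_w
    for c in range(n_clusters):
        sl = slice(c * n_per_cluster, (c + 1) * n_per_cluster)
        w[sl, sl] *= j_plus          # strengthen synapses inside the same cluster
    return w

w_ee = clustered_weights(n_e, base_w=+0.1)   # E -> E, clustered
w_ii = clustered_weights(n_i, base_w=-0.4)   # I -> I, clustered (joint E/I clustering)
# The E<->I blocks would be built analogously, aligning each I cluster with its E cluster,
# so that strong local excitation is balanced by matching local inhibition.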

2021 ◽  
Vol 12 (6) ◽  
pp. 1-21
Author(s):  
Jayant Gupta ◽  
Carl Molnar ◽  
Yiqun Xie ◽  
Joe Knight ◽  
Shashi Shekhar

Spatial variability is a prominent feature of various geographic phenomena such as climatic zones, USDA plant hardiness zones, and terrestrial habitat types (e.g., forests, grasslands, wetlands, and deserts). However, current deep learning methods follow a spatial one-size-fits-all (OSFA) approach, training single deep neural network models that do not account for spatial variability. Quantifying spatial variability can be challenging due to the influence of many geophysical factors. In preliminary work, we proposed a spatial variability aware neural network (SVANN-I, formerly called SVANN) approach in which the weights are a function of location but the neural network architecture is location independent. In this work, we explore a more flexible SVANN-E approach in which the neural network architecture varies across geographic locations. In addition, we provide a taxonomy of SVANN types and a physics-inspired interpretation model. Experiments with aerial-imagery-based wetland mapping show that SVANN-I outperforms OSFA and that SVANN-E performs best of all.
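As a rough illustration of the distinction drawn in the abstract, the following PyTorch sketch (not the authors' implementation) contrasts SVANN-I, where one shared architecture is trained with a separate weight set per geographic zone, with SVANN-E, where the architecture itself differs by zone. The zone names, layer sizes, and routing function are assumptions.

# Illustrative sketch: location-dependent weights (SVANN-I) vs. location-dependent architecture (SVANN-E).
import torch
import torch.nn as nn

zones = ["wetland_zone_a", "wetland_zone_b"]   # hypothetical spatial partitions
n_features = 8                                  # per-pixel input features (assumption)

def shared_architecture():
    # SVANN-I: identical architecture everywhere; only the trained weights differ per zone.
    return nn.Sequential(nn.Linear(n_features, 16), nn.ReLU(), nn.Linear(16, 2))

svann_i = {z: shared_architecture() for z in zones}      # one weight set per zone

svann_e = {                                              # SVANN-E: architecture varies by zone too
    "wetland_zone_a": nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(),
                                    nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 2)),
    "wetland_zone_b": nn.Sequential(nn.Linear(n_features, 8), nn.ReLU(), nn.Linear(8, 2)),
}

def predict(models, zone, x):
    """Route each sample to the model associated with its geographic zone."""
    return models[zone](x)

x = torch.randn(4, n_features)
print(predict(svann_i, "wetland_zone_a", x).shape)   # torch.Size([4, 2])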


2019 ◽  
Vol 53 (1) ◽  
pp. 2-19 ◽  
Author(s):  
Erion Çano ◽  
Maurizio Morisio

Purpose
The impressive results of convolutional neural networks in image-related tasks have attracted the attention of researchers in text mining, sentiment analysis, and other text analysis fields. It is, however, difficult to find enough data to feed such networks, to optimize their parameters, and to make the right design choices when constructing network architectures. The purpose of this paper is to present the creation steps of two big data sets of song emotions. The authors also explore the use of convolutional and max-pooling neural layers on song lyrics and on product and movie review text data sets. Three variants of a simple and flexible neural network architecture are also compared.

Design/methodology/approach
The intention was to spot any important patterns that can serve as guidelines for parameter optimization of similar models. The authors also wanted to identify architecture design choices that lead to high-performing sentiment analysis models. To this end, the authors conducted a series of experiments with neural architectures of various configurations.

Findings
The results indicate that parallel convolutions with filter lengths up to 3 are usually enough for capturing relevant text features. Also, the max-pooling region size should be adapted to the length of the text documents to produce the best feature maps.

Originality/value
The authors' top results were obtained with feature maps of lengths 6–18. Future neural network models for sentiment analysis could be improved by generating sentiment polarity predictions for documents through aggregation of predictions on smaller excerpts of the entire text.
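A minimal sketch, assuming a PyTorch implementation rather than the authors' model, of the kind of architecture the findings point to: parallel 1-D convolutions with filter lengths 1-3 over word embeddings, max-pooling with a region size that is adapted to document length, and a small classifier. Vocabulary size, embedding dimension, filter count, and pooling size are placeholder values.

# Illustrative sketch: parallel convolutions (filter lengths 1-3) with max-pooling for sentiment analysis.
import torch
import torch.nn as nn

class ParallelConvSentiment(nn.Module):
    def __init__(self, vocab_size=10000, emb_dim=100, n_filters=64, pool_size=6):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Parallel convolutions with filter lengths 1, 2, 3, as suggested by the findings.
        self.convs = nn.ModuleList(
            nn.Conv1d(emb_dim, n_filters, kernel_size=k, padding=k - 1) for k in (1, 2, 3)
        )
        self.pool = nn.MaxPool1d(pool_size)   # region size to be adapted to document length
        self.fc = nn.LazyLinear(2)            # infers input size; outputs positive/negative logits

    def forward(self, token_ids):             # token_ids: (batch, seq_len)
        x = self.embed(token_ids).transpose(1, 2)            # (batch, emb_dim, seq_len)
        feats = [self.pool(torch.relu(conv(x))) for conv in self.convs]
        feats = torch.cat([f.flatten(1) for f in feats], dim=1)
        return self.fc(feats)

model = ParallelConvSentiment()
logits = model(torch.randint(0, 10000, (4, 120)))   # batch of 4 documents, 120 tokens each
print(logits.shape)                                  # torch.Size([4, 2])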


Author(s):  
Yihao Luo ◽  
Quanzheng Yi ◽  
Tianjiang Wang ◽  
Ling Lin ◽  
Yan Xu ◽  
...  

2018 ◽  
Vol 8 (1) ◽  
Author(s):  
Zohreh Gholami Doborjeh ◽  
Nikola Kasabov ◽  
Maryam Gholami Doborjeh ◽  
Alexander Sumich

Author(s):  
Ratish Puduppully ◽  
Li Dong ◽  
Mirella Lapata

Recent advances in data-to-text generation have led to the use of large-scale datasets and neural network models that are trained end-to-end, without explicitly modeling what to say and in what order. In this work, we present a neural network architecture that incorporates content selection and planning without sacrificing end-to-end training. We decompose the generation task into two stages. Given a corpus of data records (paired with descriptive documents), we first generate a content plan highlighting which information should be mentioned and in which order, and then generate the document while taking the content plan into account. Automatic and human-based evaluation experiments show that our model outperforms strong baselines, improving the state of the art on the recently released RotoWIRE dataset.
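The two-stage decomposition can be made concrete with a schematic Python sketch. This is not the authors' neural model: the record format, the toy scoring rule, and the template realizer are assumptions standing in for the learned content-planning and generation stages.

# Schematic sketch: stage 1 selects and orders records (content plan); stage 2 realizes text from the plan.
from typing import List, Dict

Record = Dict[str, str]   # e.g. {"entity": "Hawks", "type": "PTS", "value": "109"} (assumed format)

def plan_content(records: List[Record], k: int = 2) -> List[Record]:
    """Stage 1 (content selection and planning): pick and order the records to verbalize.
    A real model would score records with a neural network; here we simply sort by value."""
    selected = sorted(records, key=lambda r: int(r["value"]), reverse=True)
    return selected[:k]

def generate_text(plan: List[Record]) -> str:
    """Stage 2 (generation): produce the document following the plan's order.
    A real model would use a neural decoder attending to the plan; here we use a template."""
    return " ".join(f"{r['entity']} recorded {r['value']} {r['type']}." for r in plan)

records = [
    {"entity": "Hawks", "type": "PTS", "value": "109"},
    {"entity": "Heat", "type": "PTS", "value": "103"},
    {"entity": "Hawks", "type": "AST", "value": "25"},
]
print(generate_text(plan_content(records)))
# Hawks recorded 109 PTS. Heat recorded 103 PTS.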

