Neuromodulatory connectivity defines the structure of a behavioral neural network

eLife ◽  
2017 ◽  
Vol 6 ◽  
Author(s):  
Feici Diao ◽  
Amicia D Elliott ◽  
Fengqiu Diao ◽  
Sarav Shah ◽  
Benjamin H White

Neural networks are typically defined by their synaptic connectivity, yet synaptic wiring diagrams often provide limited insight into network function. This is due partly to the importance of non-synaptic communication by neuromodulators, which can dynamically reconfigure circuit activity to alter its output. Here, we systematically map the patterns of neuromodulatory connectivity in a network that governs a developmentally critical behavioral sequence in Drosophila. This sequence, which mediates pupal ecdysis, is governed by the serial release of several key factors, which act both somatically as hormones and within the brain as neuromodulators. By identifying and characterizing the functions of the neuronal targets of these factors, we find that they define hierarchically organized layers of the network controlling the pupal ecdysis sequence: a modular input layer, an intermediate central pattern generating layer, and a motor output layer. Mapping neuromodulatory connections in this system thus defines the functional architecture of the network.

2010 ◽  
Vol 61 (2) ◽  
pp. 120-124 ◽  
Author(s):  
Ladislav Zjavka

Generalization of Patterns by Identification with Polynomial Neural Network

Artificial neural networks (ANNs) in general classify patterns according to their relationships, responding to related patterns with a similar output. Polynomial neural networks (PNNs) are capable of organizing themselves in response to certain features (relations) of the data. The polynomial neural network for dependence-of-variables identification (D-PNN) describes a functional dependence of input variables (not of entire patterns). It approximates the hyper-surface of this function with multi-parametric partial polynomials, forming its functional output as a generalization of the input patterns. This new type of neural network, designed by the author, is based on the GMDH polynomial neural network. The D-PNN operates in a way closer to brain learning than the ANN does. The ANN is in principle a simplified form of the PNN, in which the combinations of input variables are missing.
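
Since the approach builds on the GMDH principle, a minimal sketch of one GMDH-style building block may help: each unit fits a quadratic (Ivakhnenko) polynomial of two input variables by least squares, and a network grows by composing and selecting such units. This is an illustrative sketch of the general GMDH idea, not the author's D-PNN implementation; all names are hypothetical.

```python
import numpy as np

def fit_gmdh_unit(xi, xj, y):
    """Fit one GMDH unit: a quadratic (Ivakhnenko) polynomial of two
    input variables, with coefficients estimated by least squares."""
    # Design matrix for y ~ a0 + a1*xi + a2*xj + a3*xi*xj + a4*xi^2 + a5*xj^2
    A = np.column_stack([np.ones_like(xi), xi, xj, xi * xj, xi**2, xj**2])
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs

def eval_gmdh_unit(coeffs, xi, xj):
    """Evaluate the fitted two-variable polynomial."""
    A = np.column_stack([np.ones_like(xi), xi, xj, xi * xj, xi**2, xj**2])
    return A @ coeffs

# Toy usage: recover a hidden functional dependence y = f(x1, x2) from samples.
rng = np.random.default_rng(0)
x1, x2 = rng.uniform(-1, 1, 200), rng.uniform(-1, 1, 200)
y = 0.5 + 2.0 * x1 * x2 - x2**2
c = fit_gmdh_unit(x1, x2, y)
print(np.round(c, 3))   # approximately [0.5, 0, 0, 2, 0, -1]
```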


2006 ◽  
Vol 6 ◽  
pp. 992-997 ◽  
Author(s):  
Alison M. Kerr

More than 20 years of clinical and research experience with affected people in the British Isles has provided insight into particular challenges for therapists, educators, or parents wishing to facilitate learning and to support the development of skills in people with Rett syndrome. This paper considers the challenges in two groups: those due to constraints imposed by the disabilities associated with the disorder, and those stemming from the opportunities, often masked by the disorder, for developing skills that depend on less-affected areas of the brain. Because the disorder interferes with the synaptic links between neurones, the functions of the brain that are most dependent on complex neural networks are the most profoundly affected. These functions include speech, memory, learning, generation of ideas, and the planning of fine movements, especially those of the hands. In contrast, spontaneous emotional and hormonal responses appear relatively intact. Whereas failure to appreciate the physical limitations of the disease leads to frustration for therapist and client alike, a clear understanding of the better-preserved areas of competence offers avenues for real progress in learning, the building of satisfying relationships, and the achievement of a good quality of life.


Entropy ◽  
2020 ◽  
Vol 22 (12) ◽  
pp. 1365 ◽  
Author(s):  
Bogdan Muşat ◽  
Răzvan Andonie

Convolutional neural networks utilize a hierarchy of neural network layers. The statistical aspects of information concentration in successive layers can bring insight into the feature abstraction process. We analyze the saliency maps of these layers from the perspective of semiotics, the study of signs and sign-using behavior. In computational semiotics, this aggregation operation (known as superization) is accompanied by a decrease of spatial entropy: signs are aggregated into supersigns. Using spatial entropy, we compute the information content of the saliency maps and study the superization processes which take place between successive layers of the network. In our experiments, we visualize the superization process and show how the obtained knowledge can be used to explain the neural decision model. In addition, we attempt to optimize the architecture of the neural model employing a semiotic greedy technique. To the best of our knowledge, this is the first application of computational semiotics in the analysis and interpretation of deep neural networks.
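
As a concrete illustration of the kind of measurement described, the sketch below computes the Shannon entropy of a saliency map treated as a probability distribution over spatial locations. The paper's exact definition of spatial entropy may differ in detail; this is an assumption-laden reconstruction, not the authors' code.

```python
import numpy as np

def spatial_entropy(saliency):
    """Shannon entropy (bits) of a non-negative saliency map, treated
    as a probability distribution over spatial locations."""
    s = np.asarray(saliency, dtype=float)
    p = s / s.sum()                 # normalize to a distribution
    p = p[p > 0]                    # convention: 0 * log 0 = 0
    return float(-(p * np.log2(p)).sum())

# Toy comparison: a diffuse map has higher entropy than a peaked one,
# mirroring the entropy decrease expected under superization.
diffuse = np.ones((8, 8))
peaked = np.zeros((8, 8)); peaked[3, 3] = 1.0
print(spatial_entropy(diffuse))     # 6.0 bits (uniform over 64 cells)
print(spatial_entropy(peaked))      # 0.0 bits (all mass in one cell)
```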


2020 ◽  
Author(s):  
Haider Al-Tahan ◽  
Yalda Mohsenzadeh

Abstract
While vision evokes a dense network of feedforward and feedback neural processes in the brain, visual processes are primarily modeled with feedforward hierarchical neural networks, leaving the computational role of feedback processes poorly understood. Here, we developed a generative autoencoder neural network model and adversarially trained it on a categorically diverse data set of images. We hypothesized that the feedback processes in the ventral visual pathway can be represented by reconstruction of the visual information performed by the generative model. We compared representational similarity of the activity patterns in the proposed model with temporal (magnetoencephalography) and spatial (functional magnetic resonance imaging) visual brain responses. The proposed generative model identified two segregated neural dynamics in the visual brain: a temporal hierarchy of processes transforming low-level visual information into high-level semantics in the feedforward sweep, and a temporally later dynamics of inverse processes reconstructing low-level visual information from a high-level latent representation in the feedback sweep. Our results add to previous studies on neural feedback processes by presenting new insight into the algorithmic function of, and the information carried by, the feedback processes in the ventral visual pathway.

Author summary
It has been shown that the ventral visual cortex consists of a dense network of regions with feedforward and feedback connections. The feedforward path processes visual inputs along a hierarchy of cortical areas that starts in early visual cortex (an area tuned to low-level features, e.g. edges/corners) and ends in inferior temporal cortex (an area that responds to higher-level categorical contents, e.g. faces/objects). The feedback connections, in turn, modulate neuronal responses in this hierarchy by broadcasting information from higher to lower areas. In recent years, deep neural network models trained on object recognition tasks have achieved human-level performance and shown activation patterns similar to those of the visual brain. In this work, we developed a generative neural network model that consists of encoding and decoding sub-networks. By comparing this computational model with human brain temporal (magnetoencephalography) and spatial (functional magnetic resonance imaging) response patterns, we found that the encoder processes resemble the brain's feedforward processing dynamics and the decoder shares similarity with the brain's feedback processing dynamics. These results provide an algorithmic insight into the spatiotemporal dynamics of feedforward and feedback processes in biological vision.
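
The representational similarity comparison mentioned above is commonly done by building a representational dissimilarity matrix (RDM) for each system (a model layer, an MEG time point, or an fMRI region) and correlating the matrices' upper triangles. A minimal sketch under that assumption, using correlation-distance RDMs and a Spearman comparison; the authors' exact pipeline may differ, and all data here is simulated.

```python
import numpy as np
from scipy.stats import spearmanr
from scipy.spatial.distance import pdist, squareform

def rdm(activations):
    """RDM: pairwise correlation distance between condition patterns.
    `activations` has shape (n_conditions, n_features)."""
    return squareform(pdist(activations, metric="correlation"))

def rsa_similarity(rdm_a, rdm_b):
    """Spearman correlation between the upper triangles of two RDMs."""
    iu = np.triu_indices_from(rdm_a, k=1)
    rho, _ = spearmanr(rdm_a[iu], rdm_b[iu])
    return rho

# Toy usage: compare a model layer against a (simulated) brain response.
rng = np.random.default_rng(1)
model_layer = rng.normal(size=(20, 128))               # 20 stimuli, 128 units
brain_resp = model_layer @ rng.normal(size=(128, 64))  # linearly related
print(rsa_similarity(rdm(model_layer), rdm(brain_resp)))
```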


2013 ◽  
Vol 7 (1) ◽  
pp. 49-62 ◽  
Author(s):  
Vijaykumar Sutariya ◽  
Anastasia Groshev ◽  
Prabodh Sadana ◽  
Deepak Bhatia ◽  
Yashwant Pathak

Artificial neural network (ANN) technology models the pattern-recognition capabilities of the neural networks of the brain. Like a single neuron in the brain, an artificial neuron unit receives inputs from many external sources, processes them, and makes a decision. ANNs simulate the biological nervous system and draw on analogues of adaptive biological neurons. ANNs do not require rigidly structured experimental designs and can map functions using historical or incomplete data, which makes them a powerful tool for the simulation of various non-linear systems. ANNs have many applications in various fields, including engineering, psychology, medicinal chemistry, and pharmaceutical research. Because of their capacity for prediction, pattern recognition, and modeling, ANNs have been very useful in many aspects of pharmaceutical research, including modeling of the brain's neural network, analytical data analysis, drug modeling, protein structure and function, dosage optimization and manufacturing, pharmacokinetic and pharmacodynamic modeling, and in vitro-in vivo correlations. This review discusses the applications of ANNs in drug delivery and pharmacological research.
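
The neuron unit described above (inputs weighted, summed, and passed through a decision nonlinearity) can be written in a few lines. A minimal sketch, assuming a sigmoid activation; the weights and inputs are illustrative.

```python
import numpy as np

def artificial_neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs plus bias,
    passed through a sigmoid 'decision' nonlinearity."""
    z = np.dot(weights, inputs) + bias
    return 1.0 / (1.0 + np.exp(-z))

# Toy usage: three external input sources, one output in (0, 1).
x = np.array([0.2, 0.9, -0.4])
w = np.array([1.5, -0.8, 2.0])
print(artificial_neuron(x, w, bias=0.1))
```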


Author(s):  
Thomas P. Trappenberg

This chapter discusses the basic operation of an artificial neural network, which is the major paradigm of deep learning. The name derives from an analogy to a biological brain. The discussion begins by outlining the basic operations of neurons in the brain and how these operations are abstracted by simple neuron models. It then builds networks of artificial neurons that constitute much of the recent success of AI. The focus of this chapter is on using such techniques, with subsequent consideration of their theoretical embedding.
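
The step from single neurons to the networks this chapter describes is essentially function composition: stack weighted sums and nonlinearities layer by layer. A minimal sketch under our own naming, complementing the single-neuron example above.

```python
import numpy as np

def layer(x, W, b):
    """One layer of artificial neurons: affine map plus tanh nonlinearity."""
    return np.tanh(W @ x + b)

def two_layer_network(x, params):
    """Compose layers: the core abstraction behind deep networks."""
    (W1, b1), (W2, b2) = params
    return layer(layer(x, W1, b1), W2, b2)

# Toy usage: 4 inputs -> 3 hidden neurons -> 1 output.
rng = np.random.default_rng(2)
params = [(rng.normal(size=(3, 4)), np.zeros(3)),
          (rng.normal(size=(1, 3)), np.zeros(1))]
print(two_layer_network(rng.normal(size=4), params))
```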


2012 ◽  
Vol 263-266 ◽  
pp. 3374-3377 ◽  
Author(s):  
Hua Liang Wu ◽  
Zhen Dong Mu ◽  
Jian Feng Hu

Neural networks are often used as classification tools. In this paper, a neural network is applied to motor imagery EEG analysis: the EEG first undergoes a Hjorth transformation, the brain electrical signal is then converted into the frequency domain, and finally the Fisher distance is used for feature extraction. The recognition rate was 97.86% on the study (training) samples and 80% on the test samples.
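
The Hjorth transformation referred to above is commonly understood as computing Hjorth's activity, mobility, and complexity descriptors from the signal and its successive derivatives. A minimal sketch under that assumption, not the authors' code; the sampling rate and test signal are illustrative.

```python
import numpy as np

def hjorth_parameters(signal):
    """Hjorth descriptors of a 1-D EEG signal:
    activity = var(x), mobility = sqrt(var(x')/var(x)),
    complexity = mobility(x') / mobility(x)."""
    x = np.asarray(signal, dtype=float)
    dx = np.diff(x)      # first derivative (finite difference)
    ddx = np.diff(dx)    # second derivative
    activity = np.var(x)
    mobility = np.sqrt(np.var(dx) / np.var(x))
    complexity = np.sqrt(np.var(ddx) / np.var(dx)) / mobility
    return activity, mobility, complexity

# Toy usage: descriptors of a noisy 10 Hz oscillation (256 Hz sampling).
t = np.arange(0, 2, 1 / 256)
eeg = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.default_rng(3).normal(size=t.size)
print(hjorth_parameters(eeg))
```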


2021 ◽  
Vol 21 (S2) ◽  
Author(s):  
Xiaoming Yu ◽  
Yedan Shen ◽  
Yuan Ni ◽  
Xiaowei Huang ◽  
Xiaolong Wang ◽  
...  

Abstract

Background
Text Matching (TM) is a fundamental task of natural language processing widely used in many application systems such as information retrieval, automatic question answering, machine translation, dialogue systems, reading comprehension, etc. In recent years, a large number of deep learning neural networks have been applied to TM and have repeatedly refreshed TM benchmarks. Among these, the convolutional neural network (CNN) is one of the most popular, but it suffers from difficulties in dealing with small samples and in preserving the relative structure of features. In this paper, we propose a novel deep learning architecture for TM based on the capsule network, called CapsTM; the capsule network is a new type of neural network architecture proposed to address some of the shortcomings of CNNs and shows great potential in many tasks.

Methods
CapsTM is a five-layer neural network comprising an input layer, a representation layer, an aggregation layer, a capsule layer, and a prediction layer. In CapsTM, two pieces of text are first individually converted into sequences of embeddings and further transformed by a highway network in the input layer. Then, a Bidirectional Long Short-Term Memory (BiLSTM) network represents each piece of text, and an attention-based interaction matrix represents the interactive information of the two pieces of text in the representation layer. Subsequently, the two kinds of representations are fused together by a BiLSTM in the aggregation layer and are further represented with capsules (vectors) in the capsule layer. Finally, the prediction layer is a fully connected network used for classification. CapsTM is an extension of ESIM obtained by adding a capsule layer before the prediction layer.

Results
We construct a corpus of Chinese medical question matching containing 36,360 question pairs, randomly split into three parts: a training set of 32,360 question pairs, a development set of 2,000 question pairs, and a test set of 2,000 question pairs. On this corpus, we conduct a series of experiments to evaluate the proposed CapsTM and compare it with other state-of-the-art methods. CapsTM achieves the highest F-score of 0.8666.

Conclusion
The experimental results demonstrate that CapsTM is effective for Chinese medical question matching and outperforms the other state-of-the-art methods compared.
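
Of the components named above, the highway network in the input layer is the simplest to sketch: a learned transform gate T mixes a transformed signal H(x) with the untransformed input x, as y = T(x) * H(x) + (1 - T(x)) * x. This follows the standard highway-network formulation, not the paper's code; layer sizes and names are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def highway_layer(x, W_h, b_h, W_t, b_t):
    """Highway layer: y = T(x) * H(x) + (1 - T(x)) * x, where the
    transform gate T decides how much of the input to rewrite."""
    H = np.tanh(W_h @ x + b_h)        # candidate transformation
    T = sigmoid(W_t @ x + b_t)        # transform gate in (0, 1)
    return T * H + (1.0 - T) * x      # gated mix of new and old signal

# Toy usage on a single 6-dimensional token embedding.
rng = np.random.default_rng(4)
x = rng.normal(size=6)
W_h, W_t = rng.normal(size=(6, 6)), rng.normal(size=(6, 6))
print(highway_layer(x, W_h, np.zeros(6), W_t, np.zeros(6)))
```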


2018 ◽  
Author(s):  
Sutedi Sutedi

Diabetes Mellitus (DM) is a dangerous disease that affects people across the various layers of working society. The disease is not easy for the general public to recognize accurately, so a system that can identify it accurately is needed. The system is built using a neural network with the backpropagation method and the sigmoid activation function. The neural network architecture uses 8 input neurons, 2 output neurons, and 5 hidden neurons. The results show that this method successfully classifies diabetic and non-diabetic data with an accuracy rate of nearly 100%.
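
A network of the shape described (8 inputs, one hidden layer of 5 units, 2 output classes, sigmoid activations, trained by backpropagation) can be sketched with scikit-learn. The data below is synthetic and merely stands in for the study's diabetes records.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in: 8 clinical features, binary diabetic/non-diabetic label.
rng = np.random.default_rng(5)
X = rng.normal(size=(300, 8))
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)   # toy decision rule

# 8 inputs -> 5 hidden units -> 2-class output; 'logistic' = sigmoid
# activation, trained by backpropagation (gradient-based optimization).
clf = MLPClassifier(hidden_layer_sizes=(5,), activation="logistic",
                    solver="adam", max_iter=2000, random_state=0)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```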


Author(s):  
Zahra A. Shirazi ◽  
Camila P. E. de Souza ◽  
Rasha Kashef ◽  
Felipe F. Rodrigues

Artificial neural networks (ANNs) are composed of nodes joined to each other through weighted connections. Deep learning, as an extension of ANNs, is a neural network model composed of different categories of layers: an input layer, hidden layers, and an output layer. Input data is fed into the first (input) layer, but the main processing of a neural network model takes place within the hidden layers, which range from a single layer to many. The structure of the hidden layers differs by the type of model, and different models are applied depending on the type of input data. For example, for image data, convolutional neural networks are the most appropriate; for text or sequential and time-series data, recurrent neural networks or long short-term memory models are the better choices. This chapter summarizes the state-of-the-art deep learning methods applied to the healthcare industry.
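
The data-type-to-architecture pairing described above can be made concrete with two minimal PyTorch model stubs, one convolutional for images and one LSTM-based for sequences. Layer sizes and the healthcare framing are illustrative assumptions, not taken from the chapter.

```python
import torch
import torch.nn as nn

# Convolutional model: suited to image data (e.g. 1x28x28 medical scans).
image_model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 14 * 14, 2),           # binary clinical label
)

# Recurrent model: suited to sequential/time-series data (e.g. vital signs).
class SequenceModel(nn.Module):
    def __init__(self, n_features=8, hidden=32, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                  # x: (batch, time, features)
        _, (h, _) = self.lstm(x)
        return self.head(h[-1])            # classify from last hidden state

print(image_model(torch.randn(4, 1, 28, 28)).shape)   # torch.Size([4, 2])
print(SequenceModel()(torch.randn(4, 50, 8)).shape)   # torch.Size([4, 2])
```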

