input neuron
Recently Published Documents

TOTAL DOCUMENTS: 14 (five years: 0)
H-INDEX: 5 (five years: 0)

eLife (2018), Vol 7
Author(s): Jing Peng, Ivan J Santiago, Curie Ahn, Burak Gur, C Kimberly Tsui, ...

Laminar arrangement of neural connections is a fundamental feature of neural circuit organization. Identifying mechanisms that coordinate neural connections within correct layers is thus vital for understanding how neural circuits are assembled. In the medulla of the Drosophila visual system, neurons form connections within ten parallel layers. The M3 layer receives input from two neuron types that sequentially innervate M3 during development. Here we show that M3-specific innervation by both neurons is coordinated by Drosophila Fezf (dFezf), a conserved transcription factor that is selectively expressed by the earlier-targeting input neuron. In this cell, dFezf instructs layer specificity and activates the expression of a secreted molecule (Netrin) that regulates the layer specificity of the other input neuron. We propose that the employment of transcriptional modules that cell-intrinsically target neurons to specific layers and cell-extrinsically recruit other neurons is a general mechanism for building layered networks of neural connections.


2017
Author(s): Ivan J. Santiago, Jing Peng, Curie Ahn, Burak Gür, Katja Sporar, ...

Laminar arrangement of neural connections is a fundamental feature of neural circuit organization. Identifying mechanisms that coordinate neural connections within correct layers is thus vital for understanding how neural circuits are assembled. In the medulla of the Drosophila visual system, neurons form connections within ten parallel layers. The M3 layer receives input from two neuron types that sequentially innervate M3 during development. Here we show that M3-specific innervation by both neurons is coordinated by Drosophila Fezf (dFezf), a conserved transcription factor that is selectively expressed by the earlier-targeting input neuron. In this cell, dFezf instructs layer specificity and activates the expression of a secreted molecule (Netrin) that regulates the layer specificity of the other input neuron. We propose that the employment of transcriptional modules that cell-intrinsically target neurons to specific layers and cell-extrinsically recruit other neurons is a general mechanism for building layered networks of neural connections.


2017, Vol 29 (8), pp. 2021-2029
Author(s): Josue Orellana, Jordan Rodu, Robert E. Kass

Much attention has been paid to the question of how Bayesian integration of information could be implemented by a simple neural mechanism. We show that population vectors based on point-process inputs combine evidence in a form that closely resembles Bayesian inference, with each input spike carrying information about the tuning of the input neuron. We also show that population vectors can combine information relatively accurately in the presence of noisy synaptic encoding of tuning curves.
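As a rough illustration of this idea (a minimal sketch, not the authors' code), the example below assumes a population of Poisson-spiking neurons with evenly spaced preferred directions and von Mises tuning to a circular stimulus; the tuning parameters are arbitrary. Each spike contributes a unit vector pointing at the preferred direction of the neuron that fired it, and the summed population vector lands close to the maximum-likelihood (flat-prior Bayesian) estimate of the stimulus.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: N neurons with von Mises tuning to a circular
# stimulus, preferred directions evenly spaced around the circle.
N = 100
prefs = np.linspace(-np.pi, np.pi, N, endpoint=False)

def rates(theta, gain=10.0, kappa=2.0):
    """Poisson firing rates under von Mises tuning curves (assumed shapes)."""
    return gain * np.exp(kappa * (np.cos(theta - prefs) - 1.0))

theta_true = 0.7                          # stimulus direction
counts = rng.poisson(rates(theta_true))   # point-process (Poisson) spike counts

# Population vector: every spike contributes a unit vector aimed at the
# preferred direction of the neuron that fired it.
pv = counts @ np.column_stack([np.cos(prefs), np.sin(prefs)])
theta_pv = np.arctan2(pv[1], pv[0])

# For comparison: the Poisson maximum-likelihood estimate under a flat
# prior, found by evaluating the log likelihood on a grid.
grid = np.linspace(-np.pi, np.pi, 1000)
loglik = np.array([np.sum(counts * np.log(rates(t)) - rates(t)) for t in grid])
theta_ml = grid[np.argmax(loglik)]

print(f"true {theta_true:.3f}  population vector {theta_pv:.3f}  ML {theta_ml:.3f}")
```

With uniformly spaced preferred directions, the score equation of the Poisson likelihood reduces (approximately) to the population vector angle, which is why the two estimates agree closely; the length of the population vector grows with the total spike count, i.e., with the weight of evidence.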


Author(s): Lionel Raff, Ranga Komanduri, Martin Hagan, Satish Bukkapatnam

In this section, we want to give a brief introduction to neural networks (NNs). It is written for readers who are not familiar with neural networks but are curious about how they can be applied to practical problems in chemical reaction dynamics. The field of neural networks covers a very broad area, so it is not possible to discuss all types of neural networks here. Instead, we will concentrate on the most common neural network architecture, namely, the multilayer perceptron (MLP). We will describe the basics of this architecture, discuss its capabilities, and show how it has been used on several different chemical reaction dynamics problems (for introductions to other types of networks, the reader is referred to References 105-107).

For the purposes of this document, we will look at neural networks as function approximators. As shown in Figure 3-1, we have some unknown function that we wish to approximate. We want to adjust the parameters of the network so that it will produce the same response as the unknown function when the same input is applied to both systems. For our applications, the unknown function may correspond to the relationship between the atomic structure variables and the resulting potential energy and forces.

The multilayer perceptron neural network is built up of simple components. We will begin with a single-input neuron, which we will then extend to multiple inputs. We will next stack these neurons together to produce layers. Finally, we will cascade the layers together to form the network.

A single-input neuron is shown in Figure 3-2. The scalar input p is multiplied by the scalar weight w to form wp, one of the terms that is sent to the summer. The other input, 1, is multiplied by a bias b and then passed to the summer. The summer output n, often referred to as the net input, goes into a transfer function f, which produces the scalar neuron output a.
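A minimal numeric sketch of this construction follows (illustrative values only, not code from the reference): first the single-input neuron a = f(wp + b) exactly as described above, then the same idea extended to a layer with a weight matrix and bias vector, and a small two-layer network.

```python
import numpy as np

# Single-input neuron: net input n = w*p + b, output a = f(n).
def neuron(p, w, b, f=np.tanh):
    return f(w * p + b)

print(neuron(p=2.0, w=0.5, b=-0.3))   # tanh(0.7) ≈ 0.604

# Extending the same idea: a layer applies a weight matrix W and bias
# vector b to an input vector p, and an MLP cascades such layers.
# Shapes and values here are arbitrary, for illustration only.
def layer(p, W, b, f=np.tanh):
    return f(W @ p + b)

rng = np.random.default_rng(0)
p = rng.normal(size=3)                                 # 3 inputs
W1, b1 = rng.normal(size=(5, 3)), rng.normal(size=5)   # hidden layer, 5 neurons
W2, b2 = rng.normal(size=(1, 5)), rng.normal(size=1)   # output layer, 1 neuron

a1 = layer(p, W1, b1)                  # hidden activations
a2 = layer(a1, W2, b2, f=lambda n: n)  # network output (identity transfer)
print(a2)
```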


2011, Vol 181-182, pp. 293-298
Author(s): Ji Min Yuan, Wei Gen Wu, Xin Yin

In reality, each of the roughly 10^11 neurons in the brain has, on average, 1000 synaptic connections with other neurons. As a step toward biologically faithful models, we consider the stability of a special discrete-time recurrent neural network model in which every neuron has exactly one input neuron. A main stability result is obtained, which provides a theoretical basis for applications.
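The paper's exact model and stability conditions are not reproduced here, so the following is only a hypothetical illustration of the connectivity the abstract describes: a discrete-time recurrent network in which each neuron receives input from exactly one other neuron (a ring), with parameters chosen so the update map is a contraction and the state converges.

```python
import numpy as np

# Hypothetical ring network (not the paper's model): each neuron i is
# driven by exactly one input neuron, its predecessor i-1 on the ring:
#   x_i(t+1) = a * x_i(t) + w * tanh(x_{i-1}(t))
N, a, w = 10, 0.5, 0.4
rng = np.random.default_rng(1)
x = rng.normal(size=N)

for t in range(200):
    x = a * x + w * np.tanh(np.roll(x, 1))

# With |a| + |w| < 1 and a 1-Lipschitz activation such as tanh, this
# update is a contraction, so trajectories converge to the unique fixed
# point (here the origin, since 0 maps to 0).
print(np.max(np.abs(x)))   # close to 0
```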

