output neuron
Recently Published Documents

TOTAL DOCUMENTS: 16 (last five years: 3)
H-INDEX: 5 (last five years: 1)

2019
Author(s): Omar Hafez, Benjamin Escribano, Jan Pielage, Ernst Niebur

Abstract
The formation of an ecologically useful lasting memory requires that the brain has an accurate internal representation of the surrounding environment. In addition, it must be able to integrate a variety of different sensory stimuli and associate them with rewarding and aversive behavioral outcomes. In recent years, a number of studies have dissected the anatomy and elucidated some of the working principles of the Drosophila mushroom body (MB), the fly's center for learning and memory. As a consequence, we now have a functional understanding of where and how sensory stimuli converge and are associated in the MB. However, the molecular and cellular dynamics at the critical synaptic intersection for this process, the Kenyon cell-mushroom body output neuron (KC-MBON) synapse, are largely unknown. Here, we introduce a first approach to understanding this integration process and the physiological changes occurring at the KC-MBON synapse during Kenyon cell (KC) activation. We use the published connectome of the Drosophila MB to construct a functional computational model of the MBON-α3-A dendritic structure. We simulate the synaptic input of individual KC-MBON synapses by current injections into precisely identified (μm-scale) local dendritic sections, and the input from a model population of KCs representing an odor by a spatially distributed cluster of current injections. By recording the effect of the simulated current injections on the membrane potential of the neuron, we show that MBON-α3-A is electrotonically compact. This suggests that odor-induced MBON activity is likely governed by input strength, while the positions of KC input synapses are largely irrelevant.
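The electrotonic-compactness test described in this abstract can be illustrated with a minimal sketch: a passive dendrite modeled as a chain of leaky compartments, with steady-state current injected at each compartment in turn while the voltage at the soma is recorded. All parameter values below (compartment count, conductances, injection amplitude) are dimensionless illustrative assumptions, not the study's reconstructed morphology or biophysics.

```python
# Minimal sketch of an electrotonic-compactness test on a passive
# dendrite modeled as a chain of leaky compartments. All values are
# illustrative assumptions in dimensionless units.
import numpy as np

n = 100          # number of dendritic compartments (assumed)
g_leak = 1.0     # leak conductance per compartment (assumed)
g_axial = 1e5    # axial conductance between neighbors (assumed)
i_inj = 1.0      # injected current (assumed)

# Steady-state conductance matrix G for the linear system G @ V = I.
G = np.zeros((n, n))
for k in range(n):
    G[k, k] += g_leak
    if k + 1 < n:
        G[k, k] += g_axial
        G[k + 1, k + 1] += g_axial
        G[k, k + 1] -= g_axial
        G[k + 1, k] -= g_axial

# Inject at each site in turn and record the depolarization at the
# soma (compartment 0).
soma_v = np.empty(n)
for site in range(n):
    I = np.zeros(n)
    I[site] = i_inj
    soma_v[site] = np.linalg.solve(G, I)[0]

ratio = soma_v.min() / soma_v.max()
print(f"soma response, farthest vs. nearest injection site: {ratio:.2f}")
# A ratio near 1 means the structure is electrotonically compact: the
# somatic response reflects input strength, not synapse position.
```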


Author(s): Chaoyou Fu, Liangchen Song, Xiang Wu, Guoli Wang, Ran He

Deep supervised hashing has become an active topic in information retrieval. It generates hashing bits through the output neurons of a deep hashing network. During binary discretization, there is often substantial redundancy between hashing bits, which degrades retrieval performance in terms of both storage and accuracy. This paper proposes a simple yet effective Neurons Merging Layer (NMLayer) for deep supervised hashing. A graph is constructed to represent the redundancy relationship between hashing bits and is used to guide the learning of the hashing network. Specifically, the graph is dynamically learned by a novel mechanism defined in our active and frozen phases. According to the learned relationship, the NMLayer merges redundant neurons together to balance the importance of each output neuron. Moreover, multiple NMLayers are progressively trained so that the deep hashing network learns a more compact hashing code from a long redundant code. Extensive experiments on four datasets demonstrate that our proposed method outperforms state-of-the-art hashing methods.
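The core merging step can be sketched in a few lines. The paper learns the redundancy graph dynamically during training through its active and frozen phases; the simplified post-hoc sketch below only illustrates the underlying idea of building a correlation graph over hashing bits and merging the redundant ones. The correlation threshold and the averaging rule are assumptions made for illustration, not the paper's mechanism.

```python
# Simplified post-hoc sketch of merging redundant hashing bits via a
# correlation graph. Threshold and averaging rule are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_bits = 1000, 16
H = np.tanh(rng.standard_normal((n_samples, n_bits)))  # pre-binarization outputs
H[:, 8] = 0.9 * H[:, 3] + 0.1 * rng.standard_normal(n_samples)  # planted redundancy

# Redundancy graph: an edge joins bits whose outputs are strongly correlated.
corr = np.corrcoef(H.T)
threshold = 0.8  # assumed
edges = [(i, j) for i in range(n_bits) for j in range(i + 1, n_bits)
         if corr[i, j] > threshold]

# Merge each redundant group into one neuron by averaging its outputs,
# shortening the code while balancing the merged neurons' importance.
merged, columns = set(), []
for i in range(n_bits):
    if i in merged:
        continue
    group = [i] + [j for (a, j) in edges if a == i and j not in merged]
    merged.update(group)
    columns.append(H[:, group].mean(axis=1))

codes = np.sign(np.stack(columns, axis=1))  # compact hashing bits
print(f"code length: {n_bits} -> {codes.shape[1]}")
```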


Author(s): K.E. Nikiruy, A.V. Emelyanov, V.V. Rylkov, A.V. Sitnikov, V.A. Demin

Abstract
Neuromorphic computing networks (NCNs) with synapses based on memristors (resistors with memory) can provide a much more effective approach to the device implementation of various network algorithms than implementations using traditional elements based on complementary (CMOS) technology. Effective NCN implementation requires that the memristor resistance can be changed according to local rules, e.g., spike-timing-dependent plasticity (STDP). We have studied the possibility of such local learning according to STDP rules in memristors based on the (Co_0.4 Fe_0.4 B_0.2)_x (LiNbO_3)_(1-x) composite. This possibility is demonstrated using an example NCN comprising four input neurons and one output neuron. It is established that the final state of this NCN is independent of its initial state and is determined entirely by the conditions of learning (the sequence of spikes). The dependence of the learning outcome on the threshold current of the output neuron has also been studied. The obtained results open prospects for creating autonomous NCNs capable of being trained to solve complex cognitive tasks.
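A minimal sketch of the four-input, one-output arrangement described here is given below, with a pair-based STDP rule standing in for the local conductance updates of the memristive synapses. The learning-window parameters, conductance bounds, and threshold current are assumptions for illustration; the paper's device physics is not modeled.

```python
# Sketch of a 4-input / 1-output network with a pair-based STDP rule
# acting on synaptic conductances. All parameters are assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_in = 4
g = rng.uniform(0.3, 0.7, n_in)          # memristor conductances (arbitrary units)
a_plus, a_minus, tau = 0.05, 0.05, 10.0  # STDP amplitudes and window (ms, assumed)
i_threshold = 1.2                        # output-neuron threshold current (assumed)

def stdp(dt):
    """Conductance change for spike-time difference dt = t_post - t_pre."""
    return a_plus * np.exp(-dt / tau) if dt >= 0 else -a_minus * np.exp(dt / tau)

for trial in range(200):
    t_pre = rng.uniform(0.0, 50.0, n_in)   # input spike times (ms)
    early = t_pre < 25.0                   # early inputs drive the output
    if g @ early < i_threshold:
        continue                           # drive below threshold: no output spike
    t_post = t_pre[early].max() + 1.0      # output fires just after its drive
    for k in range(n_in):
        g[k] = np.clip(g[k] + stdp(t_post - t_pre[k]), 0.0, 1.0)

print("learned conductances:", np.round(g, 2))
# Synapses whose inputs precede the output spike are potentiated and late
# ones are depressed, so the final state reflects the spike sequence used
# for training rather than the initial conductances, as the abstract reports.
```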


2017
Author(s): Katharina Eichler, Feng Li, Ashok Litwin-Kumar, Youngser Park, Ingrid Andrade, ...

Associating stimuli with positive or negative reinforcement is essential for survival, but a complete wiring diagram of a higher-order circuit supporting associative memory has not been previously available. We reconstructed one such circuit at synaptic resolution, the Drosophila larval mushroom body, and found that most Kenyon cells integrate random combinations of inputs but a subset receives stereotyped inputs from single projection neurons. This organization maximizes the performance of a model output neuron on a stimulus discrimination task. We also report a novel canonical circuit in each mushroom body compartment with previously unidentified connections: reciprocal Kenyon cell to modulatory neuron connections, modulatory neuron to output neuron connections, and a surprisingly high number of recurrent connections between Kenyon cells. Stereotyped connections between output neurons could enhance the selection of learned responses. The complete circuit map of the mushroom body should guide future functional studies of this learning and memory center.
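The modeling claim in this abstract can be illustrated with a short sketch: Kenyon cells (KCs) that pool random combinations of projection-neuron (PN) inputs produce a sparse code that lets a single model output neuron discriminate stimuli. The dimensions, sparsity level, and plasticity rule below are assumptions, not the paper's model.

```python
# Sketch: random PN -> KC wiring plus sparse coding supports stimulus
# discrimination by one output neuron. Parameters are assumptions.
import numpy as np

rng = np.random.default_rng(2)
n_pn, n_kc, n_stim = 20, 200, 40
W = (rng.random((n_kc, n_pn)) < 0.3).astype(float)  # random PN -> KC wiring
stimuli = rng.random((n_stim, n_pn))                # PN activity per stimulus
labels = rng.integers(0, 2, n_stim).astype(float)   # rewarded vs. punished

# Sparse KC code: only the top 5% most strongly driven KCs fire.
drive = stimuli @ W.T
thresh = np.quantile(drive, 0.95, axis=1)[:, None]
kc = (drive >= thresh).astype(float)

# Train the KC -> output weights with a perceptron rule, a stand-in for
# reinforcement-gated plasticity at the output synapse.
w = np.zeros(n_kc)
for _ in range(50):
    for x, y in zip(kc, labels):
        w += 0.1 * (y - float(w @ x > 0)) * x

acc = np.mean([float(w @ x > 0) == y for x, y in zip(kc, labels)])
print(f"discrimination accuracy on trained stimuli: {acc:.2f}")
```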


2012
Vol 27 (3), pp. 196-205
Author(s): Edward Gaten, Stephen J. Huston, Harold B. Dowse, Tom Matheson

2009
Vol 1 (2), pp. 73-86
Author(s): Julsam Julsam

This research applies a neural network technique to optimize a convolution operation with a 3x3 mask for removing the image blurring effect. The neural network consists of three layers: an input layer (9 input neurons), an output layer (1 output neuron), and a hidden layer. The hidden layer was tested with 3, 5, and 7 neurons, trained with the backpropagation technique. The results show that using five neurons in the hidden layer gives the highest rate of correctly recognized pixels (76.47%).
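The architecture described here (9 inputs from a 3x3 mask, one hidden layer, a single output neuron, trained with backpropagation) is small enough to sketch in full. The training data below are synthetic stand-ins, since the study's images and exact hyperparameters are not available from the abstract.

```python
# Sketch of a 9-input, one-hidden-layer, single-output network trained
# with backpropagation, comparing the hidden sizes tested in the study.
# Data and hyperparameters are synthetic assumptions.
import numpy as np

rng = np.random.default_rng(3)
X = rng.random((500, 9))                     # 3x3 pixel neighborhoods
y = (X.mean(axis=1) > 0.5).astype(float)     # synthetic target (assumed)

def train(n_hidden, epochs=500, lr=0.5):
    W1 = rng.standard_normal((9, n_hidden)) * 0.5
    W2 = rng.standard_normal((n_hidden, 1)) * 0.5
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        h = sigmoid(X @ W1)                  # hidden activations
        out = sigmoid(h @ W2)                # single output neuron
        d_out = (out - y[:, None]) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out / len(X)      # backpropagation updates
        W1 -= lr * X.T @ d_h / len(X)
    return np.mean((sigmoid(sigmoid(X @ W1) @ W2) > 0.5).ravel() == y)

for n_hidden in (3, 5, 7):                   # hidden sizes tested in the study
    print(f"{n_hidden} hidden neurons: accuracy {train(n_hidden):.2%}")
```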


2005
Vol 15 (01n02), pp. 23-30
Author(s): Tadashi Yamazaki, Shigeru Tanaka

We studied the dynamics of a neural network that has both recurrent excitatory and random inhibitory connections. Neurons started to become active when a relatively weak transient excitatory signal was presented, and the activity was sustained by the recurrent excitatory connections. The sustained activity stopped when a strong transient signal was presented or when neurons were disinhibited. The random inhibitory connections modulated the activity patterns of neurons so that the patterns evolved over time without recurring. Hence, the passage of time between the onsets of the two transient signals was represented by the sequence of activity patterns. We then applied this model to trace eyeblink conditioning, which is mediated by the hippocampus. We regarded the model network as area CA3 of the hippocampus and considered an output neuron corresponding to a neuron in CA1. The activity pattern of the output neuron was similar to the experimentally observed activity of CA1 neurons during trace eyeblink conditioning.
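The dynamics described here can be sketched as follows: units sustained by uniform recurrent excitation while fixed random inhibitory connections reshape which units are active, so the activity pattern drifts over time and encodes the time elapsed since the transient signal. All parameters, and the k-winners-take-all activation, are illustrative assumptions, not the paper's equations.

```python
# Sketch: sustained activity whose pattern drifts under random
# inhibition, encoding elapsed time. Parameters are assumptions.
import numpy as np

rng = np.random.default_rng(4)
n, k, steps = 200, 60, 50                      # units, active units, time steps
W_inh = rng.random((n, n)) * (rng.random((n, n)) < 0.2)  # random inhibition

x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = 1.0       # weak transient signal starts activity

patterns = [x.copy()]
for t in range(steps):
    drive = x.sum() - W_inh @ x                # uniform excitation minus inhibition
    drive += 0.5 * rng.standard_normal(n)      # noise keeps the pattern evolving
    winners = np.argsort(-drive)[:k]           # the k most-driven units stay active
    x = np.zeros(n)
    x[winners] = 1.0
    patterns.append(x.copy())

# Overlap with the initial pattern decays, so each step has a distinct
# pattern that a CA1-like output neuron could learn to respond to.
overlap = [p @ patterns[0] / k for p in patterns]
print("overlap with t=0 pattern every 10 steps:", np.round(overlap[::10], 2))
```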

