Temporal Sparseness of the Premotor Drive Is Important for Rapid Learning in a Neural Network Model of Birdsong

2004, Vol 92 (4), pp. 2274-2282
Author(s): Ila R. Fiete, Richard H.R. Hahnloser, Michale S. Fee, H. Sebastian Seung

Sparse neural codes have been widely observed in cortical sensory and motor areas. A striking example of sparse temporal coding is in the song-related premotor area high vocal center (HVC) of songbirds: the motor neurons innervating avian vocal muscles are driven by premotor nucleus robustus archistriatalis (RA), which is in turn driven by nucleus HVC. Recent experiments reveal that RA-projecting HVC neurons fire just one burst per song motif. However, the function of this remarkable temporal sparseness has remained unclear. Because birdsong is a clear example of a learned, complex motor behavior, we use numerical and analytical techniques in a neural network model to explore the possible role of sparse premotor neural codes in song-related motor learning. In numerical simulations with nonlinear neurons, as HVC activity is made progressively less sparse, the minimum learning time increases significantly. Heuristically, this slowdown arises from increasing interference in the weight updates for different synapses. If activity in HVC is sparse, synaptic interference is reduced; it is minimized when each synapse from HVC to RA is used only once per motif, which is the situation observed experimentally. Our numerical results are corroborated by a theoretical analysis of learning in linear networks, for which we derive a relationship between sparse activity, synaptic interference, and learning time. If songbirds acquire their songs under significant pressure to learn quickly, this study predicts that HVC activity, currently measured only in adults, should also be sparse during the sensorimotor phase in the juvenile bird. We discuss the relevance of these results, linking sparse codes and learning speed, to other multilayered sensory and motor systems.
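To make the interference heuristic concrete, here is a minimal NumPy sketch, not the paper's model: a linear HVC-to-RA map trained with a batch delta rule on a realizable target, with learning time measured as the number of motif repetitions needed to reach a fixed relative error. The network sizes, random burst placement, step-size rule, and tolerance are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_hvc, n_ra = 50, 50, 20          # time bins per motif, HVC units, RA units

def hvc_activity(bursts_per_unit):
    """Each HVC unit is active in `bursts_per_unit` randomly chosen time bins."""
    H = np.zeros((T, n_hvc))
    for j in range(n_hvc):
        H[rng.choice(T, size=bursts_per_unit, replace=False), j] = 1.0
    return H

def motifs_to_learn(H, rel_tol=1e-3, max_motifs=30_000):
    """Delta-rule learning time for a realizable target. A unit active in many
    bins pools error signals from all of them, and updates for different units
    mix through the off-diagonal entries of H.T @ H (synaptic interference)."""
    target = H @ rng.standard_normal((n_hvc, n_ra))   # realizable "tutor" output
    eta = 1.0 / np.linalg.eigvalsh(H.T @ H).max()     # largest safe step size
    W = np.zeros((n_hvc, n_ra))
    mse0 = np.mean(target ** 2)
    for motif in range(1, max_motifs + 1):
        err = target - H @ W
        if np.mean(err ** 2) < rel_tol * mse0:
            return motif
        W += eta * H.T @ err       # each synapse sums its updates over the motif
    return max_motifs              # did not converge within the budget

for k in (1, 5, 20):               # k = 1 burst per motif mimics sparse HVC firing
    print(f"bursts per HVC unit = {k:2d} -> motif repetitions to learn: "
          f"{motifs_to_learn(hvc_activity(k))}")
```

In this toy setting the sparse code (one burst per unit) converges in a handful of repetitions, while denser codes force a smaller stable step size and mix corrections across time bins, so learning takes far longer.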

1999, Vol 11 (1), pp. 103-116
Author(s): Dean V. Buonomano, Michael Merzenich

Numerous studies have suggested that the brain may encode information in the temporal firing pattern of neurons. However, little is known regarding how information may come to be temporally encoded and about the potential computational advantages of temporal coding. Here, it is shown that local inhibition may underlie the temporal encoding of spatial images. As a result of inhibition, the response of a given cell can be significantly modulated by stimulus features outside its own receptive field. Feedforward and lateral inhibition can modulate both the firing rate and temporal features, such as latency. In this article, it is shown that a simple neural network model can use local inhibition to generate temporal codes of handwritten numbers. The temporal encoding of a spatial pattern has the interesting and computationally beneficial feature of exhibiting position invariance. This work demonstrates a manner by which the nervous system may generate temporal codes and shows that temporal encoding can be used to create position-invariant codes.
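As a toy illustration of this recoding (not the handwritten-digit network studied in the paper), the sketch below drives a one-dimensional layer of units with a small spatial pattern, subtracts pooled local inhibition from each unit's drive, and converts the net drive into a spike latency; shifting the pattern changes which units fire but not the pattern of relative latencies. The inhibition weight, threshold, latency rule, and wraparound neighborhood are illustrative assumptions.

```python
import numpy as np

def latencies(image, w_inh=0.15, tau=10.0, theta=0.2, radius=2):
    """Spike latency of each unit (arbitrary time units); np.inf means no spike."""
    n = len(image)
    lat = np.full(n, np.inf)
    for i in range(n):
        # local feedforward inhibition pooled from the unit's neighborhood (wraparound)
        neighbors = [image[(i + d) % n] for d in range(-radius, radius + 1) if d != 0]
        drive = image[i] - w_inh * sum(neighbors)
        if drive > theta:                 # suprathreshold units fire ...
            lat[i] = tau / drive          # ... earlier the stronger their net drive
    return lat

pattern = np.zeros(20)
pattern[3:8] = [0.4, 0.9, 1.0, 0.7, 0.3]      # a small spatial "image"
shifted = np.roll(pattern, 7)                 # the same image at another position

lat_a, lat_b = latencies(pattern), latencies(shifted)
code_a = np.sort(lat_a[np.isfinite(lat_a)])   # the temporal code: relative latencies
code_b = np.sort(lat_b[np.isfinite(lat_b)])
print("relative latencies, original:", np.round(code_a - code_a[0], 2))
print("relative latencies, shifted :", np.round(code_b - code_b[0], 2))
print("position invariant:", np.allclose(code_a - code_a[0], code_b - code_b[0]))
```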


1994, Vol 6 (1), pp. 38-55
Author(s): Dean V. Buonomano, Michael D. Mauk

Substantial evidence has established that the cerebellum plays an important role in the generation of movements. An important aspect of motor output is its timing in relation to external stimuli or to other components of a movement. Previous studies suggest that the cerebellum plays a role in the timing of movements. Here we describe a neural network model based on the synaptic organization of the cerebellum that can generate timed responses in the range of tens of milliseconds to seconds. In contrast to previous models, temporal coding emerges from the dynamics of the cerebellar circuitry and does not depend on conduction delays, arrays of elements with different time constants, or populations of elements oscillating at different frequencies. Instead, time is extracted from the instantaneous granule cell population vector. The subset of active granule cells is time-varying due to the granule-Golgi-granule cell negative feedback. We demonstrate that the population vector of simulated granule cell activity exhibits dynamic, nonperiodic trajectories in response to a periodic input. With time encoded in this manner, the output of the network at a particular interval following the onset of a stimulus can be altered selectively by changing the strength of granule → Purkinje cell connections for those granule cells that are active during the target time window. The memory of the reinforcement at that interval is subsequently expressed as a change in Purkinje cell activity that is appropriately timed with respect to stimulus onset. Thus, the present model demonstrates that a network based on cerebellar circuitry can learn appropriately timed responses by encoding time as the population vector of granule cell activity.
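The readout half of this argument can be sketched in a few lines. The sketch below replaces the granule-Golgi dynamics with a reproducible, slowly drifting binary population vector, so it illustrates only how a timed response is learned from the population code, not how the trajectory itself arises; the population size, drift rate, target interval, and depression factor are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n_gr, T = 500, 100          # granule cells, time steps after stimulus onset

# Stand-in for the granule population trajectory: a sparse binary state whose
# active subset drifts slowly, so nearby time steps share some active cells.
G = np.zeros((T, n_gr))
active = set(rng.choice(n_gr, 50, replace=False))
for t in range(T):
    dropped = rng.choice(sorted(active), 5, replace=False)          # slow turnover
    active = (active - set(dropped)) | set(rng.choice(n_gr, 5, replace=False))
    G[t, sorted(active)] = 1.0

w = np.ones(n_gr)                        # granule -> Purkinje weights
baseline = G @ w                         # Purkinje drive before learning

target_t = 60                            # the interval to be learned (time steps)
ltd = 0.8                                # fractional depression of eligible synapses
w_learned = w.copy()
w_learned[G[target_t] > 0] *= 1.0 - ltd  # LTD for cells active at the target time
response = G @ w_learned

dip = baseline - response                # reduction in Purkinje drive over time
print(f"largest depression of Purkinje drive at t = {int(np.argmax(dip))} "
      f"(target interval was t = {target_t})")
```

Because nearby time steps share part of their active subset, the learned depression is largest exactly at the trained interval and falls off on either side, mirroring the selectively timed change in Purkinje activity described above.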


2020, Vol 23 (6), pp. 115-132
Author(s): D. M. Dudarenko, P. A. Smirnov

Purpose of research. The main purpose of this work is to increase the efficiency of a neural network model for navigating a mobile robotic platform in static and dynamically generated environments.
Methods. To solve this problem, careful tuning and optimization of the neural network hyperparameters were proposed. To encourage agents to explore the environment, the reward system was adjusted so that the reward grows as the distance from the agent to the target point decreases, while the penalty grows when the agent moves away from the end point and with each subsequent scene. This distribution of rewards and penalties encourages agents to learn actively and helps to reduce the total number of scenes. To reduce the amount of data processed by the neural network, normalization of the input vectors was introduced. The training time of the neural network model was reduced by training agents in parallel, which increases the experience gained from exploring the environment.
Results. The proposed approach reduced the training time by 30% and improved the navigation efficiency of the mobile platform by 10% in a dynamically generated environment and by 22% in a static environment compared with the non-optimized model.
Conclusion. The proposed solution can be used in conjunction with other tracing and navigation methods, with the trained neural network running alongside established navigation algorithms; for example, the mobile platform may invoke the trained network only to correct its position in space and to prevent collisions with other objects.
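As a rough illustration of the kind of reward shaping and input normalization described above, the sketch below rewards reductions in the distance to the target, penalizes moves away from it more strongly, and rescales range-sensor readings and the relative target position to comparable magnitudes. The coefficients, observation layout, and function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def shaped_reward(prev_dist, new_dist, reached, collided,
                  step_gain=1.0, step_penalty=1.5, goal_bonus=10.0, crash_cost=10.0):
    """Reward closing the distance to the target; penalize moving away more
    strongly, so exploration is biased toward the goal (coefficients assumed)."""
    delta = prev_dist - new_dist                 # > 0 means the agent got closer
    r = step_gain * delta if delta > 0 else step_penalty * delta
    if reached:
        r += goal_bonus
    if collided:
        r -= crash_cost
    return r

def normalize_observation(ranges, max_range, agent_pos, target_pos, arena_size):
    """Scale range-sensor readings and the relative target position so the
    network sees inputs of comparable magnitude."""
    ranges = np.clip(np.asarray(ranges, dtype=float), 0.0, max_range) / max_range
    rel = (np.asarray(target_pos, dtype=float) - np.asarray(agent_pos)) / arena_size
    return np.concatenate([ranges, rel])

# Example step: the agent moved from 4.0 m to 3.2 m away from the target.
print(shaped_reward(prev_dist=4.0, new_dist=3.2, reached=False, collided=False))
print(normalize_observation([1.0, 2.5, 7.0], max_range=5.0,
                            agent_pos=(2.0, 3.0), target_pos=(8.0, 1.0), arena_size=10.0))
```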

