Neurodynamics and Neural Networks

Author(s):  
David D. Nolte

Individual neurons are modelled as nonlinear oscillators that rely on bistability and homoclinic orbits to produce spiking potentials. Simplified mathematical models, like the FitzHugh–Nagumo and NaK models, capture successively more sophisticated behavior of individual neurons, such as thresholds and spiking. Artificial neurons are introduced that combine three simple features: summation of inputs, comparison against a threshold, and saturating output. Artificial networks of neurons are defined through specific network architectures that include the perceptron, feedforward networks with hidden layers trained using the Delta Rule, and recurrent networks with feedback. A prominent example of a recurrent network is the Hopfield network, which performs operations such as associative recall. The dynamic trajectories of the Hopfield network have basins of attraction in state space that correspond to stored memories.
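
The three ingredients of the artificial neuron (summation, threshold, saturation) and Hopfield-style associative recall are concrete enough to sketch in code. The following is a minimal illustration in Python/NumPy; the Hebbian storage rule and the two stored 8-bit patterns are standard textbook choices, not taken from the text above.

```python
import numpy as np

# Hebbian storage of bipolar patterns in a Hopfield network
def train_hopfield(patterns):
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)              # no self-connections
    return W / patterns.shape[0]

def recall(W, state, sweeps=20):
    # each neuron sums its inputs, compares to a threshold (zero here),
    # and saturates to +1 or -1: the three features named above
    state = state.copy()
    for _ in range(sweeps):
        for i in np.random.permutation(len(state)):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# store two 8-bit patterns, then recall from a corrupted cue
patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, 1, 1, -1, -1, -1, -1]])
W = train_hopfield(patterns)
cue = patterns[0].copy()
cue[:2] *= -1                           # flip two bits
print(recall(W, cue))                   # falls back into the stored memory
```

The corrupted cue lies inside the basin of attraction of the first stored pattern, so the trajectory settles on that memory, which is the associative recall described above.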

2003 ◽  
Vol 15 (8) ◽  
pp. 1897-1929 ◽  
Author(s):  
Barbara Hammer ◽  
Peter Tiňo

Recent experimental studies indicate that recurrent neural networks initialized with “small” weights are inherently biased toward definite memory machines (Tiňo, Čerňanský, & Beňušková, 2002a, 2002b). This article establishes a theoretical counterpart: the transition function of a recurrent network with small weights and a squashing activation function is a contraction. We prove that recurrent networks with a contractive transition function can be approximated arbitrarily well on input sequences of unbounded length by a definite memory machine. Conversely, every definite memory machine can be simulated by a recurrent network with a contractive transition function. Hence, initialization with small weights induces an architectural bias into learning with recurrent neural networks. This bias might have benefits from the point of view of statistical learning theory: it emphasizes one possible region of the weight space where generalization ability can be formally proved. It is well known that standard recurrent neural networks are not distribution-independent learnable in the probably approximately correct (PAC) sense if arbitrary precision and inputs are considered. We prove that recurrent networks with a contractive transition function and a fixed contraction parameter fulfill the so-called distribution-independent uniform convergence of empirical distances property and hence, unlike general recurrent networks, are distribution-independent PAC learnable.
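
The contraction property is easy to illustrate numerically. In the sketch below (a demonstration, not the paper's construction), two different hidden states driven by the identical input stream collapse toward one another; the tanh squashing function and the weight scale of 0.05 are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid = 3, 10
scale = 0.05                             # "small" initial weights
W_in  = scale * rng.standard_normal((n_hid, n_in))
W_rec = scale * rng.standard_normal((n_hid, n_hid))

def step(h, x):
    # squashing activation; for small recurrent weights this map is a
    # contraction in the hidden state h
    return np.tanh(W_rec @ h + W_in @ x)

h1 = rng.standard_normal(n_hid)          # two arbitrary initial states
h2 = rng.standard_normal(n_hid)
for t in range(30):
    x = rng.standard_normal(n_in)        # same input fed to both copies
    h1, h2 = step(h1, x), step(h2, x)
print(np.linalg.norm(h1 - h2))           # distance shrinks toward zero
```

Because initial conditions are forgotten at a geometric rate, only a bounded window of recent inputs influences the state, which is exactly the definite-memory behavior established in the article.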


2006 ◽  
Vol 15 (04) ◽  
pp. 623-650
Author(s):  
JUDY A. FRANKLIN

Recurrent (neural) networks have been deployed as models for learning musical processes by computational scientists who study dynamical systems. Over time, as the state of the art in recurrent networks has improved, more intricate music has been learned. One particular recurrent network, the Long Short-Term Memory (LSTM) network, shows promise for learning long songs and generating new ones. We experiment with a module containing two inter-recurrent LSTM networks that cooperatively learn several human melodies, based on the songs' harmonic structures and on the feedback inherent in the network. We show that these networks can learn to reproduce four human melodies. We then present new harmonizations as input in order to generate new songs. We describe the reharmonizations and show the new melodies that result. We also present a hierarchical structure for using reinforcement learning to choose among LSTM modules during the course of melody generation.
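
As a rough sketch of the kind of model involved, the PyTorch fragment below conditions next-note prediction on a harmonic context. It is a single-network simplification: the paper's module of two inter-recurrent LSTM networks is not reproduced, and the 12 pitch classes and 24 chord classes are illustrative assumptions.

```python
import torch
import torch.nn as nn

# hedged sketch: one LSTM predicting the next pitch class from the melody
# so far plus the current chord (the harmonic structure)
class MelodyLSTM(nn.Module):
    def __init__(self, n_pitch=12, n_chord=24, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_pitch + n_chord, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_pitch)

    def forward(self, notes, chords):
        x = torch.cat([notes, chords], dim=-1)   # condition on harmony
        h, _ = self.lstm(x)
        return self.out(h)                       # next-note logits per step

model = MelodyLSTM()
notes = torch.zeros(1, 16, 12)                   # 16 time steps, one-hot pitch
chords = torch.zeros(1, 16, 24)                  # matching chord sequence
print(model(notes, chords).shape)                # torch.Size([1, 16, 12])
```

Feeding a trained model of this kind a new chord sequence (a reharmonization) and sampling from the logits would generate a new melody, mirroring the procedure described above.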


2006 ◽  
Vol 18 (3) ◽  
pp. 591-613 ◽  
Author(s):  
Peter Tiňo ◽  
Ashely J. S. Mills

We investigate possibilities of inducing temporal structures without fading memory in recurrent networks of spiking neurons operating strictly in the pulse-coding regime. We extend the existing gradient-based algorithm for training feedforward spiking neuron networks, SpikeProp (Bohte, Kok, & La Poutré, 2002), to recurrent network topologies, so that temporal dependencies in the input stream are taken into account. It is shown that temporal structures with unbounded input memory specified by simple Moore machines (MM) can be induced by recurrent spiking neuron networks (RSNN). The networks are able to discover pulse-coded representations of abstract information-processing states coding potentially unbounded histories of processed inputs. We show that it is often possible to extract the target MM from a trained RSNN by grouping together similar spike trains appearing in the recurrent layer. Even when the target MM was not perfectly induced in an RSNN, the extraction procedure was able to reveal weaknesses of the induced mechanism and the extent to which the target machine had been learned.
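
The extraction step, grouping similar recurrent-layer spike trains into abstract states, can be caricatured with ordinary clustering. The sketch below is only an illustration: the synthetic spike-time recordings, the single spike per neuron, and the use of k-means are assumptions, not the paper's procedure.

```python
import numpy as np
from sklearn.cluster import KMeans

# fake recordings: 60 input presentations x 5 recurrent neurons, each
# characterized by one spike time; three underlying processing states
rng = np.random.default_rng(1)
spike_times = np.vstack([rng.normal(m, 0.1, (20, 5))
                         for m in (1.0, 2.0, 3.0)])

# group similar spike trains; each cluster is read as one abstract state
# of the induced Moore machine
states = KMeans(n_clusters=3, n_init=10).fit_predict(spike_times)
print(np.bincount(states))               # ~20 recordings per extracted state
```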


2018 ◽  
pp. 33-35
Author(s):  
Leit Akhmed Mustafa Al Ravashdekh ◽  
I. Ruzhentsev

Based on an analysis of the application of satellite navigation systems to determining the position of moving traffic objects, this work proposes processing the digital measurement data with an artificial neural network. To model the resulting nonlinear dynamic system, it is suggested to use recurrent network architectures together with a learning algorithm based on the theory of Kalman filters.
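
For context, the sketch below applies a minimal one-dimensional constant-velocity Kalman filter to noisy position fixes of a moving object; in the work itself the Kalman machinery is applied to training the recurrent network, so all matrices and noise levels here are illustrative assumptions.

```python
import numpy as np

dt = 1.0
F = np.array([[1, dt], [0, 1]])          # state transition: position, velocity
H = np.array([[1.0, 0.0]])               # only position is measured
Q = 0.01 * np.eye(2)                     # process noise covariance
R = np.array([[4.0]])                    # measurement noise covariance (m^2)

x = np.zeros(2)                          # state estimate
P = np.eye(2)                            # estimate covariance
rng = np.random.default_rng(0)
for t in range(50):
    z = 2.0 * t + rng.normal(0, 2.0)     # noisy fix of an object at 2 m/s
    x = F @ x                            # predict
    P = F @ P @ F.T + Q
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    x = x + K.flatten() * (z - H @ x)    # update with the measurement
    P = (np.eye(2) - K @ H) @ P
print(x)                                 # estimated [position, velocity]
```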


2015 ◽  
Author(s):  
Julijana Gjorgjieva ◽  
Jan Felix Evers ◽  
Stephen Eglen

Developing neuronal networks display spontaneous rhythmic bursts of action potentials that are necessary for circuit organization and tuning. While spontaneous activity has been shown to instruct map formation in sensory circuits, it is unknown whether it plays a role in the organization of motor networks that produce rhythmic output. Using computational modeling, we investigate how recurrent networks of excitatory and inhibitory neuronal populations assemble to produce robust patterns of unidirectional and precisely timed propagating activity during organism locomotion. One example is provided by the motor network in Drosophila larvae, which generates propagating peristaltic waves of muscle contractions during crawling. We examine two activity-dependent models which tune weak network connectivity based on spontaneous activity patterns: a Hebbian model, where coincident activity in neighboring populations strengthens connections between them; and a homeostatic model, where connections are homeostatically regulated to maintain a constant level of excitatory activity based on spontaneous input. The homeostatic model tunes network connectivity to generate robust activity patterns with the appropriate timing relationships between neighboring populations. These timing relationships can be modulated by the properties of spontaneous activity, suggesting an instructive role in generating functional variability in network output. In contrast, the Hebbian model fails to produce the tight timing relationships between neighboring populations required for unidirectional activity propagation, even when additional assumptions are imposed to constrain synaptic growth. These results argue that homeostatic mechanisms are more likely than Hebbian mechanisms to tune weak connectivity based on local activity patterns in a recurrent network for rhythm generation and propagation.
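
The contrast between the two rules can be caricatured in a few lines. In the sketch below (illustrative parameters, not the paper's model), both rules tune the feedforward weights of a chain driven by random spontaneous activity: the Hebbian weights grow without bound unless extra constraints are imposed, while the homeostatic weights settle where postsynaptic activity matches the set point.

```python
import numpy as np

rng = np.random.default_rng(0)
n, eta, target = 10, 0.05, 0.5           # populations, learning rate, set point
w_hebb = 0.1 * np.ones(n - 1)            # weak initial chain connectivity
w_homeo = 0.1 * np.ones(n - 1)

for episode in range(500):
    r_pre = rng.random(n - 1)            # spontaneous presynaptic activity
    # Hebbian: coincident pre- and postsynaptic activity strengthens links
    w_hebb += eta * r_pre * (w_hebb * r_pre)
    # homeostatic: regulate toward a constant level of excitatory activity
    w_homeo += eta * (target - w_homeo * r_pre)

print(w_hebb.max())                      # runs away without a growth constraint
print(w_homeo.round(2))                  # settles so that w * E[r] ~ target
```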


1996 ◽  
Vol 07 (03) ◽  
pp. 287-304
Author(s):  
DONQ-LIANG LEE ◽  
WEN-JUNE WANG

Based on the natural structure of Kosko’s Bidirectional Associative Memories (BAM), a high-performance, high-capacity associative neural model is proposed that is capable of simultaneous hetero-associative recall. The proposed model, the Modified Bidirectional Decoding Strategy (MBDS), improves the recall rate by adding association fascicles to Kosko’s BAM. The association fascicles are sparse-coding neuron structures that provide activating strengths between two neuron fields (say, field X and field Y). Sufficient conditions for a state to become an equilibrium state of the MBDS network are derived. Based on these results, we discuss the basins of attraction of the training pairs in one iteration. An upper bound on the number of error bits that can be tolerated by MBDS is also derived. Because the attractivity of a stored training pair can be increased markedly with the aid of its corresponding association fascicles, we recommend a high-capacity realization of MBDS, the Bidirectional Holographic Memory (BHM), in which each training pair is stored uniquely and directly in the connection weights rather than encoded in a correlation matrix. Finally, computer simulations of three different realizations of MBDS demonstrate their attractivity and verify our results.
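
For orientation, a minimal Kosko-style BAM is sketched below: bipolar pattern pairs are stored in a single correlation matrix and recalled by thresholded bidirectional passes. The association fascicles of MBDS and the BHM realization are not reproduced, and the two stored pairs are illustrative.

```python
import numpy as np

def sign(v):
    return np.where(v >= 0, 1, -1)       # bipolar threshold

X = np.array([[1, -1, 1, -1], [1, 1, -1, -1]])   # field X patterns
Y = np.array([[1, 1, -1], [-1, 1, 1]])           # associated field Y patterns
W = sum(np.outer(x, y) for x, y in zip(X, Y))    # correlation encoding

x = np.array([1, -1, 1, 1])              # cue with one bit flipped
for _ in range(5):                       # bidirectional passes until stable
    y = sign(W.T @ x)                    # field X drives field Y
    x = sign(W @ y)                      # field Y drives field X back
print(x, y)                              # settles on the stored pair
```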


2020 ◽  
Vol 2 (1-2) ◽  
pp. 69-96 ◽  
Author(s):  
Alexander Jakob Dautel ◽  
Wolfgang Karl Härdle ◽  
Stefan Lessmann ◽  
Hsin-Vonn Seow

Abstract Deep learning has substantially advanced the state of the art in computer vision, natural language processing, and other fields. This paper examines the potential of deep learning for exchange rate forecasting. We systematically compare long short-term memory networks and gated recurrent units to traditional recurrent network architectures as well as feedforward networks in terms of their directional forecasting accuracy and the profitability of trading on the models' predictions. Empirical results indicate the suitability of deep networks for exchange rate forecasting in general but also evidence the difficulty of implementing and tuning the corresponding architectures. Especially with regard to trading profit, a simpler neural network may perform as well as, if not better than, a more complex deep neural network.
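
The architectural comparison can be sketched by attaching the same classification head to an LSTM, a GRU, and a simple (Elman) recurrent cell and predicting the direction of the next return. The synthetic data, the hidden size of 32, and the window length of 20 below are assumptions for illustration, not the paper's experimental setup.

```python
import torch
import torch.nn as nn

def make_model(cell):
    rnn = {"lstm": nn.LSTM, "gru": nn.GRU, "rnn": nn.RNN}[cell]
    return rnn(1, 32, batch_first=True), nn.Linear(32, 1)

returns = torch.randn(64, 20, 1)                 # 64 windows of 20 returns
direction = (torch.randn(64, 1) > 0).float()     # up/down labels (synthetic)

for cell in ("lstm", "gru", "rnn"):
    rnn, head = make_model(cell)
    h, _ = rnn(returns)
    logits = head(h[:, -1])                      # last hidden state -> direction
    loss = nn.functional.binary_cross_entropy_with_logits(logits, direction)
    print(cell, float(loss))
```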


2002 ◽  
Vol 12 (03n04) ◽  
pp. 247-262 ◽  
Author(s):  
ROELOF K. BROUWER

This paper defines the truncated normalized max product operation for the transformation of states of a network and provides a method for solving a set of equations based on this operation. The operation serves as the transformation for the set of fully connected units in a recurrent network that might otherwise consist of linear threshold units. Component values of the state vector and outputs of the units take on values in the set {0, 0.1, …, 0.9, 1}. The result is a much larger state space, for a given number of units and size of connection matrix, than for a network based on threshold units. Since the operation defined here can form the basis of transformations in a recurrent network with a finite number of states, fixed points or cycles are possible, and the network based on this operation can be used as an associative memory or pattern classifier, with fixed points taking on the role of prototypes. Discrete fully recurrent networks have proven themselves very useful as associative memories and as classifiers. However, they are often based on units with binary states. The effect of this is that data consisting of vectors in ℝ^n must be converted to vectors in {0, 1}^m with m much larger than n, since binary encoding based on positional notation is not feasible; this implies a large increase in the number of components. The effect can be lessened by allowing more states for each unit, as in our network. As the simulations show, the proposed network demonstrates very well the properties desirable in an associative memory.
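
The abstract does not spell out the operation, so the following NumPy sketch encodes only one plausible reading: each unit takes the maximum of its weighted inputs, the state vector is normalized by its largest component, truncated to [0, 1], and quantized to the 11-level set {0, 0.1, …, 1}. The formula, the random weights, and the iteration count are all assumptions.

```python
import numpy as np

def max_product_step(W, s):
    # assumed reading of "truncated normalized max product"
    v = (W * s).max(axis=1)              # max of weighted inputs per unit
    if v.max() > 0:
        v = v / v.max()                  # normalize by largest component
    v = np.clip(v, 0.0, 1.0)             # truncate
    return np.round(v * 10) / 10         # quantize to {0, 0.1, ..., 1}

rng = np.random.default_rng(0)
W = rng.random((6, 6))                   # fully connected units
s = np.round(rng.random(6), 1)           # initial 11-level state
for _ in range(10):                      # iterate toward a fixed point/cycle
    s = max_product_step(W, s)
print(s)
```

With finitely many states (11^6 here), every trajectory must eventually reach a fixed point or a cycle, which is why fixed points can serve as stored prototypes.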


Author(s):  
Zishu Gao ◽  
En Li ◽  
Zhe Wang ◽  
Guodong Yang ◽  
Jiwu Lu ◽  
...  

Abstract The application of traditional 3D reconstruction methods, such as structure-from-motion and simultaneous localization and mapping, is typically limited by illumination conditions, surface textures, and wide-baseline viewpoints in the field of robotics. To solve this problem, many researchers have applied learning-based methods with convolutional neural network architectures. However, simply using convolutional neural networks without further measures is computationally intensive, and the results are not satisfactory. In this study, to obtain the most informative images for reconstruction, we introduce a residual block into a 2D encoder for improved feature extraction, and propose an attentive latent unit that makes it possible to select the most informative image to feed into the network rather than choosing one at random. The recurrent visual attentive network is injected into the auto-encoder network using reinforcement learning. The recurrent visual attentive network pays more attention to useful images, and the agent quickly predicts the 3D volume. The model is evaluated on both single- and multi-view reconstruction. The experimental results show that the recurrent visual attentive network improves prediction performance beyond alternative methods, and that our model has a desirable capacity for generalization.
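
The core idea of the attentive latent unit, scoring candidate views and feeding the most informative one into the network rather than a random one, can be caricatured as below. The scorer architecture, the latent dimension of 256, and the eight candidate views are assumptions; the reinforcement-learning training of the attention module is omitted.

```python
import torch
import torch.nn as nn

latents = torch.randn(8, 256)            # encoder codes for 8 candidate views
scorer = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 1))
scores = scorer(latents).squeeze(-1)     # informativeness score per view
best = scores.argmax()                   # choose, don't sample at random
print(int(best), latents[best].shape)    # index of the selected view's code
```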


2001 ◽  
Vol 6 (2) ◽  
pp. 69-99 ◽  
Author(s):  
Carl Chiarella ◽  
Roberto Dieci ◽  
Laura Gardini

In this paper we consider a model of the dynamics of speculative markets involving the interaction of fundamentalists and chartists. The dynamics of the model are driven by a two-dimensional map that in the space of the parameters displays regions of invertibility and noninvertibility. The paper focuses on a study of local and global bifurcations which drastically change the qualitative structure of the basins of attraction of several, often coexistent, attracting sets. We make use of the theory of critical curves associated with noninvertible maps, as well as of homoclinic bifurcations and homoclinic orbits of saddles in regimes of invertibility.
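
As generic background for this kind of analysis, the sketch below labels a grid of initial conditions by the long-run behavior of their orbits under a stand-in two-dimensional map; the actual fundamentalist-chartist map, its parameters, and the critical-curve machinery are not reproduced here.

```python
import numpy as np

def f(x, y, a=0.9, b=0.9):
    # illustrative stand-in map, not the paper's model
    return a * x * (1 - x), b * y * (1 - y)

grid = np.linspace(-0.5, 1.5, 200)
basin = np.zeros((200, 200), dtype=int)
for i, x0 in enumerate(grid):
    for j, y0 in enumerate(grid):
        x, y = x0, y0
        for _ in range(100):
            x, y = f(x, y)
            if abs(x) > 10 or abs(y) > 10:   # orbit escaped
                break
        basin[i, j] = 1 if abs(x) > 10 or abs(y) > 10 else 0
print(np.bincount(basin.ravel()))        # grid points per basin
```

Coloring the grid by basin index is the standard way to visualize how local and global bifurcations reorganize the basin structure.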

