Structural transformations of a neural network controller with a recurrent network type

Author(s):  
Aleksander Voevoda ◽  
Victor Shipagin

The growing complexity of controlled plants, together with rising performance requirements for the controllers applied to them, increases the complexity of neural network controllers. One such complication is the appearance of feedback loops in the controller, that is, the transition from feedforward networks to recurrent ones. A known problem with recurrent networks is tuning their weight coefficients by gradient-based methods (for example, error backpropagation or the Levenberg-Marquardt method): the gradient may suddenly "vanish" or "explode", at which point training of the network stops. The purpose of this article is to develop proposals for solving some of the problems of configuring the weight coefficients of a recurrent neural network. To achieve this goal, structural transformations of the recurrent network's architecture are used to reduce it to the form of a feedforward network, at the cost of a slight increase in architectural complexity. For feedforward networks, methods based on gradient computation can then be used without modification. In future work, it is planned to improve the control performance achieved with the converted neurocontroller, namely to reduce the system's overshoot and, after some elaboration of the structure, to use it to control a nonlinear plant.
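The transformation described above, reducing a recurrent controller to feedforward form, can be illustrated by unrolling the recurrence over a fixed horizon. A minimal sketch, with illustrative weights not taken from the article: the unrolled network is a stack of layers that share the same weights, so gradient methods for feedforward nets apply to it directly.

```python
import math

# A single-neuron recurrent controller: h_t = tanh(w_rec*h_{t-1} + w_in*x_t).
# Unrolling it for T input steps yields an equivalent feedforward network of
# T layers sharing the same two weights. (Illustrative values only.)
W_REC, W_IN = 0.5, 1.2

def recurrent_run(xs, h0=0.0):
    h = h0
    for x in xs:
        h = math.tanh(W_REC * h + W_IN * x)
    return h

def unrolled_run(xs, h0=0.0):
    # One explicit "layer" per time step; weights are shared across layers.
    layers = [(W_REC, W_IN)] * len(xs)
    h = h0
    for (w_rec, w_in), x in zip(layers, xs):
        h = math.tanh(w_rec * h + w_in * x)
    return h

xs = [0.1, -0.4, 0.7]
assert abs(recurrent_run(xs) - unrolled_run(xs)) < 1e-12
```

Because the unrolled form is feedforward, each layer's gradient can be computed by ordinary backpropagation; weight sharing is then handled by summing the per-layer gradients.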

Robotica ◽  
2019 ◽  
Vol 38 (8) ◽  
pp. 1450-1462
Author(s):  
Farinaz Alamiyan-Harandi ◽  
Vali Derhami ◽  
Fatemeh Jamshidi

SUMMARY This paper tackles the necessity of using the sequence of past environment states as the controller's inputs in a vision-based robot navigation task. In this task, a robot has to follow a given trajectory over uneven terrain without falling into pits or losing its balance, when the only sensory input is the raw image captured by a camera. The robot should distinguish big pits from small holes to decide between avoiding and passing over them. In non-Markov processes such as this task, decisions must draw on past sensory data to ensure admissible performance. Using images as sensory inputs naturally causes the curse-of-dimensionality difficulty, and using sequences of past images intensifies it. In this paper, a new framework called recurrent deep learning (RDL), combining deep learning (DL) and a recurrent neural network, is proposed to cope with this challenge. First, the proper features are extracted from the raw image using DL. Then these features, plus some expert-defined features, are used as the inputs of a fully connected recurrent network (the target network) to generate the robot's control commands. To evaluate the proposed RDL framework, experiments are conducted on a WEBOTS and MATLAB co-simulation platform. The simulation results demonstrate that the proposed framework outperforms a conventional DL-based controller for the navigation task on uneven terrain.
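The two-stage pipeline described in the abstract, DL feature extraction followed by a recurrent controller that also receives expert-defined inputs, can be sketched as follows. All names, sizes, and weights here are illustrative assumptions, not the paper's actual architecture.

```python
import math

def extract_features(image):
    # Stand-in for the DL stage: column-wise means of the raw image.
    rows = len(image)
    return [sum(image[r][c] for r in range(rows)) / rows
            for c in range(len(image[0]))]

def recurrent_controller(features, h_prev, w_in=0.3, w_rec=0.6, w_out=1.0):
    # Elman-style step: the hidden state carries past observations forward.
    h = math.tanh(w_rec * h_prev + w_in * sum(features))
    return w_out * h, h   # (control command, new hidden state)

image = [[0.0, 1.0], [0.5, 0.5]]   # toy 2x2 "camera frame"
expert = [0.2]                     # e.g. a measured body-tilt feature
feats = extract_features(image) + expert
cmd, h = recurrent_controller(feats, h_prev=0.0)
assert -1.0 <= cmd <= 1.0          # tanh keeps the command bounded
```

The hidden state `h` is what lets the controller condition on the sequence of past frames rather than on the current frame alone, which is the point of the recurrent target network.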


Author(s):  
Ahmad Ashril Rizal ◽  
Siti Soraya

The absence on the island of Lombok of natural resources such as oil and gas, forest products, or large-scale manufacturing has made tourism the mainstay sector of regional development. The tourism sector's contribution shows an increasing trend from year to year, and the positive impact of tourist spending is distributed across many sectors of the economy. However, local governments generally prepare regional tourism facilities only around local events, even though tourist visits are not driven by local events alone. Preparation by the local government and tourism operators is essential for stabilizing tourist arrivals. This study examines the prediction of tourist arrivals using a Recurrent Neural Network with Long Short-Term Memory (RNN LSTM). An LSTM holds information outside the normal flow of the recurrent network in a gated cell. The cell decides what to store, and when to allow reading, writing, and erasing, through gates that open and close. Gates pass information according to the strength of the incoming signal, filtered by the gates' own weights; these weights, like the input and hidden-unit weights, are adjusted during the recurrent network's learning process. Building a tourist-arrival prediction model with RNN LSTM using multiple time steps yielded an RMSE of 6888.37 on the training data and 14684.33 on the testing data.
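The multi-time-step framing mentioned above, and the RMSE metric used to score it, can be sketched as follows; the function names and toy arrival numbers are illustrative, not the study's data.

```python
def make_windows(series, n_steps):
    """Cut a series into (inputs, targets): n_steps past values per window,
    with the next value as the prediction target for the LSTM."""
    X, y = [], []
    for i in range(len(series) - n_steps):
        X.append(series[i:i + n_steps])   # n_steps past values
        y.append(series[i + n_steps])     # value to predict
    return X, y

def rmse(pred, actual):
    return (sum((p - a) ** 2 for p, a in zip(pred, actual)) / len(pred)) ** 0.5

arrivals = [120, 130, 150, 160, 180, 200]   # toy monthly arrivals
X, y = make_windows(arrivals, n_steps=3)
assert X[0] == [120, 130, 150] and y[0] == 160
assert rmse(y, y) == 0.0   # a perfect predictor scores 0
```

The gap between training RMSE (6888.37) and testing RMSE (14684.33) reported above is computed with exactly this kind of held-out windowed evaluation.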


Author(s):  
K.P. Solovyeva

In this article, we describe a simple binary neuron system that implements a self-organizing map. The system consists of R input neurons (R receptors) and N output neurons of a recurrent neural network. The neural network has a quasi-continuous set of attractor states (a one-dimensional "bump attractor"). Due to the dynamics of the network, each external signal (i.e., an activity state of the receptors) forces a transition of the recurrent network into one of its stable states (points of its attractor). This makes our system different from the "winner takes all" construction of T. Kohonen. In the case when the external signals form a one-dimensional cyclic manifold in the R-dimensional input space, and the recurrent neural network is a complete ring of neurons with local excitatory connections, there exists a learning process for the connections between the receptors and the neurons of the recurrent network that enables a topologically correct mapping of input signals onto the stable states of the neural network. The convergence rate of learning, and the role of noise and other factors affecting the described phenomenon, have been evaluated in computational simulations.
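A one-dimensional "bump attractor" of the kind described above can be sketched with binary neurons on a ring: local excitation plus uniform global inhibition. The ring size, weights, and threshold below are illustrative assumptions; the point is that a contiguous bump of activity is a fixed point at every position on the ring, giving a quasi-continuous set of stable states.

```python
N = 12   # neurons arranged in a ring (illustrative size)

def step(a):
    """Synchronous update: local excitation from nearest neighbours,
    uniform global inhibition, fixed firing threshold."""
    total = sum(a)
    new = []
    for i in range(N):
        local = a[(i - 1) % N] + a[i] + a[(i + 1) % N]
        new.append(1 if local - 0.25 * total > 1.0 else 0)
    return new

def bump(center, width=3):
    """A contiguous block of active neurons starting at `center`."""
    a = [0] * N
    for d in range(width):
        a[(center + d) % N] = 1
    return a

# A width-3 bump is a stable state at *any* position on the ring.
for c in range(N):
    assert step(bump(c)) == bump(c)
```

Each external signal would nudge the network into the nearest such bump state, which is the transition-to-attractor behaviour that distinguishes this construction from a winner-takes-all map.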


2018 ◽  
Vol 30 (2) ◽  
pp. 378-396 ◽  
Author(s):  
N. F. Hardy ◽  
Dean V. Buonomano

Brain activity evolves through time, creating trajectories of activity that underlie sensorimotor processing, behavior, and learning and memory. Therefore, understanding the temporal nature of neural dynamics is essential to understanding brain function and behavior. In vivo studies have demonstrated that sequential transient activation of neurons can encode time. However, it remains unclear whether these patterns emerge from feedforward network architectures or from recurrent networks and, furthermore, what role network structure plays in timing. We address these issues using a recurrent neural network (RNN) model with distinct populations of excitatory and inhibitory units. Consistent with experimental data, a single RNN could autonomously produce multiple functionally feedforward trajectories, thus potentially encoding multiple timed motor patterns lasting up to several seconds. Importantly, the model accounted for Weber's law, a hallmark of timing behavior. Analysis of network connectivity revealed that efficiency—a measure of network interconnectedness—decreased as the number of stored trajectories increased. Additionally, the balance of excitation (E) and inhibition (I) shifted toward excitation during each unit's activation time, generating the prediction that observed sequential activity relies on dynamic control of the E/I balance. Our results establish for the first time that the same RNN can generate multiple functionally feedforward patterns of activity as a result of dynamic shifts in the E/I balance imposed by the connectome of the RNN. We conclude that recurrent network architectures account for sequential neural activity, as well as for a fundamental signature of timing behavior: Weber's law.
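The constraint of distinct excitatory and inhibitory populations (Dale's law) can be sketched by sign-constraining each unit's outgoing weights. The population sizes and uniform random magnitudes below are illustrative assumptions, not the paper's trained connectome.

```python
import random

random.seed(0)
N_E, N_I = 8, 2          # distinct excitatory and inhibitory populations
N = N_E + N_I

# W[i][j] = weight from presynaptic unit j onto postsynaptic unit i.
# Dale's law: all outgoing weights of a unit share that unit's sign.
W = [[0.0] * N for _ in range(N)]
for j in range(N):
    sign = 1.0 if j < N_E else -1.0   # E units excite, I units inhibit
    for i in range(N):
        W[i][j] = sign * random.random()

# Net excitatory and inhibitory drive onto unit 0 for uniform activity r = 1;
# the ratio of these two quantities is the E/I balance tracked in the paper.
exc = sum(W[0][j] for j in range(N_E))
inh = sum(W[0][j] for j in range(N_E, N))
assert exc >= 0.0 >= inh
```

In the model, it is the moment-to-moment shift of this balance toward excitation during a unit's activation window that propagates the sequential activity.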


1995 ◽  
Vol 7 (4) ◽  
pp. 822-844 ◽  
Author(s):  
Peter Tiňo ◽  
Jozef Šajda

A hybrid recurrent neural network is shown to learn small initial Mealy machines (which can be thought of as translation machines that translate input strings into corresponding output strings, as opposed to recognition automata that classify strings as either grammatical or nongrammatical) from positive training samples. A well-trained neural net is then presented once again with the training set, and a Kohonen self-organizing map with the "star" topology of neurons is used to quantize the recurrent network's state space into distinct regions representing the corresponding states of the Mealy machine being learned. This enables us to extract the learned Mealy machine from the trained recurrent network. One neural network (the Kohonen self-organizing map) is thus used to extract meaningful information from another network (the recurrent neural network).
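The extraction step, quantizing recurrent state space into regions that stand for Mealy-machine states, can be sketched as follows. The fixed prototypes below stand in for trained SOM units, and the recorded trace is invented for illustration.

```python
# Hypothetical prototypes standing in for trained SOM units.
protos = [(0.0, 0.0), (1.0, 1.0)]

def quantize(state):
    """Map a continuous RNN state to the index of its nearest prototype."""
    return min(range(len(protos)),
               key=lambda k: sum((s - p) ** 2
                                 for s, p in zip(state, protos[k])))

# Recorded (state, input, next_state, output) tuples from replaying the
# training set through the trained net (invented values).
trace = [((0.1, -0.1), 'a', (0.9, 1.1), '0'),
         ((0.9, 1.1), 'a', (0.05, 0.0), '1')]

delta, lam = {}, {}   # Mealy transition and output functions
for s, x, s2, y in trace:
    q, q2 = quantize(s), quantize(s2)
    delta[(q, x)] = q2   # state q on input x goes to state q2 ...
    lam[(q, x)] = y      # ... emitting output y

assert delta[(0, 'a')] == 1 and lam[(1, 'a')] == '1'
```

The dictionaries `delta` and `lam` are exactly a Mealy machine's transition and output functions, read off from the quantized trajectory of the trained recurrent network.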

