Training Neural Network Elements Created From Long Short-Term Memory

2017 ◽  
Vol 10 (1) ◽  
pp. 01-10
Author(s):  
Kostantin Nikolic

This paper presents the application of stochastic search algorithms to training artificial neural networks. The methodology developed in this work is aimed primarily at training complex recurrent neural networks, which are known to be harder to train than feedforward networks. The recurrent network is simulated to propagate the signal from input to output, and training is performed as a stochastic search in the parameter space. The performance of this type of algorithm is superior to most training algorithms based on the concept of a gradient. The efficiency of these algorithms is demonstrated by training networks built from units characterized by long-term and long short-term memory. The presented methodology is effective and relatively simple.
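To make the idea concrete, the sketch below trains a tiny recurrent cell by pure stochastic search: the network is simulated forward to propagate the signal from input to output, and random perturbations of the parameters are kept only when they reduce the error. The toy task, network size and perturbation scale are assumptions for illustration, not the paper's actual setup.

```python
import numpy as np

# Minimal sketch: gradient-free (stochastic search) training of a tiny recurrent cell.
# Toy task, network size and perturbation scale are illustrative assumptions only.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 10, 1))             # 20 sequences, 10 steps, 1 feature
y = X.sum(axis=1).ravel()                    # target: sum of each sequence

def unroll(params, seq):
    Wx, Wh, Wo = params
    h = np.zeros(Wh.shape[0])
    for x_t in seq:                          # propagate the signal input -> output
        h = np.tanh(Wx @ x_t + Wh @ h)
    return float(Wo @ h)

def loss(params):
    preds = np.array([unroll(params, seq) for seq in X])
    return float(np.mean((preds - y) ** 2))

hidden = 8
best = [rng.normal(scale=0.3, size=s) for s in ((hidden, 1), (hidden, hidden), (1, hidden))]
best_loss = loss(best)
for step in range(2000):                     # stochastic search in parameter space
    cand = [w + rng.normal(scale=0.05, size=w.shape) for w in best]
    cand_loss = loss(cand)
    if cand_loss < best_loss:                # keep a perturbation only if it helps
        best, best_loss = cand, cand_loss
print("final MSE:", best_loss)
```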

Electronics ◽  
2021 ◽  
Vol 10 (22) ◽  
pp. 2761
Author(s):  
Vaios Ampelakiotis ◽  
Isidoros Perikos ◽  
Ioannis Hatzilygeroudis ◽  
George Tsihrintzis

In this paper, we present a handwritten character recognition (HCR) system that aims to recognize first-order logic handwritten formulas and create editable text files of the recognized formulas. Dense feedforward neural networks (NNs) are utilized, and their performance is examined under various training conditions and methods. More specifically, after testing three training algorithms (backpropagation, resilient propagation and stochastic gradient descent), we created and trained an NN with the stochastic gradient descent algorithm optimized by the Adam update rule, which proved to be the best, using a training set of 16,750 handwritten image samples of 28 × 28 pixels each and a test set of 7947 samples. The final accuracy achieved is 90.13%. The general methodology consists of two stages: image processing, and NN design and training. Finally, an application has been created that implements the methodology and automatically recognizes handwritten logic formulas. An interesting feature of the application is that it allows for creating new, user-oriented training sets and parameter settings, and thus new NN models.
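As a rough illustration of this kind of setup, the sketch below builds a dense feedforward classifier for 28 × 28 images and trains it with mini-batch stochastic gradient descent using the Adam update rule in Keras. The layer widths, epoch count and number of classes are assumptions, and MNIST digits stand in for the authors' formula-symbol dataset.

```python
import tensorflow as tf

# Sketch of a dense feedforward classifier for 28x28 character images, trained with
# mini-batch SGD and the Adam update rule. Layer sizes, epochs and the number of
# classes are illustrative; MNIST stands in for the handwritten-formula symbols.
(x_tr, y_tr), (x_te, y_te) = tf.keras.datasets.mnist.load_data()
x_tr, x_te = x_tr / 255.0, x_te / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),    # 784 input features
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),  # one unit per symbol class
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_tr, y_tr, epochs=5, batch_size=32, validation_split=0.1)
print(model.evaluate(x_te, y_te))                     # [test loss, test accuracy]
```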


2021 ◽  
Vol 29 (3) ◽  
Author(s):  
Bennilo Fernandes ◽  
Kasiprasad Mannepalli

Deep Neural Networks (DNNs) are more than just neural networks with several hidden units; they give better results with classification algorithms in automated voice recognition tasks. Traditional feedforward neural networks consider only spatial correlation and do not handle speech signals properly to the full extent, so recurrent neural networks (RNNs) were implemented. Long Short-Term Memory (LSTM) networks are a special case of RNNs for speech processing that capture long-term dependencies; accordingly, deep hierarchical LSTM and BiLSTM models are designed with dropout layers to reduce gradient and long-term learning error in emotional speech analysis. Four combinations of the deep hierarchical learning architecture are designed with dropout layers to improve the networks: Deep Hierarchical LSTM and LSTM (DHLL), Deep Hierarchical LSTM and BiLSTM (DHLB), Deep Hierarchical BiLSTM and LSTM (DHBL), and Deep Hierarchical dual BiLSTM (DHBB). The performance of all four models is compared in this paper, and good classification efficiency is attained with a minimal Tamil-language dataset. The experimental results show that DHLB reaches the best precision, about 84%, in recognizing emotions in the Tamil database, while DHBL gives 83% efficiency. The other designs perform comparably but below these models: DHLL and DHBB show 81% efficiency with the smaller dataset and minimal execution and training time.
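For orientation, the sketch below assembles one of the four hierarchical combinations described (an LSTM block followed by a BiLSTM block, roughly in the spirit of DHLB) with dropout layers between them, using Keras. The input feature shape, layer widths and number of emotion classes are assumptions, not the authors' configuration.

```python
import tensorflow as tf

# Sketch of a hierarchical LSTM -> BiLSTM emotion classifier with dropout layers,
# loosely in the style of the DHLB combination. Input shape, layer widths and the
# number of emotion classes are assumptions for illustration only.
num_frames, num_features, num_emotions = 300, 40, 7

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(num_frames, num_features)),  # acoustic feature frames
    tf.keras.layers.LSTM(128, return_sequences=True),
    tf.keras.layers.Dropout(0.3),                              # regularize between blocks
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(num_emotions, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
```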


2016 ◽  
Vol 25 (06) ◽  
pp. 1650033 ◽  
Author(s):  
Hossam Faris ◽  
Ibrahim Aljarah ◽  
Nailah Al-Madi ◽  
Seyedali Mirjalili

Evolutionary neural networks have proven beneficial for challenging datasets, mainly due to their strong avoidance of local optima. Stochastic operators in such techniques reduce the probability of stagnation in local solutions and help them outperform conventional training algorithms such as Back Propagation (BP) and Levenberg-Marquardt (LM). According to the No-Free-Lunch (NFL) theorem, however, there is no single optimization technique that solves all optimization problems. This means that a neural network trained by a new algorithm has the potential to solve a new set of problems or outperform current techniques on existing problems. This motivates our attempt to investigate the efficiency of the recently proposed evolutionary algorithm called the Lightning Search Algorithm (LSA) in training neural networks, for the first time in the literature. The LSA-based trainer is benchmarked on 16 popular medical diagnosis problems and compared to BP, LM, and 6 other evolutionary trainers. The quantitative and qualitative results show that the LSA-based trainer achieves not only better avoidance of local solutions but also faster convergence than the other algorithms employed. In addition, the statistical tests conducted show that the LSA-based trainer is significantly superior to the current algorithms on the majority of datasets.
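To illustrate what a metaheuristic, gradient-free trainer of this kind does, the sketch below evolves the weights of a small feedforward network on a toy binary task with a plain population-based random search. It is not the Lightning Search Algorithm itself, whose projectile-based update rules are not reproduced here; all sizes, rates and the toy target are assumptions.

```python
import numpy as np

# Sketch of a population-based (evolutionary) trainer that searches the weight space of
# a small feedforward network on a toy "diagnosis" task. A plain (mu + lambda)-style
# random search stands in for the idea; this is NOT the Lightning Search Algorithm.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(float)           # toy binary target

def forward(w, X):
    W1, b1, W2, b2 = w
    h = np.tanh(X @ W1 + b1)                        # hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))     # sigmoid output

def fitness(w):
    return float(np.mean((forward(w, X) - y) ** 2)) # mean squared error, lower is better

def random_weights():
    return [rng.normal(scale=0.5, size=s) for s in ((4, 6), (6,), (6,), ())]

pop = [random_weights() for _ in range(30)]
for gen in range(200):
    pop.sort(key=fitness)
    parents = pop[:10]                              # keep the fittest candidates
    children = []
    for _ in range(20):
        p = parents[rng.integers(len(parents))]
        children.append([w + rng.normal(scale=0.1, size=w.shape) for w in p])
    pop = parents + children                        # next generation
print("best MSE:", fitness(min(pop, key=fitness)))
```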


Entropy ◽  
2020 ◽  
Vol 22 (1) ◽  
pp. 102 ◽  
Author(s):  
Adrian Moldovan ◽  
Angel Caţaron ◽  
Răzvan Andonie

Current neural network architectures are often much harder to train because of the increasing size and complexity of the datasets used. Our objective is to design more efficient training algorithms by utilizing causal relationships inferred from neural networks. Transfer entropy (TE) was initially introduced as an information transfer measure used to quantify the statistical coherence between events (time series). Later, it was related to causality, even though the two are not the same. Only a few papers report applications of causality or TE in neural networks. Our contribution is an information-theoretical method for analyzing information transfer between the nodes of feedforward neural networks. The information transfer is measured by the TE of feedback neural connections. Intuitively, TE measures the relevance of a connection in the network, and the feedback amplifies this connection. We introduce a backpropagation-type training algorithm that uses TE feedback connections to improve its performance.
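As background for the TE measure referenced here, the sketch below implements a simple plug-in (histogram) estimator of transfer entropy for binary time series, TE(X→Y) = Σ p(y_{t+1}, y_t, x_t) log₂[ p(y_{t+1} | y_t, x_t) / p(y_{t+1} | y_t) ]. It illustrates the quantity itself, not the paper's feedback mechanism inside backpropagation; the toy series are assumptions.

```python
import numpy as np
from collections import Counter

# Plug-in (histogram) estimator of transfer entropy TE(X -> Y) for binary time series:
# TE = sum p(y_t+1, y_t, x_t) * log2[ p(y_t+1 | y_t, x_t) / p(y_t+1 | y_t) ].
def transfer_entropy(x, y):
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))   # (y_t+1, y_t, x_t) counts
    pairs_yy = Counter(zip(y[1:], y[:-1]))
    pairs_yx = Counter(zip(y[:-1], x[:-1]))
    singles_y = Counter(y[:-1])
    n = len(x) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n
        p_y1_given_yx = c / pairs_yx[(y0, x0)]
        p_y1_given_y = pairs_yy[(y1, y0)] / singles_y[y0]
        te += p_joint * np.log2(p_y1_given_yx / p_y1_given_y)
    return te

rng = np.random.default_rng(2)
x = rng.integers(0, 2, size=5000)
y = np.roll(x, 1)                       # y copies x with a one-step delay
y[0] = 0
print(transfer_entropy(x, y))           # close to 1 bit: x strongly drives y
print(transfer_entropy(y, x))           # close to 0: no information flows back
```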


2020 ◽  
Vol 44 (3) ◽  
pp. 326-332
Author(s):  
Audreaiona Waters ◽  
Liye Zou ◽  
Myungjin Jung ◽  
Qian Yu ◽  
Jingyuan Lin ◽  
...  

Objective: Sustained attention is critical for various activities of daily living, including engaging in health-enhancing behaviors and inhibition of health compromising behaviors. Sustained attention activates neural networks involved in episodic memory function, a critical cognition for healthy living. Acute exercise has been shown to activate these same neural networks. Thus, it is plausible that engaging in a sustained attention task and engaging in a bout of acute exercise may have an additive effect in enhancing memory function, which was the purpose of this experiment. Methods: 23 young adults (mean age = 20.7 years) completed 2 visits, with each visit occurring approximately 24 hours apart, in a counterbalanced order, including: (1) acute exercise with sustained attention, and (2) sustained attention only. Memory was assessed using a word-list paradigm and included a short- and long-term memory assessment. Sustained attention was induced via a sustained attention to response task (SART). Acute exercise involved a 15-minute bout of moderate-intensity exercise. Results: Short-term memory performance was significantly greater than long-term memory, mean difference = 1.86, p < .001, and short-term memory for Exercise with Sustained Attention was significantly greater than short-term memory for Sustained Attention Only, mean difference = 1.50, p = .01. Conclusion: Engaging in an acute bout of exercise before a sustained attention task additively influenced short-term memory function.


2020 ◽  
Vol 34 (04) ◽  
pp. 4115-4122
Author(s):  
Kyle Helfrich ◽  
Qiang Ye

Several variants of recurrent neural networks (RNNs) with orthogonal or unitary recurrent matrices have recently been developed to mitigate the vanishing/exploding gradient problem and to model long-term dependencies of sequences. However, with the eigenvalues of the recurrent matrix on the unit circle, the recurrent state retains all input information which may unnecessarily consume model capacity. In this paper, we address this issue by proposing an architecture that expands upon an orthogonal/unitary RNN with a state that is generated by a recurrent matrix with eigenvalues in the unit disc. Any input to this state dissipates in time and is replaced with new inputs, simulating short-term memory. A gradient descent algorithm is derived for learning such a recurrent matrix. The resulting method, called the Eigenvalue Normalized RNN (ENRNN), is shown to be highly competitive in several experiments.
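The core idea, keeping the spectral radius of the recurrent matrix strictly inside the unit disc so that old inputs dissipate over time, can be illustrated with the short sketch below. The rescaling shown is a generic eigenvalue normalization for demonstration, not the exact ENRNN construction or its learning algorithm.

```python
import numpy as np

# Sketch of the core idea: rescale a recurrent matrix so its spectral radius lies
# strictly inside the unit disc, so the recurrent state dissipates instead of
# persisting. Illustration only, not the exact ENRNN learning algorithm.
rng = np.random.default_rng(3)

def eigenvalue_normalize(W, target_radius=0.9):
    radius = np.max(np.abs(np.linalg.eigvals(W)))    # current spectral radius
    return W * (target_radius / radius)

W = rng.normal(size=(64, 64))
W_norm = eigenvalue_normalize(W)
print(np.max(np.abs(np.linalg.eigvals(W_norm))))     # ~0.9, inside the unit disc

# A state driven only by the recurrence now decays toward zero over time.
h = rng.normal(size=64)
for _ in range(200):
    h = W_norm @ h
print(np.linalg.norm(h))                             # shrinks toward zero
```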


2020 ◽  
Vol 31 (1) ◽  
Author(s):  
André Felipe Caregnato ◽  
Mayara Torres Ordonhes ◽  
Marcelo Moraes Silva ◽  
Fernando Renato Cavichiolli

The present study sought to examine coaches' perspectives on the teaching, learning and training process in Brazilian athletics. Semi-structured interviews were conducted with six Brazilian athletics coaches who had participated in the Olympic Games. From the interviews, two categories were defined: formation and development of athletes; talent and long-term training. Regarding the age indicated for starting in athletics, the coaches reported that ideally (f = 56.90%) the athlete starts the sport in youth; however, they chose not to stipulate a single age group for starting the sport. Characteristics that induce the organization of a work model in athletics (f = 26.14%) were frequent in the interviews. There was a lack of specific parameters on how the coaching profession, in this case in athletics, should be exercised; structures that enable the development of the coach's career are needed.

