Study on Elman Neural Networks Algorithms Based on Factor Analysis

2014 ◽  
Vol 511-512 ◽  
pp. 945-949 ◽  
Author(s):  
Shao Xue Jing ◽  
Wei Kuan Jia

When an Elman neural network is used to process high-dimensional data, the many characteristic variables provide ample information, but too many network inputs complicate the design of the hidden layer, consume considerable storage space and computing time, interfere with the convergence of network training, and ultimately degrade recognition accuracy. Factor Analysis (FA) condenses the information carried by the numerous original indexes that make up the index system and stores it in a smaller set of factors; by controlling the number of factors, the amount of retained information can be adjusted to the precision the actual problem requires. In this paper we exploit the advantages of FA and the structural properties of the Elman neural network to establish the FA-Elman algorithm. The new algorithm reduces the dimensionality with FA and then trains and simulates the network on the resulting low-dimensional data, which markedly simplifies the network structure and, in doing so, improves the training speed and generalization capacity of the Elman neural network.
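
As an illustrative sketch of the FA-Elman pipeline described above (not the authors' implementation), the following Python code reduces a high-dimensional input with Factor Analysis and feeds the resulting factors to an Elman-style recurrent network built with PyTorch's nn.RNN. The data, factor count, and layer sizes are placeholder assumptions.

```python
# FA-Elman sketch: Factor Analysis compresses the inputs, then an Elman-style
# RNN is trained on the low-dimensional factor scores.
import numpy as np
import torch
import torch.nn as nn
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 60)).astype(np.float32)   # 500 samples, 60 original indexes (illustrative)
y = rng.normal(size=(500, 1)).astype(np.float32)    # placeholder regression target

# 1) Reduce dimensionality: keep only as many factors as the required precision demands.
fa = FactorAnalysis(n_components=8, random_state=0)
Z = fa.fit_transform(X).astype(np.float32)          # (500, 8) factor scores

# 2) Feed the factors to an Elman network (torch.nn.RNN is an Elman-style RNN).
class FAElman(nn.Module):
    def __init__(self, n_in, n_hidden, n_out):
        super().__init__()
        self.rnn = nn.RNN(n_in, n_hidden, nonlinearity="tanh", batch_first=True)
        self.out = nn.Linear(n_hidden, n_out)

    def forward(self, x):                            # x: (batch, seq_len, n_in)
        h_seq, _ = self.rnn(x)
        return self.out(h_seq[:, -1, :])             # prediction from the last step

model = FAElman(n_in=8, n_hidden=16, n_out=1)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

inputs = torch.from_numpy(Z).unsqueeze(1)            # treat each sample as a length-1 sequence
targets = torch.from_numpy(y)
for _ in range(50):
    opt.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    opt.step()
```

Adjusting n_components plays the role described in the abstract: it trades retained information against input dimensionality and hence against the size of the hidden layer.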

Entropy ◽  
2021 ◽  
Vol 23 (6) ◽  
pp. 711
Author(s):  
Mina Basirat ◽  
Bernhard C. Geiger ◽  
Peter M. Roth

Information plane analysis, describing the mutual information between the input and a hidden layer and between a hidden layer and the target over time, has recently been proposed to analyze the training of neural networks. Since the activations of a hidden layer are typically continuous-valued, this mutual information cannot be computed analytically and must thus be estimated, resulting in apparently inconsistent or even contradictory results in the literature. The goal of this paper is to demonstrate how information plane analysis can still be a valuable tool for analyzing neural network training. To this end, we complement the prevailing binning estimator for mutual information with a geometric interpretation. With this geometric interpretation in mind, we evaluate the impact of regularization and interpret phenomena such as underfitting and overfitting. In addition, we investigate neural network learning in the presence of noisy data and noisy labels.
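
For concreteness, here is a minimal sketch of the binning estimator for mutual information that such information plane analyses build on (an illustrative implementation under assumed bin counts and array shapes, not the authors' code):

```python
# Binning estimator for I(T; Y): continuous hidden activations T are discretized
# into equal-width bins, and mutual information with a discrete variable Y is
# computed from the joint histogram.
import numpy as np

def binned_mutual_information(t, y, n_bins=30):
    """I(T; Y) in bits, where t is (n_samples, n_units) continuous and y is (n_samples,) discrete."""
    edges = np.linspace(t.min(), t.max(), n_bins + 1)
    digitized = np.digitize(t, edges[1:-1])                 # per-unit bin indices, (n_samples, n_units)
    # Treat each row of bin indices as one discrete symbol.
    _, t_ids = np.unique(digitized, axis=0, return_inverse=True)
    _, y_ids = np.unique(y, return_inverse=True)
    t_ids, y_ids = t_ids.ravel(), y_ids.ravel()

    joint = np.zeros((t_ids.max() + 1, y_ids.max() + 1))
    np.add.at(joint, (t_ids, y_ids), 1)
    joint /= joint.sum()
    p_t = joint.sum(axis=1, keepdims=True)
    p_y = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (p_t @ p_y)[nz])))

# Example: activations of a 5-unit hidden layer for 1000 samples with 10 labels.
rng = np.random.default_rng(0)
activations = rng.normal(size=(1000, 5))
labels = rng.integers(0, 10, size=1000)
print(binned_mutual_information(activations, labels))
```

The estimate depends strongly on the bin count, which is one reason binning-based information plane curves can appear inconsistent across studies.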


2022 ◽  
pp. 202-226
Author(s):  
Leema N. ◽  
Khanna H. Nehemiah ◽  
Elgin Christo V. R. ◽  
Kannan A.

Artificial neural networks (ANN) are widely used for classification, and the training algorithm commonly used is the backpropagation (BP) algorithm. The major bottleneck in backpropagation neural network training is fixing appropriate values for the network parameters: the initial weights, biases, activation function, number of hidden layers, number of neurons per hidden layer, number of training epochs, learning rate, minimum error, and momentum term for the classification task. The objective of this work is to investigate the performance of 12 different BP algorithms and the impact of variations in network parameter values on neural network training. The algorithms were evaluated with different training and testing samples taken from three benchmark clinical datasets, namely the Pima Indian Diabetes (PID), Hepatitis, and Wisconsin Breast Cancer (WBC) datasets obtained from the University of California Irvine (UCI) machine learning repository.
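
As a loose illustration (not the chapter's experimental setup) of how such parameter variations can be compared on one of the benchmark datasets, scikit-learn's MLPClassifier stands in for the BP variants below; the hidden-layer size, learning rates, and momentum values are arbitrary assumptions.

```python
# Compare a few backpropagation training configurations on the WBC dataset.
from sklearn.datasets import load_breast_cancer          # Wisconsin Breast Cancer (WBC)
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

configs = {
    "sgd, lr=0.01, momentum=0.9": dict(solver="sgd", learning_rate_init=0.01, momentum=0.9),
    "sgd, lr=0.001, momentum=0.5": dict(solver="sgd", learning_rate_init=0.001, momentum=0.5),
    "adam, lr=0.001": dict(solver="adam", learning_rate_init=0.001),
}

for name, params in configs.items():
    clf = make_pipeline(
        StandardScaler(),
        MLPClassifier(hidden_layer_sizes=(10,), max_iter=500, random_state=0, **params),
    )
    clf.fit(X_tr, y_tr)
    print(f"{name}: test accuracy = {clf.score(X_te, y_te):.3f}")
```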


2014 ◽  
Vol 2014 ◽  
pp. 1-12
Author(s):  
Shao Jie ◽  
Wang Li ◽  
Zhao WeiSong ◽  
Zhong YaQin ◽  
Reza Malekian

A model based on an improved Elman neural network (IENN) is proposed to analyze nonlinear circuits with memory effects. In this model, the hidden-layer neurons are activated by a group of Chebyshev orthogonal basis functions instead of sigmoid functions. The error curves of the sum of squared errors (SSE) as a function of the number of hidden neurons and of the iteration step are studied to determine the number of hidden-layer neurons. Simulation results for a half-bridge class-D power amplifier (CDPA) with two-tone and broadband signals as input show that the proposed behavioral model can reconstruct the CDPA system accurately and capture the memory effect of CDPAs well. Compared with the Volterra-Laguerre (VL) model, the Chebyshev neural network (CNN) model, and the basic Elman neural network (BENN) model, the proposed model performs better.
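
The following sketch reconstructs the central idea of activating an Elman-style hidden layer with a mixture of Chebyshev orthogonal basis functions instead of sigmoids; the recurrence form, weight shapes, and the tanh squashing step are assumptions, not the paper's exact model.

```python
# Elman-style cell whose hidden neurons are activated through Chebyshev polynomials.
import numpy as np

def chebyshev_basis(x, order):
    """Return [T_0(x), ..., T_{order-1}(x)] via the recurrence T_k = 2x*T_{k-1} - T_{k-2}."""
    x = np.tanh(x)                                   # squash pre-activations into [-1, 1] (assumption)
    T = [np.ones_like(x), x]
    for _ in range(2, order):
        T.append(2.0 * x * T[-1] - T[-2])
    return np.stack(T[:order], axis=-1)              # (..., order)

class ChebyshevElmanCell:
    def __init__(self, n_in, n_hidden, order=4, seed=0):
        rng = np.random.default_rng(seed)
        self.order = order
        self.W_in = rng.normal(scale=0.1, size=(n_in, n_hidden))
        self.W_ctx = rng.normal(scale=0.1, size=(n_hidden, n_hidden))  # context (recurrent) weights
        self.W_basis = rng.normal(scale=0.1, size=(n_hidden, order))   # mixes the basis functions
        self.h = np.zeros(n_hidden)

    def step(self, x):
        pre = x @ self.W_in + self.h @ self.W_ctx                      # (n_hidden,)
        basis = chebyshev_basis(pre, self.order)                       # (n_hidden, order)
        self.h = np.sum(self.W_basis * basis, axis=-1)                 # per-neuron basis mixture
        return self.h

# Example: run a 20-step input sequence through the cell.
cell = ChebyshevElmanCell(n_in=2, n_hidden=10)
outputs = [cell.step(x_t) for x_t in np.random.default_rng(1).normal(size=(20, 2))]
```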


Author(s):  
Hossam Eldin Ali ◽  
Yacoub M. Najjar

A backpropagation artificial neural network (ANN) algorithm with one hidden layer was used as a new numerical approach to characterize soil liquefaction potential. For this purpose, 61 field data sets representing various earthquake sites from around the world were used. To develop the most accurate prediction model for liquefaction potential, alternating combinations of input parameters were used during the training and testing phases of the developed network. The accuracy of the designed network was validated against an additional 44 records not used previously in either the network training or testing stages. The prediction accuracy of the neural network–based model is compared with predictions obtained using fuzzy logic and statistically based approaches. Overall, the ANN model outperformed all other investigated approaches.
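
A minimal NumPy sketch of a one-hidden-layer backpropagation classifier of the kind described above; the input dimensionality, labels, and hyperparameters are placeholders rather than the study's field parameters.

```python
# One-hidden-layer backpropagation for binary classification (illustrative).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = rng.normal(size=(61, 6))                            # 61 field records, 6 input parameters (assumed)
y = rng.integers(0, 2, size=(61, 1)).astype(float)      # 1 = liquefied, 0 = not (placeholder labels)

n_hidden = 8
W1 = rng.normal(scale=0.5, size=(6, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.5, size=(n_hidden, 1)); b2 = np.zeros(1)
lr = 0.1

for _ in range(2000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Backward pass: gradient of the binary cross-entropy w.r.t. the weights
    d_out = (p - y) / len(X)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)
```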


2011 ◽  
Vol 271-273 ◽  
pp. 713-718
Author(s):  
Jie Yang ◽  
Gui Xiong Liu

Quality prediction and control methods are crucial for achieving safe and reliable operation in process quality control. A hierarchical multiple-criteria decision model is established for the key process and a stratified weight-matrix method is discussed; KPCA is then used to eliminate minor factors and extract the major factors from the many quality variables. Because the standard Elman neural network model is effective only for low-level static systems, a new OHIF Elman network is proposed in this paper, in which three different feedback factors are introduced into the hidden layer, association layer, and output layer of the Elman neural network. To balance prediction accuracy and training efficiency, a mixed LM-CGD algorithm is used to train the network model. Simulation and experimental results show that the quality model can effectively predict the characteristic values of process quality, identify abnormal change patterns, and enhance process control accuracy.
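
As a brief sketch of the factor-extraction step, kernel PCA can be used to discard minor factors and keep the major ones among many quality variables; the kernel choice, component count, and data below are illustrative assumptions rather than the paper's settings.

```python
# KPCA-based extraction of the major factors from many quality variables.
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)
Q = rng.normal(size=(200, 15))                 # 200 process samples, 15 quality variables (assumed)

kpca = KernelPCA(n_components=4, kernel="rbf", gamma=0.1)
major_factors = kpca.fit_transform(Q)          # (200, 4) inputs for the prediction network
```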


2021 ◽  
Vol 9 (4) ◽  
pp. 421-439
Author(s):  
Renquan Huang ◽  
Jing Tian

Abstract: It is challenging to forecast foreign exchange rates because of the non-linear character of the data. This paper applies a wavelet-based Elman neural network with a modified differential evolution algorithm to forecast foreign exchange rates. The Elman neural network has dynamic characteristics owing to the context layer in its structure, which makes it suitable for time-series problems. The main factors affecting the accuracy of the Elman neural network are the transfer functions of the hidden layer and the parameters of the network. We replaced the sigmoid function in the hidden layer of the Elman neural network with a wavelet function and found a “disruption problem” caused by the non-linear behavior of the wavelet function: rather than improving the performance of the Elman neural network, it degraded it. The modified differential evolution algorithm was then applied to train the parameters of the Elman neural network. To improve the optimization performance of the differential evolution algorithm, the crossover probability and crossover factor were adjusted with adaptive strategies, and a local enhancement operator was added to the algorithm. In the experiments, the modified algorithm improved the performance of the Elman neural network and solved the “disruption problem” introduced by the wavelet function. These results show that the performance of the Elman neural network is improved when the wavelet function and the modified differential evolution algorithm are applied together.
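
A compact sketch of a differential evolution loop with adaptive mutation factor and crossover probability, in the spirit of the modification described above; the adaptation rules, bounds, and objective are illustrative assumptions, not the authors' exact scheme.

```python
# Differential evolution with time-adapted F and CR, usable to tune network parameters.
import numpy as np

def differential_evolution(objective, dim, pop_size=30, generations=200, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-1.0, 1.0, size=(pop_size, dim))
    fitness = np.array([objective(p) for p in pop])

    for g in range(generations):
        # Linearly adapt the mutation factor F and crossover probability CR over time (assumed rule).
        F = 0.9 - 0.5 * g / generations
        CR = 0.5 + 0.4 * g / generations
        for i in range(pop_size):
            idx = rng.choice([j for j in range(pop_size) if j != i], size=3, replace=False)
            a, b, c = pop[idx]
            mutant = a + F * (b - c)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True          # ensure at least one gene crosses over
            trial = np.where(cross, mutant, pop[i])
            f_trial = objective(trial)
            if f_trial < fitness[i]:                 # greedy selection
                pop[i], fitness[i] = trial, f_trial
    return pop[np.argmin(fitness)], fitness.min()

# Example: the objective would be the Elman network's forecasting error as a
# function of its flattened weight vector; a simple quadratic stands in here.
best, best_err = differential_evolution(lambda w: float(np.sum(w ** 2)), dim=10)
```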


2012 ◽  
Vol 220-223 ◽  
pp. 2546-2554
Author(s):  
Xiao Chao Dang ◽  
Zhan Jun Hao ◽  
Yan Li

Considering such characteristics of the network system as nonlinearity, multiple variables, and time variation, this paper proposes a new improved Elman neural network model, SIMF Elman. Seasonal periodicity learning methods are introduced into the learning process of the model, and a chaotic search mechanism is introduced into the training of the network weights. The new model uses the ergodicity of the Tent mapping to optimize the search over the chaos variables, reducing data redundancy and providing an effective solution to the local convergence problem. Experiments were conducted on the backbone network egress traffic of a university. The experimental results show that the new model and algorithms improve the network training speed and the prediction accuracy of network traffic forecasting.
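
A small sketch of the Tent-map chaotic search idea: the ergodicity of the Tent map generates candidate perturbations of the network weights so that training can escape local minima. The scaling, iteration count, and objective below are assumptions, not the paper's scheme.

```python
# Tent-map chaotic search around an initial weight vector.
import numpy as np

def chaotic_search(objective, w0, radius=0.5, iters=500, seed=0):
    rng = np.random.default_rng(seed)
    z = rng.random(w0.shape)                                  # chaos variables in (0, 1)
    best_w, best_f = w0.copy(), objective(w0)
    for _ in range(iters):
        z = np.where(z < 0.5, 2.0 * z, 2.0 * (1.0 - z))       # Tent map iteration
        z = np.where((z <= 0.0) | (z >= 1.0), rng.random(z.shape), z)  # re-seed to avoid finite-precision collapse
        candidate = w0 + radius * (2.0 * z - 1.0)             # map chaos in (0, 1) to a perturbation
        f = objective(candidate)
        if f < best_f:
            best_w, best_f = candidate, f
    return best_w, best_f

# Example: in the paper's setting the objective would be the traffic-prediction
# error of the Elman network as a function of its weights; a quadratic stands in here.
w, err = chaotic_search(lambda w: float(np.sum((w - 0.3) ** 2)), np.zeros(8))
```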

