A Novel Elman Algorithm Based on Sectional Least Squares

2011 ◽  
Vol 301-303 ◽  
pp. 695-700
Author(s):  
Wen Xue Tan ◽  
Mei Sen Pan ◽  
Xiao Rong Xu

In this paper, a novel algorithm named SLS-Elman is proposed for more effectively training and learning from small-sample data sets with many characteristic variables; it combines the Sectional Least Squares principle with structural properties of the Elman neural network. When reducing the characteristic variables of high-dimensional, small-sample data, the algorithm also accounts for the correlation among the dependent variables. The reduced data are then used to train and simulate a neural network; the resulting network has a simpler structure and yields a more precise model. A case study shows that the algorithm improves convergence rate, forecast precision, and efficiency. To verify its effectiveness, it is compared with other algorithms, such as an Elman neural network based on Principal Component Analysis, and shows clear advantages.
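The defining feature of the Elman architecture mentioned in this abstract is a context layer that feeds the previous hidden state back into the hidden layer at each step. A minimal forward-pass sketch (all names and dimensions are illustrative, not from the paper):

```python
import numpy as np

def elman_forward(x_seq, W_in, W_rec, W_out, b_h, b_o):
    """One forward pass of a minimal Elman network: the hidden state of
    the previous step acts as the context layer at the current step."""
    h = np.zeros(W_rec.shape[0])                 # context (previous hidden state)
    outputs = []
    for x in x_seq:
        h = np.tanh(W_in @ x + W_rec @ h + b_h)  # hidden layer with context feedback
        outputs.append(W_out @ h + b_o)          # linear output layer
    return np.array(outputs)

# toy dimensions: 3 inputs, 4 hidden units, 1 output
rng = np.random.default_rng(0)
W_in, W_rec = rng.normal(size=(4, 3)), rng.normal(size=(4, 4))
W_out, b_h, b_o = rng.normal(size=(1, 4)), np.zeros(4), np.zeros(1)
y = elman_forward(rng.normal(size=(5, 3)), W_in, W_rec, W_out, b_h, b_o)
print(y.shape)  # (5, 1)
```

The paper's contribution lies in how the training data are reduced before such a network is trained, not in the forward pass itself.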

2017 ◽  
Vol 31 (2) ◽  
pp. 449-459 ◽  
Author(s):  
Weikuan Jia ◽  
Dean Zhao ◽  
Yuanjie Zheng ◽  
Sujuan Hou

2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Cailing Hao

With the development of information technology, band-expansion technology is gradually being applied to college English listening teaching. The technology aims to recover a wideband speech signal from a narrowband signal with a limited frequency band. However, owing to the limitations of current voice equipment and channel conditions, existing band-expansion methods often ignore the correlation between the high- and low-frequency components of the audio, so the recovered high-frequency spectrum is over-smoothed, the subjective listening impression is dull, and expressiveness is insufficient. To address this problem, a neural network model based on principal component analysis, PCA-NN, is proposed. Exploiting the nonlinear characteristics of the audio signal, the model reduces the dimensionality of the high-dimensional data and effectively recovers the detailed high-frequency spectrum of the audio signal in phase space. The results show that PCA-NN outperforms other audio-expansion algorithms in both subjective and objective evaluation; in the log-spectral distortion (LSD) evaluation, PCA-NN obtains a smaller LSD: compared with EHBE, Le, and La, the average LSD decreases by 2.286 dB, 0.51 dB, and 0.15 dB, respectively. These results show that for frequency-band expansion in college English listening, the PCA-NN algorithm achieves better high-frequency reconstruction accuracy and effectively improves audio quality.
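The dimension-reduction step at the core of PCA-NN can be sketched as standard principal component projection and reconstruction; the function name and toy data below are illustrative, not from the paper:

```python
import numpy as np

def pca_reduce(X, k):
    """Project data onto the top-k principal components and reconstruct.
    This is the generic dimension-reduction step; PCA-NN would feed the
    reduced representation Z into a neural network."""
    mu = X.mean(axis=0)
    Xc = X - mu
    # principal directions via SVD of the centered data matrix
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:k]            # (k, d) principal directions
    Z = Xc @ components.T          # reduced (n, k) representation
    X_hat = Z @ components + mu    # optimal rank-k reconstruction
    return Z, X_hat

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 10))
Z, X_hat = pca_reduce(X, 3)
print(Z.shape, X_hat.shape)  # (100, 3) (100, 10)
```

The rank-k reconstruction error is the smallest achievable by any linear projection of that rank, which is what makes PCA a natural preprocessing step here.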


2011 ◽  
Vol 90-93 ◽  
pp. 2173-2177
Author(s):  
Chen Cai ◽  
Tao Huang ◽  
Xun Li ◽  
Yun Zhen Li

Water inflow into a submarine tunnel is influenced by many interacting factors and is highly complex and nonlinear. This article uses the BP neural network algorithm to build a forecast model for the volume of water welling into a submarine tunnel and carries out a computational analysis. The results indicate that the model converges well, that its forecast precision is high, and that the method is simple and feasible. It provides a new approach to forecasting the volume of water inflow in submarine tunnels.
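A BP (backpropagation) network of the kind used here is a feedforward network trained by gradient descent on the squared error. A minimal one-hidden-layer sketch fitted to a toy nonlinear target (all hyperparameters are illustrative; the paper's inputs and data are not reproduced):

```python
import numpy as np

def train_bp(X, y, hidden=8, lr=0.1, epochs=5000, seed=0):
    """Train a minimal one-hidden-layer BP regression network by
    full-batch gradient descent on the mean squared error."""
    rng = np.random.default_rng(seed)
    W1, b1 = rng.normal(scale=0.5, size=(X.shape[1], hidden)), np.zeros(hidden)
    W2, b2 = rng.normal(scale=0.5, size=(hidden, 1)), np.zeros(1)
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)                 # hidden activations
        pred = H @ W2 + b2                       # linear output
        err = pred - y                           # d(0.5*MSE)/d(pred)
        # backpropagate the error through both layers
        gW2 = H.T @ err / len(X); gb2 = err.mean(axis=0)
        dH = (err @ W2.T) * (1 - H ** 2)         # tanh derivative
        gW1 = X.T @ dH / len(X); gb1 = dH.mean(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
    return lambda Xn: np.tanh(Xn @ W1 + b1) @ W2 + b2

# fit a simple nonlinear target as a stand-in for an inflow curve
X = np.linspace(-1, 1, 50).reshape(-1, 1)
y = np.sin(3 * X)
model = train_bp(X, y)
mse = float(np.mean((model(X) - y) ** 2))
print(mse)
```

In the paper's setting the inputs would be the influencing factors of the tunnel and the output the forecast inflow volume; the training loop is unchanged.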


Author(s):  
Shifei Ding ◽  
Weikuan Jia ◽  
Chunyang Su ◽  
Xinzheng Xu ◽  
Liwen Zhang

Author(s):  
Deniz Erdogmus ◽  
Jose C. Principe

Learning systems depend on three interrelated components: topologies, cost/performance functions, and learning algorithms. Topologies provide the constraints for the mapping, and the learning algorithms offer the means to find an optimal solution; but the solution is optimal with respect to what? Optimality is characterized by the criterion, and in the neural network literature this is the least addressed component, yet it has a decisive influence on generalization performance. Certainly, the assumptions behind the selection of a criterion should be better understood and investigated. Traditionally, least squares has been the benchmark criterion for regression problems; treating classification as a regression problem of estimating class posterior probabilities, least squares has been employed to train neural networks and other classifier topologies to approximate the correct labels. The main motivation for using least squares in regression is simply the intellectual comfort this criterion provides, owing to its success in traditional linear least squares regression, which reduces to solving a system of linear equations. For nonlinear regression, the assumption of Gaussian measurement error combined with the maximum likelihood principle can be invoked to justify this criterion. In nonparametric regression, the least squares principle leads to the conditional-expectation solution, which is intuitively appealing. Although these are good reasons to use the mean squared error as the cost, it remains tied to the assumptions and habits stated above. Consequently, when one insists on second-order statistical criteria, there is information in the error signal that is not captured during the training of nonlinear adaptive systems under non-Gaussian distribution conditions.
This argument extends to other linear, second-order techniques such as principal component analysis (PCA), linear discriminant analysis (LDA), and canonical correlation analysis (CCA). Recent work tries to generalize these techniques to nonlinear settings by means of kernel techniques or other heuristics. This raises the question: what other cost functions could be used to train adaptive systems, and how could we establish rigorous techniques for extending useful concepts from linear, second-order statistical techniques to nonlinear, higher-order statistical learning methodologies?
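The claim above that linear least squares "reduces to solving a system of linear equations" refers to the normal equations \(A^\top A\,w = A^\top y\). A small worked example (the data are synthetic, chosen only to make the recovery visible):

```python
import numpy as np

# Linear least squares reduces to a system of linear equations:
# the normal equations  A^T A w = A^T y.
rng = np.random.default_rng(2)
A = rng.normal(size=(50, 3))
w_true = np.array([1.5, -2.0, 0.5])
y = A @ w_true + 0.01 * rng.normal(size=50)  # slightly noisy observations

w_hat = np.linalg.solve(A.T @ A, A.T @ y)    # solve the normal equations
print(np.round(w_hat, 2))                    # close to [ 1.5 -2.   0.5]
```

With Gaussian noise this solution coincides with the maximum likelihood estimate, which is exactly the justification the passage describes; under non-Gaussian noise the two no longer coincide, and second-order criteria discard information in the error signal.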


2019 ◽  
Vol 2019 ◽  
pp. 1-9 ◽  
Author(s):  
Lei Sun ◽  
Wenjun Yi ◽  
Dandan Yuan ◽  
Jun Guan

The purpose of this paper is to present an in-flight initial alignment method for guided projectiles after launch that exploits the characteristics of the inertial devices of a strapdown inertial navigation system. The method applies an Elman neural network, optimized by a genetic algorithm, to the initial alignment calculation. The algorithm is discussed in detail and applied to the initial alignment process of the proposed guided projectile. Simulation results show the advantages of the optimized Elman neural network for the initial alignment of the strapdown inertial navigation system: it attains the same high alignment precision as a traditional Kalman filter while improving the real-time performance of the system.
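Genetic-algorithm optimization of network parameters, as used above, treats a parameter vector as an individual and evolves a population by selection, crossover, and mutation. A toy sketch on a stand-in objective (the GA operators and the quadratic fitness are illustrative assumptions, not the paper's exact scheme):

```python
import numpy as np

def ga_minimize(fitness, dim, pop=30, gens=150, sigma=0.3, seed=0):
    """Toy genetic algorithm: truncation selection of the best half,
    uniform crossover of two random elites, Gaussian mutation."""
    rng = np.random.default_rng(seed)
    P = rng.normal(size=(pop, dim))
    for _ in range(gens):
        scores = np.array([fitness(ind) for ind in P])
        elite = P[np.argsort(scores)[: pop // 2]]        # keep best half unchanged
        children = []
        for _ in range(pop - len(elite)):
            a, b = elite[rng.integers(len(elite), size=2)]
            mask = rng.random(dim) < 0.5                 # uniform crossover
            child = np.where(mask, a, b) + sigma * rng.normal(size=dim)
            children.append(child)
        P = np.vstack([elite, children])
    scores = np.array([fitness(ind) for ind in P])
    return P[scores.argmin()]

# minimize a quadratic as a stand-in for an alignment-error objective;
# in the paper the fitness would score Elman network weights instead
target = np.array([0.5, -1.0, 2.0])
best = ga_minimize(lambda w: float(np.sum((w - target) ** 2)), dim=3)
print(np.round(best, 1))
```

Because the GA needs only fitness evaluations, it can tune network weights or initial states without gradients, which is why it pairs naturally with the Elman network here.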

