Forecasting the Exchange Rate Using the Improved SAPSO Neural Network

2012 ◽  
Vol 468-471 ◽  
pp. 1714-1720 ◽  
Author(s):  
Li Meng ◽  
Li Jun Dong

The paper studies the behavior of particles in particle swarm optimization (PSO) and mitigates the algorithm's tendency to fall into local optima by combining it with simulated annealing. The authors compare the original PSO and the improved SAPSO algorithm in neural network training. The empirical results show that the improved algorithm outperforms PSO in global search ability and greatly increases prediction accuracy.
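The abstract does not include code, but the core idea can be illustrated. Below is a minimal, hypothetical sketch of one common way to hybridize PSO with simulated annealing: standard velocity/position updates, plus an SA-style acceptance step that lets particles accept worse personal bests with probability exp(-Δ/T) under a cooling temperature, helping them escape local optima. All function and parameter names here are illustrative assumptions, not the authors' actual algorithm.

```python
import math
import random

def sapso(objective, dim, n_particles=20, iters=100,
          w=0.7, c1=1.5, c2=1.5, t0=1.0, cooling=0.95):
    """Sketch of PSO with a simulated-annealing acceptance step
    (minimization). Illustrative only, not the paper's exact method."""
    random.seed(0)
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    temp = t0
    for _ in range(iters):
        for i in range(n_particles):
            # Standard PSO velocity and position update.
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            delta = val - pbest_val[i]
            # SA-style acceptance: always accept improvements; accept a
            # worse personal best with probability exp(-delta / temp),
            # which keeps exploration alive early in the run.
            if delta < 0 or random.random() < math.exp(-delta / max(temp, 1e-12)):
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:  # global best only improves monotonically
                    gbest, gbest_val = pos[i][:], val
        temp *= cooling  # cool the temperature each iteration
    return gbest, gbest_val

# Example: minimize the sphere function in 3 dimensions.
best, best_val = sapso(lambda x: sum(v * v for v in x), dim=3)
```

As the temperature cools, the acceptance step degenerates into plain greedy PSO, so early iterations explore globally and late iterations refine locally.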

2013 ◽  
Vol 753-755 ◽  
pp. 2930-2934
Author(s):  
Li Meng

In this study, the author focuses on exchange rate forecasting. Exchange rate fluctuations are extremely complex, containing both linear and non-linear components. In this paper, a simulated annealing algorithm is introduced on top of a BP neural network to overcome the network's tendency to fall into local minima, optimizing the network's weights and thresholds and thereby improving prediction accuracy. Forecasting experiments on several major currency pairs show that, compared with a BP neural network alone, introducing simulated annealing further improves prediction accuracy and stability, while consuming less time than genetic algorithms and other optimization algorithms.
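To make the weight-optimization idea concrete, here is a minimal, hypothetical sketch of simulated annealing applied to a model's parameters: perturb the current weights, and accept a worse candidate with probability exp(-Δ/T) so the search can escape local minima before any BP-style fine-tuning. A single tanh neuron stands in for a full BP network's loss; all names and hyperparameters are assumptions for illustration, not the paper's implementation.

```python
import math
import random

def mse(weights, data):
    """MSE of a single-neuron model y = tanh(w0 * x + w1); a stand-in
    for a full BP network's training loss."""
    w0, w1 = weights
    return sum((math.tanh(w0 * x + w1) - y) ** 2 for x, y in data) / len(data)

def anneal_weights(data, t0=1.0, cooling=0.95, steps_per_temp=30, t_min=1e-3):
    """Sketch of simulated annealing over network weights.
    Illustrative only, not the paper's exact method."""
    random.seed(1)
    w = [random.uniform(-1, 1), random.uniform(-1, 1)]
    best = w[:]
    cur = best_val = mse(w, data)
    t = t0
    while t > t_min:
        for _ in range(steps_per_temp):
            # Gaussian perturbation of the current weight vector.
            cand = [wi + random.gauss(0, 0.3) for wi in w]
            val = mse(cand, data)
            delta = val - cur
            # Metropolis acceptance: improvements always, worse moves
            # with probability exp(-delta / t).
            if delta < 0 or random.random() < math.exp(-delta / t):
                w, cur = cand, val
                if val < best_val:  # track the best weights ever seen
                    best, best_val = cand[:], val
        t *= cooling  # geometric cooling schedule
    return best, best_val

# Fit samples of y = tanh(2x) on x in [-1, 1].
data = [(x / 10, math.tanh(2 * x / 10)) for x in range(-10, 11)]
w, err = anneal_weights(data)
```

In the paper's setup, the annealed weights would then seed ordinary backpropagation; the SA phase is cheap because it only requires loss evaluations, which is consistent with the claim that it costs less time than genetic algorithms.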


Entropy ◽  
2021 ◽  
Vol 23 (6) ◽  
pp. 711
Author(s):  
Mina Basirat ◽  
Bernhard C. Geiger ◽  
Peter M. Roth

Information plane analysis, describing the mutual information between the input and a hidden layer and between a hidden layer and the target over time, has recently been proposed to analyze the training of neural networks. Since the activations of a hidden layer are typically continuous-valued, this mutual information cannot be computed analytically and must thus be estimated, resulting in apparently inconsistent or even contradicting results in the literature. The goal of this paper is to demonstrate how information plane analysis can still be a valuable tool for analyzing neural network training. To this end, we complement the prevailing binning estimator for mutual information with a geometric interpretation. With this geometric interpretation in mind, we evaluate the impact of regularization and interpret phenomena such as underfitting and overfitting. In addition, we investigate neural network learning in the presence of noisy data and noisy labels.
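The binning estimator the abstract refers to can be sketched compactly: discretize continuous activations into equal-width bins, then compute the plug-in mutual information of the resulting discrete joint distribution. The helper names and bin count below are assumptions for illustration, not the authors' code.

```python
import math
from collections import Counter

def binned_mutual_information(xs, ts, n_bins=10):
    """Plug-in (binning) MI estimate in nats between two real-valued
    samples: discretize each into equal-width bins, then compute
    I(X;T) = sum_{x,t} p(x,t) * log( p(x,t) / (p(x) * p(t)) )."""
    def bin_index(v, lo, hi):
        if hi == lo:
            return 0
        i = int((v - lo) / (hi - lo) * n_bins)
        return min(i, n_bins - 1)  # clamp the maximum into the last bin

    lo_x, hi_x = min(xs), max(xs)
    lo_t, hi_t = min(ts), max(ts)
    bx = [bin_index(v, lo_x, hi_x) for v in xs]
    bt = [bin_index(v, lo_t, hi_t) for v in ts]
    n = len(xs)
    pxy = Counter(zip(bx, bt))          # empirical joint counts
    px, pt = Counter(bx), Counter(bt)   # empirical marginal counts
    mi = 0.0
    for (i, j), c in pxy.items():
        p_joint = c / n
        # p_joint / (p_x * p_t) with counts: p_joint * n^2 / (c_x * c_t)
        mi += p_joint * math.log(p_joint * n * n / (px[i] * pt[j]))
    return mi

# A hidden activation that is a deterministic, invertible map of the
# input: the binned estimate then reflects the entropy of the binning,
# not a property of the continuous variables themselves.
xs = [i / 100 for i in range(100)]
ts = [math.tanh(x) for x in xs]
mi = binned_mutual_information(xs, ts, n_bins=4)
```

The final example hints at why such estimates can look inconsistent across papers: for deterministic networks the continuous mutual information is infinite, so the binned value is governed by the bin geometry, which is exactly the kind of effect a geometric interpretation of the estimator makes explicit.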

