An improved dynamic structure-based neural networks determination approaches to simulation optimization problems

2010 · Vol 19 (6) · pp. 883-901
Author(s): Zheng Jun, Tan Yu-An, Zhang Xue-Lan, Lu Jun
2016 · Vol 25 (06) · pp. 1650033
Author(s): Hossam Faris, Ibrahim Aljarah, Nailah Al-Madi, Seyedali Mirjalili

Evolutionary Neural Networks have proven beneficial on challenging datasets, mainly because of their strong ability to avoid local optima. The stochastic operators in such techniques reduce the probability of stagnating in local solutions and help them outperform conventional training algorithms such as Back Propagation (BP) and Levenberg-Marquardt (LM). According to the No-Free-Lunch (NFL) theorem, however, no single optimization technique can solve all optimization problems. This means that a Neural Network trained by a new algorithm has the potential to solve a new set of problems, or to outperform current techniques on existing ones. This motivates our attempt to investigate the efficiency of the recently proposed Evolutionary Algorithm called the Lightning Search Algorithm (LSA) in training Neural Networks, for the first time in the literature. The LSA-based trainer is benchmarked on 16 popular medical diagnosis problems and compared to BP, LM, and six other evolutionary trainers. The quantitative and qualitative results show that the LSA-based trainer not only avoids local solutions better but also converges faster than the other algorithms employed. In addition, the statistical tests conducted show that the LSA-based trainer is significantly superior to the current algorithms on the majority of the datasets.
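As a rough illustration of the evolutionary-trainer recipe this abstract describes (network weights encoded as a flat real vector, training error as the fitness function), the sketch below trains a tiny MLP with a simple (μ+λ) evolution strategy. It is a hedged stand-in, not the authors' LSA: the network size, mutation scale, and XOR task are illustrative assumptions.

```python
import numpy as np

# Minimal sketch: training a one-hidden-layer MLP with a (mu+lambda)
# evolution strategy instead of gradient descent. A simple ES stand-in
# for the "evolutionary trainer" idea, not the Lightning Search Algorithm.

rng = np.random.default_rng(0)

def unpack(theta, n_in, n_hid, n_out):
    """Decode a flat parameter vector into MLP weight matrices."""
    i = 0
    W1 = theta[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = theta[i:i + n_hid]; i += n_hid
    W2 = theta[i:i + n_hid * n_out].reshape(n_hid, n_out); i += n_hid * n_out
    b2 = theta[i:i + n_out]
    return W1, b1, W2, b2

def forward(theta, X, n_in, n_hid, n_out):
    W1, b1, W2, b2 = unpack(theta, n_in, n_hid, n_out)
    return np.tanh(X @ W1 + b1) @ W2 + b2

def fitness(theta, X, y, n_in, n_hid, n_out):
    """Mean squared error on the training set (lower is better)."""
    return np.mean((forward(theta, X, n_in, n_hid, n_out) - y) ** 2)

def evolve(X, y, n_in, n_hid, n_out, pop=30, gens=200, sigma=0.3):
    dim = n_in * n_hid + n_hid + n_hid * n_out + n_out
    population = rng.normal(0.0, 1.0, size=(pop, dim))
    for _ in range(gens):
        scores = np.array([fitness(p, X, y, n_in, n_hid, n_out)
                           for p in population])
        parents = population[np.argsort(scores)[:pop // 2]]
        # Stochastic mutation keeps the population diverse and is what
        # lets the search escape poor local solutions.
        children = parents + rng.normal(0.0, sigma, size=parents.shape)
        population = np.vstack([parents, children])
    scores = np.array([fitness(p, X, y, n_in, n_hid, n_out) for p in population])
    return population[np.argmin(scores)]

# Toy usage: learn XOR, a classic non-linearly-separable problem.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)
best = evolve(X, y, n_in=2, n_hid=4, n_out=1)
print(np.round(forward(best, X, 2, 4, 1), 2))
```

The mutation step plays the role the abstract attributes to stochastic operators: it perturbs candidate weight vectors so the population does not stagnate in the local solutions a pure gradient method might settle into.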


2006 · Vol 32 (9) · pp. 688-700
Author(s): Demetrio Laganá, Pasquale Legato, Ornella Pisacane, Francesca Vocaturo

Data · 2018 · Vol 3 (4) · pp. 43
Author(s): Mesbaholdin Salami, Farzad Movahedi Sobhani, Mohammad Ghazizadeh

The databases of Iran's electricity market store large volumes of data. Retail buyers and retailers will operate in Iran's electricity market in the foreseeable future, once smart grids are deployed thoroughly across Iran; as a result, the market will generate far more data than ever before. If methods are devised to search such large stores of data quickly, it will be possible to improve the forecasting accuracy of important variables in Iran's electricity market. In this paper, existing methods were combined to develop a new Wavelet-Neural Networks-Particle Swarm Optimization-Simulation-Optimization (WT-NNPSO-SO) technique for searching the Big Data stored in the electricity market and improving the accuracy of short-term forecasts of electricity supply and demand. The data exploration approach is based on simulation-optimization algorithms and is combined with the Wavelet-Neural Networks-Particle Swarm Optimization (Wavelet-NNPSO) method to improve forecasting accuracy under the assumption that the Length of Training Data (LOTD) increases. Compared with previous techniques, the runtime of the proposed technique improves on larger datasets owing to the use of metaheuristic algorithms. The findings are discussed in the Results section.
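A minimal sketch of the wavelet-plus-PSO idea described above: decompose a demand-like series with a discrete wavelet transform (here via the PyWavelets package), drop the finest detail band as denoising, and tune the weights of a small autoregressive forecaster with a basic global-best particle swarm. The synthetic series, wavelet choice, lag order, and PSO hyperparameters are all illustrative assumptions, not the paper's WT-NNPSO-SO pipeline.

```python
import numpy as np
import pywt  # PyWavelets

rng = np.random.default_rng(1)

# 1) Wavelet step: split a noisy demand-like series into smooth and
#    detail parts, then work with the denoised reconstruction.
series = np.sin(np.linspace(0, 20, 256)) + 0.1 * rng.normal(size=256)
coeffs = pywt.wavedec(series, "db4", level=2)
coeffs[-1][:] = 0.0            # drop the finest detail band (denoising)
smooth = pywt.waverec(coeffs, "db4")[: len(series)]

# 2) Build a simple autoregressive dataset: predict x[t] from the
#    previous `lags` values of the smoothed series.
lags = 8
X = np.array([smooth[t - lags:t] for t in range(lags, len(smooth))])
y = smooth[lags:]

def mse(w):
    """Forecast error of a linear AR model: weights w[:-1], bias w[-1]."""
    return np.mean((X @ w[:-1] + w[-1] - y) ** 2)

# 3) PSO step: tune the forecaster's weights with a basic particle
#    swarm (global-best topology) instead of gradient descent.
n_particles, dim, iters = 20, lags + 1, 300
pos = rng.normal(0, 0.5, (n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([mse(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)]

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    vals = np.array([mse(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)]

print("training MSE:", mse(gbest))
```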


1990 · Vol 37 (3) · pp. 384-398
Author(s): A. Rodriguez-Vazquez, R. Dominguez-Castro, A. Rueda, J.L. Huertas, E. Sanchez-Sinencio

2021
Author(s): Tianyi Liu, Zhehui Chen, Enlu Zhou, Tuo Zhao

The momentum stochastic gradient descent (MSGD) algorithm has been widely applied to nonconvex optimization problems in machine learning (e.g., training deep neural networks, variational Bayesian inference, etc.). Despite its empirical success, the convergence properties of MSGD are still not well understood theoretically. To fill this gap, we analyze the algorithmic behavior of MSGD through diffusion approximations for nonconvex optimization problems with strict saddle points and isolated local optima. Our study shows that momentum helps the iterates escape saddle points but hurts convergence within the neighborhood of optima (in the absence of step-size or momentum annealing). This theoretical discovery partially corroborates the empirical success of MSGD in training deep neural networks.
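A toy numerical illustration of the MSGD update the abstract analyzes, v_{t+1} = μ v_t − η(∇f(x_t) + noise) and x_{t+1} = x_t + v_{t+1}, run on a function with a strict saddle at the origin and isolated minima at (±1, 0). The test function, noise scale, and step sizes are illustrative assumptions, not the paper's setting.

```python
import numpy as np

# MSGD on f(x1, x2) = (x1^2 - 1)^2 + x2^2, which has a strict saddle
# at the origin and isolated local minima at (+-1, 0).

rng = np.random.default_rng(2)

def grad(x):
    return np.array([4.0 * x[0] * (x[0] ** 2 - 1.0), 2.0 * x[1]])

def msgd(mu, eta=0.01, noise=0.05, steps=2000):
    x = np.array([1e-3, 1e-3])     # start near the saddle point
    v = np.zeros(2)
    for _ in range(steps):
        g = grad(x) + noise * rng.normal(size=2)   # stochastic gradient
        v = mu * v - eta * g                       # momentum update
        x = x + v
    return x

for mu in (0.0, 0.9):              # vanilla SGD vs heavy momentum
    print(f"mu={mu}: final iterate {np.round(msgd(mu), 3)}")
```

Running it with μ = 0 and μ = 0.9 lets one compare how quickly the iterates leave the saddle region; the annealing caveat in the abstract corresponds to keeping η and μ fixed throughout this loop.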

