Recursive Neural Networks Based on PSO for Image Parsing

2013 ◽  
Vol 2013 ◽  
pp. 1-7
Author(s):  
Guo-Rong Cai ◽  
Shui-Li Chen

This paper presents an image parsing algorithm based on Particle Swarm Optimization (PSO) and Recursive Neural Networks (RNNs). State-of-the-art methods such as the traditional RNN-based parsing strategy use L-BFGS over the complete data to learn the parameters. However, this can cause problems because the objective function is nondifferentiable. To solve this problem, the PSO algorithm is employed to tune the weights of the RNN so as to minimize the objective. Experimental results obtained on the Stanford background dataset show that our PSO-based training algorithm outperforms traditional RNN, Pixel CRF, region-based energy, simultaneous MRF, and superpixel MRF.
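
The core idea of replacing gradient-based L-BFGS with PSO when the objective is non-differentiable can be sketched as follows. This is a minimal illustration, not the authors' parser: the tiny scoring network, toy data, and error-count objective are hypothetical stand-ins for the RNN and the Stanford background dataset.

```python
import numpy as np

# Minimal sketch: tuning the weights of a tiny scoring network with PSO instead
# of L-BFGS, which is useful when the objective is non-differentiable. The
# network, data, and objective are illustrative stand-ins only.

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                    # toy features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # toy labels

N_HIDDEN = 5
N_WEIGHTS = 8 * N_HIDDEN + N_HIDDEN              # W1 (8x5) + w2 (5,)

def objective(theta):
    """Non-differentiable objective: classification error rate."""
    W1 = theta[:8 * N_HIDDEN].reshape(8, N_HIDDEN)
    w2 = theta[8 * N_HIDDEN:]
    scores = np.tanh(X @ W1) @ w2
    preds = (scores > 0).astype(int)
    return np.mean(preds != y)                   # piecewise constant, no useful gradient

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    pos = rng.uniform(-1, 1, size=(n_particles, dim))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([f(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([f(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

best_theta, best_err = pso(objective, N_WEIGHTS)
print(f"training error after PSO: {best_err:.3f}")
```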

Author(s):  
Kuruge Darshana Abeyrathna ◽  
Chawalit Jeenanunta

Particle Swarm Optimization (PSO) is popular for solving complex optimization problems. However, it is easily trapped in local minima. The authors modify the traditional PSO algorithm by adding an extra step, called PSO-Shock. The PSO-Shock algorithm starts out like the standard PSO algorithm. Entrapment in a local minimum is detected by counting stall generations; when the stall count reaches a prespecified value, the particles are perturbed. This helps the particles find better solutions than the current local minimum. The behavior of the PSO-Shock algorithm is studied on a well-known benchmark, Schwefel's function. Given its promising performance on Schwefel's function, the PSO-Shock algorithm is then used to optimize the weights and biases of Artificial Neural Networks (ANNs). The trained ANNs forecast electricity consumption in Thailand. The proposed algorithm reduces the forecasting error compared to traditional training algorithms: by 23.81% compared to the Backpropagation algorithm and by 16.50% compared to the traditional PSO algorithm.
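
A minimal sketch of the PSO-Shock idea on Schwefel's function is given below: stall generations are counted and, once the count reaches a preset value, the swarm is perturbed ("shocked"). The stall threshold and perturbation scale are assumed values chosen only for illustration.

```python
import numpy as np

# PSO with a "shock" step: count generations with no improvement of the global
# best and, once the count reaches stall_limit, perturb the particles to escape
# the local minimum. Benchmarked on Schwefel's function over [-500, 500]^d.

rng = np.random.default_rng(1)
DIM, LB, UB = 10, -500.0, 500.0

def schwefel(x):
    return 418.9829 * len(x) - np.sum(x * np.sin(np.sqrt(np.abs(x))))

def pso_shock(n_particles=40, iters=500, w=0.7, c1=1.5, c2=1.5,
              stall_limit=20, shock_scale=100.0):
    pos = rng.uniform(LB, UB, size=(n_particles, DIM))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([schwefel(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()
    gbest_val = pbest_val.min()
    stall = 0
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, DIM))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, LB, UB)
        vals = np.array([schwefel(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        if pbest_val.min() < gbest_val - 1e-9:
            gbest_val = pbest_val.min()
            gbest = pbest[np.argmin(pbest_val)].copy()
            stall = 0
        else:
            stall += 1
        if stall >= stall_limit:             # the "shock": perturb the whole swarm
            pos = np.clip(pos + rng.normal(0, shock_scale, pos.shape), LB, UB)
            vel[:] = 0.0
            stall = 0
    return gbest, gbest_val

best, val = pso_shock()
print(f"best Schwefel value found: {val:.2f}")
```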


2013 ◽  
Vol 581 ◽  
pp. 511-516
Author(s):  
Uros Zuperl ◽  
Franci Cus

In this paper, an optimization system based on artificial neural networks (ANN) and the particle swarm optimization (PSO) algorithm is developed for the optimization of machining parameters in turning operations. The system integrates neural network modeling of the objective function with particle swarm optimization of the turning parameters. The new neural-network-assisted PSO algorithm is explained in detail. An objective function based on maximum profit, minimum cost, and maximum cutting quality in the turning operation is used. The paper also demonstrates the efficiency of the proposed optimization over genetic algorithms (GA), ant colony optimization (ACO), and simulated annealing (SA).
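
The neural-network-assisted structure can be sketched as follows: a small network acts as a surrogate model of the machining objective, and PSO searches the turning parameters that minimize it. The surrogate weights, parameter bounds, and cost below are hypothetical stand-ins, not the paper's trained model.

```python
import numpy as np

# Sketch of neural-network-assisted PSO: a tiny MLP with made-up weights plays
# the role of a surrogate model of the machining cost, and PSO searches the
# turning parameters (cutting speed, feed, depth of cut) that minimize it.

rng = np.random.default_rng(2)

# bounds for [cutting speed (m/min), feed (mm/rev), depth of cut (mm)] -- assumed
LB = np.array([50.0, 0.05, 0.5])
UB = np.array([300.0, 0.5, 3.0])

W1 = rng.normal(size=(3, 6))
b1 = rng.normal(size=6)
w2 = rng.normal(size=6)

def ann_cost(params):
    """Surrogate objective: stands in for a network trained on machining data."""
    x = (params - LB) / (UB - LB)            # normalize the inputs
    return float(np.tanh(x @ W1 + b1) @ w2)

def pso(f, n_particles=25, iters=150, w=0.7, c1=1.5, c2=1.5):
    pos = rng.uniform(LB, UB, size=(n_particles, 3))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([f(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, 3))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, LB, UB)
        vals = np.array([f(p) for p in pos])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = pos[better], vals[better]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

best_params, best_cost = pso(ann_cost)
print("suggested turning parameters:", best_params.round(3))
```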


2018 ◽  
Vol 1 (1) ◽  
pp. 157-191 ◽  
Author(s):  
Saptarshi Sengupta ◽  
Sanchita Basak ◽  
Richard Peters

Particle Swarm Optimization (PSO) is a metaheuristic global optimization paradigm that has gained prominence in the last two decades due to its ease of application to unsupervised, complex multidimensional problems that cannot be solved using traditional deterministic algorithms. The canonical particle swarm optimizer is based on the flocking behavior and social cooperation of birds and fish schools and draws heavily on the evolutionary behavior of these organisms. This paper provides a thorough survey of the PSO algorithm, with special emphasis on the development, deployment, and improvement of its most basic as well as some of the most recent state-of-the-art implementations. Concepts and directions for choosing the inertia weight, constriction factor, and cognitive and social weights are outlined, along with perspectives on convergence, parallelization, elitism, niching, discrete optimization, and neighborhood topologies. Hybridization attempts with other evolutionary and swarm paradigms in selected applications are covered, and an up-to-date review is put forward for the interested reader.
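
For reference, the canonical update rules the survey discusses can be written out explicitly, in both the inertia-weight and constriction-factor forms. The parameter values shown are commonly cited defaults, given here only for illustration.

```python
import numpy as np

# Canonical PSO update step: inertia weight w, cognitive weight c1, social
# weight c2, and the Clerc-Kennedy constriction factor chi.

rng = np.random.default_rng(3)

def inertia_update(pos, vel, pbest, gbest, w=0.7298, c1=1.49618, c2=1.49618):
    """v <- w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x);  x <- x + v"""
    r1, r2 = rng.random((2, *pos.shape))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    return pos + vel, vel

def constriction_factor(c1=2.05, c2=2.05):
    """Clerc-Kennedy constriction chi, defined for phi = c1 + c2 > 4."""
    phi = c1 + c2
    return 2.0 / abs(2.0 - phi - np.sqrt(phi * phi - 4.0 * phi))

def constriction_update(pos, vel, pbest, gbest, c1=2.05, c2=2.05):
    """v <- chi * (v + c1*r1*(pbest - x) + c2*r2*(gbest - x));  x <- x + v"""
    chi = constriction_factor(c1, c2)
    r1, r2 = rng.random((2, *pos.shape))
    vel = chi * (vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos))
    return pos + vel, vel

# one step for a swarm of 5 particles in 2 dimensions
pos = rng.uniform(-1, 1, (5, 2))
vel = np.zeros_like(pos)
pos, vel = constriction_update(pos, vel, pbest=pos.copy(), gbest=pos[0].copy())
print(pos)
```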


2017 ◽  
Vol 24 (1) ◽  
pp. 101-112 ◽  
Author(s):  
Qin Zheng ◽  
Zubin Yang ◽  
Jianxin Sha ◽  
Jun Yan

Abstract. In predictability research, the conditional nonlinear optimal perturbation (CNOP) describes the initial perturbation that satisfies a certain constraint condition and causes the largest prediction error at the prediction time. The CNOP has been successfully applied to estimating the lower bound of maximum predictable time (LBMPT). Generally, CNOPs are calculated by a gradient descent algorithm based on the adjoint model, which is called ADJ-CNOP. This study, through the two-dimensional Ikeda model, investigates the impact of nonlinearity on ADJ-CNOP and the corresponding precision problems when using ADJ-CNOP to estimate the LBMPT. Our conclusions are that (1) when the initial perturbation is large or the prediction time is long, the strong nonlinearity of the dynamical model in the prediction variable will lead to failure of the ADJ-CNOP method, and (2) when the objective function has multiple extreme values, ADJ-CNOP has a high probability of producing local CNOPs, hence giving a false estimate of the LBMPT. Furthermore, the particle swarm optimization (PSO) algorithm, a kind of intelligent algorithm, is introduced to solve this problem. The method using PSO to compute the CNOP is called PSO-CNOP. The results of numerical experiments show that even with a large initial perturbation and a long prediction time, or when the objective function has multiple extreme values, PSO-CNOP can always obtain the global CNOP. Since the PSO algorithm is a population-based heuristic search algorithm, it can overcome the impact of nonlinearity and the disturbance from multiple extremes of the objective function. In addition, to check the estimation accuracy of the LBMPT presented by PSO-CNOP and ADJ-CNOP, we partition the constraint domain of initial perturbations into sufficiently fine grid meshes and take the LBMPT obtained by the filtering method as a benchmark. The result shows that the estimate presented by PSO-CNOP is closer to the true value than the one by ADJ-CNOP as the forecast time increases.
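
A minimal sketch of the PSO-CNOP procedure is given below: PSO searches, inside a norm-bounded constraint ball, for the initial perturbation that maximizes the nonlinear prediction error. The Ikeda map is written in its standard textbook form with assumed parameter values, and the constraint radius and forecast horizon are likewise illustrative rather than the paper's settings.

```python
import numpy as np

# Sketch of PSO-CNOP: maximize the prediction error caused by an initial
# perturbation, subject to a norm constraint ||delta|| <= BETA, using PSO.

rng = np.random.default_rng(4)
U, BETA, T = 0.9, 0.1, 20                   # Ikeda parameter, constraint radius, steps (assumed)

def ikeda_step(state):
    x, y = state
    t = 0.4 - 6.0 / (1.0 + x * x + y * y)
    return np.array([1.0 + U * (x * np.cos(t) - y * np.sin(t)),
                     U * (x * np.sin(t) + y * np.cos(t))])

def propagate(state, steps=T):
    for _ in range(steps):
        state = ikeda_step(state)
    return state

X0 = np.array([0.5, 0.5])                   # reference initial condition (assumed)
REF = propagate(X0)

def objective(delta):
    """Prediction error caused by the initial perturbation delta."""
    return np.linalg.norm(propagate(X0 + delta) - REF)

def project(delta):
    """Keep the perturbation inside the constraint ball ||delta|| <= BETA."""
    n = np.linalg.norm(delta)
    return delta if n <= BETA else delta * (BETA / n)

def pso_cnop(n_particles=30, iters=300, w=0.7, c1=1.5, c2=1.5):
    pos = np.array([project(rng.uniform(-BETA, BETA, 2)) for _ in range(n_particles)])
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([objective(p) for p in pos])
    gbest = pbest[np.argmax(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, 2))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.array([project(p) for p in pos + vel])
        vals = np.array([objective(p) for p in pos])
        better = vals > pbest_val            # maximization
        pbest[better], pbest_val[better] = pos[better], vals[better]
        gbest = pbest[np.argmax(pbest_val)].copy()
    return gbest, pbest_val.max()

cnop, err = pso_cnop()
print(f"approximate CNOP {cnop.round(4)} causes prediction error {err:.4f}")
```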


2013 ◽  
Vol 313-314 ◽  
pp. 1327-1330
Author(s):  
Ming Guang Zhang ◽  
Yun Yun Lu

This paper introduces the particle swarm optimization (PSO) algorithm and the ant colony optimization (ACO) algorithm to solve the optimization problem of distribution network restoration and reconfiguration. In the solution process, a multi-objective function is proposed that combines minimizing line losses and minimizing the number of switching operations. At the same time, the distribution network is simplified and a feasible searching strategy is adopted, which improves the searching speed. Because the parameters α and β have an important influence on the ACO algorithm, this method uses PSO to optimize them by taking the parameters as the particle positions in PSO. This overcomes the influence of the parameters on ACO and improves the global searching capability. A simulation on a typical 33-node case is conducted, and the results show that the method has good global searching performance.
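
The outer-PSO / inner-ACO structure can be sketched as follows: each particle encodes a candidate (α, β) pair, and its fitness is the best result a short ACO run achieves with those exponents. A tiny random travelling-salesman instance stands in for the distribution-network reconfiguration problem, and the parameter bounds are assumed.

```python
import numpy as np

# Outer PSO loop tuning the ACO exponents alpha (pheromone) and beta (heuristic);
# inner ACO loop solving a toy routing problem with those exponents.

rng = np.random.default_rng(5)
N = 8
coords = rng.uniform(0, 1, (N, 2))
dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1) + np.eye(N)

def run_aco(alpha, beta, n_ants=10, iters=20, rho=0.5):
    """Plain ant-colony run; returns the best tour length found."""
    tau, eta, best_len = np.ones((N, N)), 1.0 / dist, np.inf
    for _ in range(iters):
        for _ in range(n_ants):
            tour, unvisited = [0], set(range(1, N))
            while unvisited:
                i, choices = tour[-1], list(unvisited)
                weights = (tau[i, choices] ** alpha) * (eta[i, choices] ** beta)
                j = int(rng.choice(choices, p=weights / weights.sum()))
                tour.append(j)
                unvisited.remove(j)
            length = sum(dist[tour[k], tour[(k + 1) % N]] for k in range(N))
            if length < best_len:
                best_len, best_tour = length, tour
        tau *= (1 - rho)                      # pheromone evaporation
        for k in range(N):                    # reinforce the best tour found so far
            tau[best_tour[k], best_tour[(k + 1) % N]] += 1.0 / best_len
    return best_len

def pso_tune(n_particles=8, iters=10, w=0.7, c1=1.5, c2=1.5):
    lb, ub = np.array([0.1, 0.1]), np.array([5.0, 5.0])   # assumed bounds for alpha, beta
    pos = rng.uniform(lb, ub, (n_particles, 2))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([run_aco(*p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, 2))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lb, ub)
        vals = np.array([run_aco(*p) for p in pos])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = pos[better], vals[better]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

(alpha, beta), tour_len = pso_tune()
print(f"PSO-tuned ACO exponents: alpha={alpha:.2f}, beta={beta:.2f}")
```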


Author(s):  
E. Mohammadi ◽  
M. Montazeri-Gh ◽  
P. Khalaf

This paper presents the metaheuristic design and optimization of a fuzzy-based gas turbine engine (GTE) fuel flow controller by means of a hybrid invasive weed optimization/particle swarm optimization (IWO/PSO) algorithm as an innovative guided search technique. In this regard, first, a Wiener model of the GTE, as a block-structured model, is developed and validated against experimental data. Subsequently, because of the nonlinear nature of the GTE, a fuzzy logic controller (FLC) strategy is considered for the engine fuel system. For this purpose, an initial FLC is designed and its parameters are then tuned using the hybrid IWO/PSO algorithm, with the tuning process formulated as an engineering optimization problem. Fuel consumption, engine safety, and time response are the performance indices of the defined objective function. In addition, two sets of weighting factors for the objective function are considered: in one, saving fuel is the priority, and in the other, achieving a short engine response time is the priority. Moreover, the optimization is performed in two stages, during which the database and the rule base of the initial FLC are tuned sequentially. The simulation results confirm that the IWO/PSO-FLC approach is effective for GTE fuel controller design, resulting in improved engine performance while ensuring engine protection against physical limitations.
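
A rough sketch of how such a weighted objective can be assembled is shown below: fuel consumption, response time, and an overshoot (safety) penalty are combined by weighting factors and evaluated on a closed-loop simulation. The first-order plant, the simple PI-style controller standing in for the FLC, and the two weight sets are all hypothetical placeholders for the paper's GTE setup.

```python
import numpy as np

# Illustrative weighted objective for controller tuning: fuel use, response
# time, and an overshoot penalty, combined with two assumed weight sets.

def simulate(gains, setpoint=1.0, dt=0.01, t_end=5.0):
    """Closed-loop run of a toy first-order 'engine' with a PI-style controller."""
    kp, ki = gains
    y, integ, fuel, ys = 0.0, 0.0, 0.0, []
    for _ in range(int(t_end / dt)):
        e = setpoint - y
        integ += e * dt
        u = max(0.0, kp * e + ki * integ)     # fuel flow command (kept non-negative)
        y += dt * (-y + u)                    # first-order plant response
        fuel += u * dt
        ys.append(y)
    return np.array(ys), fuel, dt

def cost(gains, weights):
    """Weighted objective: fuel use, response time, and an overshoot penalty."""
    w_fuel, w_time, w_safe = weights
    ys, fuel, dt = simulate(gains)
    inside = np.abs(ys - 1.0) < 0.05          # first entry into a 5% band = response time
    t_resp = (np.argmax(inside) if inside.any() else len(ys)) * dt
    overshoot = max(0.0, float(ys.max()) - 1.0)   # safety proxy: penalize overshoot
    return w_fuel * fuel + w_time * t_resp + w_safe * overshoot

fuel_priority = (1.0, 0.2, 5.0)               # assumed weight set favoring fuel savings
speed_priority = (0.2, 1.0, 5.0)              # assumed weight set favoring fast response
print(cost((2.0, 1.0), fuel_priority), cost((2.0, 1.0), speed_priority))
```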


2016 ◽  
Author(s):  
Qin Zheng ◽  
Zubin Yang ◽  
Jianxin Sha ◽  
Jun Yan

Abstract. In predictability research, the conditional nonlinear optimal perturbation (CNOP) describes the initial perturbation that satisfies a certain constraint condition and causes the largest prediction error at the prediction time. The CNOP method has been successfully applied to estimating the lower bound of maximum predictable time (LBMPT). Generally, CNOPs are calculated by a gradient descent algorithm based on the adjoint model, which is called ADJ-CNOP. This study, through the two-dimensional Ikeda model, investigates the impact of nonlinearity on ADJ-CNOP and the corresponding precision problems when using ADJ-CNOP to estimate the LBMPT. Our conclusions are that (1) when the initial perturbation is large or the prediction time is long, the strong nonlinearity of the dynamical model in the prediction variable will lead to failure of the ADJ-CNOP method, and (2) when the objective function has multiple extreme values, ADJ-CNOP has a high probability of producing local CNOPs, hence giving a false estimate of the LBMPT. Furthermore, the particle swarm optimization (PSO) algorithm, a kind of intelligent algorithm, is introduced to solve this problem. The method using PSO to compute the CNOP is called PSO-CNOP. The results of numerical experiments show that even with a large initial perturbation and a long prediction time, or when the objective function has multiple extreme values, PSO-CNOP can always obtain the global CNOP. Since the PSO algorithm is a population-based heuristic search algorithm, it can overcome the impact of nonlinearity and the disturbance from multiple extremes of the objective function. In addition, to check the estimation accuracy of the LBMPT presented by PSO-CNOP and ADJ-CNOP, we partition the constraint domain of initial perturbations into sufficiently fine grid meshes and take the LBMPT obtained by the filtering method as a benchmark. The result shows that the estimate presented by PSO-CNOP is closer to the true value than the one by ADJ-CNOP as the forecast time increases.


2019 ◽  
Vol 10 (1) ◽  
pp. 1-14 ◽  
Author(s):  
Kuruge Darshana Abeyrathna ◽  
Chawalit Jeenanunta

This research proposes a new training algorithm for artificial neural networks (ANNs) to improve short-term load forecasting (STLF) performance. The proposed algorithm overcomes the well-known training issue of ANNs, trapping in local minima, by applying genetic algorithm (GA) operations within particle swarm optimization (PSO) whenever the swarm converges to a local minimum. The training ability of the hybridized algorithm is evaluated using load data gathered by the Electricity Generating Authority of Thailand. The ANN is trained with the new algorithm on one year of data to forecast the 48 equal periods of each day in 2013. During the testing phase, the mean absolute percentage error (MAPE) is used to evaluate the performance of the hybridized training algorithm and compare it with the MAPEs from Backpropagation, GA, and PSO. The yearly average MAPE and the average MAPEs for weekdays, Mondays, weekends, holidays, and bridging holidays show that the PSO+GA algorithm outperforms the other training algorithms for STLF.
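
The hybridization can be sketched as follows: standard PSO runs until the global best stalls (taken as a proxy for convergence to a local minimum), at which point GA crossover and mutation are applied to the particle positions before the search continues. The toy regression network, stall threshold, and GA operators are illustrative assumptions, not the authors' load-forecasting configuration.

```python
import numpy as np

# PSO+GA sketch: run PSO and, when the global best stalls, apply tournament
# selection, uniform crossover, and Gaussian mutation to the particles.

rng = np.random.default_rng(6)
X = rng.normal(size=(150, 4))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]          # toy regression target

H = 6
DIM = 4 * H + H                              # hidden weights + output weights

def mse(theta):
    W1 = theta[:4 * H].reshape(4, H)
    w2 = theta[4 * H:]
    return np.mean((np.tanh(X @ W1) @ w2 - y) ** 2)

def ga_step(pos, fitness, mut_rate=0.1, mut_scale=0.3):
    """Tournament selection, uniform crossover, Gaussian mutation."""
    n, d = pos.shape
    new = np.empty_like(pos)
    for k in range(n):
        a, b = rng.integers(n, size=2)
        p1 = pos[a] if fitness[a] < fitness[b] else pos[b]
        a, b = rng.integers(n, size=2)
        p2 = pos[a] if fitness[a] < fitness[b] else pos[b]
        child = np.where(rng.random(d) < 0.5, p1, p2)
        new[k] = child + (rng.random(d) < mut_rate) * rng.normal(0, mut_scale, d)
    return new

def pso_ga(n_particles=30, iters=300, w=0.7, c1=1.5, c2=1.5, stall_limit=15):
    pos = rng.uniform(-1, 1, (n_particles, DIM))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([mse(p) for p in pos])
    gbest, gbest_val = pbest[np.argmin(pbest_val)].copy(), pbest_val.min()
    stall = 0
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, DIM))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([mse(p) for p in pos])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = pos[better], vals[better]
        if pbest_val.min() < gbest_val - 1e-12:
            gbest, gbest_val = pbest[np.argmin(pbest_val)].copy(), pbest_val.min()
            stall = 0
        else:
            stall += 1
        if stall >= stall_limit:             # swarm has converged: apply GA operations
            pos = ga_step(pos, vals)
            vel[:] = 0.0
            stall = 0
    return gbest, gbest_val

theta, err = pso_ga()
print(f"training MSE after PSO+GA: {err:.4f}")
```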


2022 ◽  
pp. 227-241
Author(s):  
Kuruge Darshana Abeyrathna ◽  
Chawalit Jeenanunta

This research proposes a new training algorithm for artificial neural networks (ANNs) to improve short-term load forecasting (STLF) performance. The proposed algorithm overcomes the well-known training issue of ANNs, trapping in local minima, by applying genetic algorithm (GA) operations within particle swarm optimization (PSO) whenever the swarm converges to a local minimum. The training ability of the hybridized algorithm is evaluated using load data gathered by the Electricity Generating Authority of Thailand. The ANN is trained with the new algorithm on one year of data to forecast the 48 equal periods of each day in 2013. During the testing phase, the mean absolute percentage error (MAPE) is used to evaluate the performance of the hybridized training algorithm and compare it with the MAPEs from Backpropagation, GA, and PSO. The yearly average MAPE and the average MAPEs for weekdays, Mondays, weekends, holidays, and bridging holidays show that the PSO+GA algorithm outperforms the other training algorithms for STLF.


2021 ◽  
Vol 10 (6) ◽  
pp. 3377-3384
Author(s):  
Zainab Fouad ◽  
Marco Alfonse ◽  
Mohamed Roushdy ◽  
Abdel-Badeeh M. Salem

Deep neural networks have made enormous progress in tackling many problems. More specifically, the convolutional neural network (CNN) is a category of deep networks that has become a dominant technique in computer vision tasks. Although these deep neural networks are highly effective, the ideal structure is still an issue that needs much investigation. A deep convolutional neural network model is usually designed manually through trial and repeated testing, which enormously constrains its application. Many hyper-parameters of the CNN affect the model performance, such as the depth of the network, the number of convolutional layers, and the number of kernels and their sizes. Therefore, it can be a huge challenge to design an appropriate CNN model that uses optimized hyper-parameters and reduces the reliance on manual involvement and domain expertise. In this paper, an architecture design method for CNNs is proposed that utilizes the particle swarm optimization (PSO) algorithm to learn the optimal CNN hyper-parameter values. In the experiments, we used the Modified National Institute of Standards and Technology (MNIST) database of handwritten digits. The experiments showed that our proposed approach can find an architecture that is competitive with state-of-the-art models, with a testing error of 0.87%.
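
The search can be sketched as follows: each particle encodes the number of convolutional layers, the number of kernels per layer, and the kernel size, and positions are rounded to valid integer settings before evaluation. The scoring function here is a cheap synthetic proxy; in the paper each candidate architecture would instead be trained and validated on MNIST.

```python
import numpy as np

# PSO over CNN hyper-parameters: continuous particle positions are decoded to
# integer settings [conv layers, kernels per layer, kernel size] and scored by
# a stand-in proxy function (replace with real training/validation on MNIST).

rng = np.random.default_rng(7)

LB = np.array([1, 8, 3])                     # lower bounds (assumed search space)
UB = np.array([6, 128, 7])                   # upper bounds (assumed search space)

def decode(pos):
    layers, kernels, ksize = np.clip(np.round(pos), LB, UB).astype(int)
    if ksize % 2 == 0:                       # keep kernel sizes odd
        ksize += 1
    return int(layers), int(kernels), int(ksize)

def proxy_error(pos):
    """Stand-in validation error over a synthetic landscape."""
    layers, kernels, ksize = decode(pos)
    capacity = layers * np.log2(kernels)
    return abs(capacity - 20.0) / 20.0 + 0.01 * ksize

def pso(f, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    pos = rng.uniform(LB, UB, (n_particles, 3))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([f(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, 3))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, LB, UB)
        vals = np.array([f(p) for p in pos])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = pos[better], vals[better]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return decode(gbest)

layers, kernels, ksize = pso(proxy_error)
print(f"suggested CNN: {layers} conv layers, {kernels} kernels, {ksize}x{ksize}")
```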

