Metaheuristic Optimization of Constrained Large Portfolios using Hybrid Particle Swarm Optimization

2017 ◽  
Vol 8 (1) ◽  
pp. 1-23 ◽  
Author(s):  
G. A. Vijayalakshmi Pai ◽  
Thierry Michel

Classical Particle Swarm Optimization (PSO), despite its noteworthy track record on the complex constrained portfolio optimization problem in finance, suffers from becoming trapped in local optima, yielding inferior solutions and unrealistic diversification time estimates even on medium-sized portfolio sets. In this work the authors solve the problem with a hybrid PSO strategy: the global best particle position found by the hybrid PSO serves as the initial point for a Sequential Quadratic Programming (SQP) algorithm, which efficiently obtains the optimal solution even for large portfolio sets. Experimental results of the hybrid PSO-SQP model are demonstrated on Bombay Stock Exchange, India (BSE200 index, period July 2001-July 2006) and Tokyo Stock Exchange, Japan (Nikkei225 index, period March 2002-March 2007) data sets, and compared with those obtained by Evolutionary Strategy, which belongs to a different genre of metaheuristics.
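The two-stage handoff described above can be sketched as follows. This is a minimal illustration on made-up mean/covariance data (not the paper's BSE200 or Nikkei225 data and not its exact formulation), using a plain PSO stage and SciPy's SLSQP routine as the SQP solver; the risk-aversion weight `lam` and all parameter values are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy inputs (hypothetical): expected returns mu and a positive-definite
# covariance sigma for 5 assets.
mu = np.array([0.10, 0.12, 0.08, 0.11, 0.09])
A = rng.normal(size=(5, 5))
sigma = A @ A.T / 5 + np.eye(5) * 0.01

lam = 3.0  # risk-aversion weight (illustrative)

def objective(w):
    # Mean-variance objective in minimisation form: risk minus return.
    return lam * w @ sigma @ w - mu @ w

def pso_stage(n_particles=30, iters=200):
    """Stage 1: plain PSO over the simplex (weights renormalised each step)."""
    dim = len(mu)
    x = rng.random((n_particles, dim)); x /= x.sum(1, keepdims=True)
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([objective(w) for w in x])
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, 1e-9, 1); x /= x.sum(1, keepdims=True)
        f = np.array([objective(w) for w in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[pbest_f.argmin()].copy()
    return g

# Stage 2: the PSO global best seeds SQP (SLSQP here), which enforces the
# budget constraint sum(w) = 1 and the long-only bounds exactly.
w0 = pso_stage()
res = minimize(objective, w0, method="SLSQP",
               bounds=[(0, 1)] * len(mu),
               constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1}])
w_opt = res.x
```

The hybrid's appeal is precisely this division of labour: the stochastic stage supplies a good basin, and the gradient-based SQP stage converges quickly and precisely inside it.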


2019 ◽  
Vol 10 (1) ◽  
pp. 107-131
Author(s):  
Anuj Chandila ◽  
Shailesh Tiwari ◽  
K. K. Mishra ◽  
Akash Punhani

This article describes optimization as the process of finding the best among all available solutions to a problem. Many randomized algorithms have been designed to identify optimal solutions in optimization problems; among these, evolutionary programming, evolution strategies, genetic algorithms, particle swarm optimization and genetic programming are widely accepted. Although many randomized algorithms are available in the literature, their design objectives are the same: each is designed to meet goals such as minimizing the total number of fitness evaluations needed to capture nearly optimal solutions, capturing diverse optima in multimodal problems when needed, and avoiding local optima in multimodal problems. This article discusses a novel optimization algorithm named the Environmental Adaption Method (EAM) for solving optimization problems. EAM is designed to reduce the overall processing time for retrieving the optimal solution, to improve solution quality, and in particular to avoid being trapped in local optima. The results of the proposed algorithm are compared with the latest versions of existing algorithms, particle swarm optimization (PSO-TVAC) and differential evolution (SaDE), on benchmark functions, and the proposed algorithm proves its effectiveness over the existing algorithms in all the cases considered.
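The abstract does not detail EAM's internal update rule, but it names PSO-TVAC as a baseline, and that variant is well defined: the cognitive coefficient c1 decays while the social coefficient c2 grows, shifting the swarm from exploration to exploitation. A minimal sketch on a standard benchmark function (all parameter values are common defaults, not taken from this paper):

```python
import numpy as np

rng = np.random.default_rng(1)

def sphere(x):
    # A typical benchmark function from such comparison suites.
    return float(np.sum(x ** 2))

def pso_tvac(f, dim=10, n=20, iters=300, lo=-5.0, hi=5.0):
    """PSO with time-varying acceleration coefficients: c1 shrinks
    2.5 -> 0.5 while c2 grows 0.5 -> 2.5 over the run."""
    x = rng.uniform(lo, hi, (n, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()
    for t in range(iters):
        frac = t / iters
        c1 = 2.5 - 2.0 * frac          # cognitive pull: large early
        c2 = 0.5 + 2.0 * frac          # social pull: large late
        w = 0.9 - 0.5 * frac           # inertia also decays linearly
        r1, r2 = rng.random((2, n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        g = pbest[pbest_f.argmin()].copy()
    return g, float(pbest_f.min())

best_x, best_f = pso_tvac(sphere)
```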


2021 ◽  
Vol 11 (20) ◽  
pp. 9772
Author(s):  
Xueli Shen ◽  
Daniel C. Ihenacho

Particle swarm optimization is a nature-inspired method of searching for an optimal solution. Differential evolution (DE) is a simple but effective evolutionary algorithm for global optimization, since it has demonstrated strong convergence qualities and is relatively straightforward to understand. The primary concern of design engineers is that the traditional gas-cyclone design process relies on complex mathematical formulas and a sensitivity approach to obtain the relevant optimal design parameters. The motivation of this research is to simplify those complex mathematical models and the sensitivity approach through a single objective function of the minimization type. The process uses the initial population generated by the DE algorithm, with the fitness value serving as DE's stopping criterion: when the fitness value is no longer less than the current global best, the DE population is taken over by PSO. New velocities and positions are then updated in every generation until the optimal solution is reached. Compared with using PSO independently, the hybridised particle swarm optimization method produced better results for the design of an optimal gas cyclone, with an overall efficiency of 0.70 and a low cost rate of 230 cost/second.
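The DE-then-PSO handoff can be sketched as below. The gas-cyclone objective itself is not reproduced in the abstract, so a simple stand-in function is used; population sizes, F, CR and the fixed-iteration handoff are all illustrative assumptions rather than the paper's exact switching rule.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in objective; the paper minimises a gas-cyclone design model
# whose formula the abstract does not give.
def f(x):
    return float(np.sum((x - 1.0) ** 2))

def de_stage(pop, iters=40, F=0.5, CR=0.9):
    """Stage 1: classic DE/rand/1/bin breeds the initial population."""
    n, dim = pop.shape
    fit = np.array([f(p) for p in pop])
    for _ in range(iters):
        for i in range(n):
            idx = rng.choice([j for j in range(n) if j != i], 3, replace=False)
            a, b, c = pop[idx]
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True      # force one mutated gene
            trial = np.where(cross, a + F * (b - c), pop[i])
            ft = f(trial)
            if ft < fit[i]:
                pop[i], fit[i] = trial, ft
    return pop

def pso_stage(pop, iters=100):
    """Stage 2: PSO takes over the DE population and refines it."""
    n, dim = pop.shape
    v = np.zeros_like(pop)
    pbest, pbest_f = pop.copy(), np.array([f(p) for p in pop])
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - pop) + 1.5 * r2 * (g - pop)
        pop = pop + v
        fx = np.array([f(p) for p in pop])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = pop[better], fx[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, float(pbest_f.min())

pop = rng.uniform(-5, 5, (20, 4))
best_x, best_f = pso_stage(de_stage(pop))
```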


MATEMATIKA ◽  
2018 ◽  
Vol 34 (1) ◽  
pp. 125-141 ◽  
Author(s):  
Kashif Bin Zaheer ◽  
Mohd Ismail Bin Abd Aziz ◽  
Amber Nehan Kashif ◽  
Syed Muhammad Murshid Raza

Selection criteria play an important role in portfolio optimization with any ratio model. In this paper, the authors consider the mean return (profit) and the variance of return (risk) of each asset as the selection criteria, as the first stage of optimizing the selected portfolio. The Sharpe ratio (SR) is taken as the optimization ratio model. In this regard, historical data from the Shanghai Stock Exchange (SSE) are used. A metaheuristic technique has been developed combining the Financial Toolbox available in MATLAB with the particle swarm optimization (PSO) algorithm, hence called hybrid particle swarm optimization (HPSO), or alternatively financial-toolbox particle swarm optimization (FTB-PSO). The model takes the budget as a constraint, and two variants, with and without short selling, are considered. The obtained results are compared with the existing literature, and the proposed technique is found to be optimal and better in terms of profit.
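A Sharpe-ratio objective under a budget constraint, as used here, can be sketched with a plain PSO (standing in for the MATLAB toolbox hybrid). The return data below are synthetic placeholders for the SSE series, the risk-free rate is assumed zero, and the long-only variant (no short sale) is shown; the budget constraint is enforced by renormalising each particle.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic daily returns for 4 assets (placeholder for the SSE data).
R = rng.normal(0.0005, 0.01, (250, 4)) + np.array([0.0, 0.0002, 0.0004, 0.0006])
mu, cov = R.mean(0), np.cov(R.T)
rf = 0.0  # risk-free rate assumed zero for this sketch

def sharpe(w):
    # Sharpe ratio: excess return per unit of portfolio volatility.
    return float((w @ mu - rf) / np.sqrt(w @ cov @ w))

def pso_sharpe(n=30, iters=300):
    """Maximise the Sharpe ratio over long-only weights summing to 1."""
    dim = len(mu)
    x = rng.random((n, dim)); x /= x.sum(1, keepdims=True)
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([sharpe(w) for w in x])
    g = pbest[pbest_f.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, 1e-9, 1); x /= x.sum(1, keepdims=True)
        fx = np.array([sharpe(w) for w in x])
        better = fx > pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[pbest_f.argmax()].copy()
    return g

w = pso_sharpe()
```

The with-short-sale variant would simply widen the weight bounds to allow negative entries while keeping the budget normalisation.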


2011 ◽  
Vol 383-390 ◽  
pp. 7208-7213
Author(s):  
De Kun Tan

To overcome the tendency of standard Particle Swarm Optimization (SPSO) toward premature convergence, Quantum-behaved Particle Swarm Optimization (QPSO) is presented for solving engineering constrained optimization problems. QPSO is a novel PSO model formulated in terms of quantum mechanics: it is based on the delta potential well, and the particle is assumed to behave like a quantum particle. Because the particle does not follow a determinate trajectory, it has more randomness than a particle on a fixed path in PSO; thus QPSO escapes local optima more easily and has a greater capability to find the global optimum. During iterative optimization, an exterior-point method is used to handle particles that violate the constraints. Furthermore, compared with other intelligent algorithms, QPSO is verified on two instances of engineering constrained optimization; experimental results indicate that the algorithm performs better in terms of accuracy and robustness.
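The standard QPSO sampling rule (delta-potential-well model) replaces the velocity update entirely: each particle is drawn around a stochastic attractor between its personal best and the global best, at a distance scaled by the mean-best position. A minimal sketch, with an illustrative exterior-point penalty standing in for the paper's constraint handling (the engineering test problems themselves are not given in the abstract):

```python
import numpy as np

rng = np.random.default_rng(4)

def sphere(x):
    return float(np.sum(x ** 2))

def penalized(x):
    # Exterior-point penalty for an illustrative constraint x[0] >= 1:
    # infeasible points pay a quadratic violation cost.
    return sphere(x) + 1000.0 * max(0.0, 1.0 - x[0]) ** 2

def qpso(f, dim=5, n=20, iters=400):
    """Quantum-behaved PSO: positions are sampled, not integrated,
    so particles have no fixed trajectory."""
    x = rng.uniform(-5, 5, (n, dim))
    pbest = x.copy()
    pbest_f = np.array([f(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()
    for t in range(iters):
        alpha = 1.0 - 0.5 * t / iters           # contraction-expansion coeff.
        mbest = pbest.mean(axis=0)              # mean of personal bests
        phi = rng.random((n, dim))
        attractor = phi * pbest + (1 - phi) * g # stochastic local attractor
        u = np.clip(rng.random((n, dim)), 1e-12, None)
        sign = np.where(rng.random((n, dim)) < 0.5, 1.0, -1.0)
        x = attractor + sign * alpha * np.abs(mbest - x) * np.log(1.0 / u)
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, float(pbest_f.min())

best_x, best_f = qpso(penalized)
```

The heavy-tailed log term is what gives QPSO occasional long jumps, and hence its better escape behaviour compared with trajectory-bound SPSO.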


2014 ◽  
Vol 926-930 ◽  
pp. 3338-3341
Author(s):  
Hong Mei Ni ◽  
Zhian Yi ◽  
Jin Yue Liu

Chaos is a non-linear phenomenon that exists widely in nature. Owing to its ease of implementation and its special ability to avoid entrapment in local optima, chaos has become a novel optimization technique, and chaos-based search algorithms have aroused intense interest. Many real-world optimization problems are dynamic, with global and local optima that change over time. Particle swarm optimization performs well at finding and tracking optima in static environments, but when the PSO algorithm is applied to dynamic multi-objective problems, issues arise such as premature convergence and a slow convergence rate. To solve these problems, a hybrid PSO algorithm based on a chaos algorithm is put forward. The hybrid algorithm retains PSO's efficient parallelism while the chaos component increases the diversity of the population. Simulation results show that the new algorithm is superior to traditional PSO, with stronger adaptability and convergence, and solves the moving peaks benchmark better.
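A common way to hybridise chaos with PSO, consistent with the description above though not necessarily this paper's exact scheme, is a logistic-map chaotic local search around the global best: the sequence z ← 4z(1-z) generates non-repeating perturbations that keep injecting diversity. A sketch on a static multimodal function (the moving-peaks setup is omitted for brevity):

```python
import numpy as np

rng = np.random.default_rng(5)

def rastrigin(x):
    # Multimodal benchmark with many local optima.
    return float(10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x)))

def chaos_pso(f, dim=5, n=20, iters=200, lo=-5.12, hi=5.12):
    """PSO plus a logistic-map chaotic search around the global best."""
    x = rng.uniform(lo, hi, (n, dim)); v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g = pbest[pbest_f.argmin()].copy(); g_f = float(pbest_f.min())
    z = rng.uniform(0.1, 0.9, dim)          # chaotic state (avoid fixed points)
    for t in range(iters):
        r1, r2 = rng.random((2, n, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        if pbest_f.min() < g_f:
            g, g_f = pbest[pbest_f.argmin()].copy(), float(pbest_f.min())
        # Chaotic local search in a shrinking neighbourhood of gbest.
        z = 4.0 * z * (1.0 - z)
        radius = (hi - lo) * (1.0 - t / iters) * 0.1
        cand = np.clip(g + radius * (2 * z - 1), lo, hi)
        if f(cand) < g_f:
            g, g_f = cand, float(f(cand))
    return g, g_f

best_x, best_f = chaos_pso(rastrigin)
```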


2011 ◽  
Vol 268-270 ◽  
pp. 1188-1193 ◽  
Author(s):  
Zuo Yong Li ◽  
Chun Xue Yu ◽  
Zheng Jian Zhang

In order to avoid premature convergence and improve the precision of solutions from the basic shuffled frog leaping algorithm (SFLA), a new shuffled frog leaping algorithm based on immune evolutionary particle swarm optimization is proposed. The proposed algorithm integrates the global search mechanism of particle swarm optimization (PSO) into SFLA, so as to search thoroughly in the neighbourhood of the worst solution, and also integrates an immune evolutionary algorithm into SFLA, applying immune evolutionary iteration to the best solution in each sub-swarm so as to exploit its information fully. The algorithm not only avoids being trapped in local optima but also approaches the global optimum with higher precision. Calculation results show that the immune evolutionary particle swarm shuffled frog leaping algorithm (IEPSOSFLA) has better search ability and stability than the basic SFLA.
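A loose sketch of the SFLA skeleton with the PSO-flavoured worst-frog update described above: the worst frog in each memeplex jumps toward both its memeplex best and the global best (the added social term), rather than toward the memeplex best alone. The immune-evolution step on sub-swarm bests is omitted for brevity, and the partitioning and parameters are generic assumptions, not the paper's exact settings.

```python
import numpy as np

rng = np.random.default_rng(6)

def f(x):
    return float(np.sum(x ** 2))

def sfla_pso(dim=5, frogs=30, memeplexes=3, iters=60, lo=-5.0, hi=5.0):
    """SFLA with a PSO-style worst-frog jump (illustrative hybrid)."""
    pop = rng.uniform(lo, hi, (frogs, dim))
    for _ in range(iters):
        fit = np.array([f(p) for p in pop])
        order = fit.argsort()
        pop, fit = pop[order], fit[order]            # best frog first
        g = pop[0].copy()                            # global best
        for m in range(memeplexes):
            idx = np.arange(m, frogs, memeplexes)    # shuffled partition
            worst, best = idx[-1], idx[0]
            r1, r2 = rng.random(dim), rng.random(dim)
            # Jump pulled by memeplex best AND global best (PSO social term).
            step = r1 * (pop[best] - pop[worst]) + r2 * (g - pop[worst])
            cand = np.clip(pop[worst] + step, lo, hi)
            if f(cand) < fit[worst]:
                pop[worst] = cand
            else:                                    # censor: random restart
                pop[worst] = rng.uniform(lo, hi, dim)
    fit = np.array([f(p) for p in pop])
    return pop[fit.argmin()], float(fit.min())

best_x, best_f = sfla_pso()
```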


2018 ◽  
Vol 14 (06) ◽  
pp. 203
Author(s):  
Dashe Li ◽  
Dapeng Cheng ◽  
Jihong Qin ◽  
Shue Liu ◽  
Pingping Liu

The Internet of Things (IoT) has found broad applications and has drawn increasing attention from researchers. At the same time, IoT presents many challenges, one of which is node localization, i.e. determining the geographical position of each sensor node. Various algorithms have been proposed for this problem. Particle Swarm Optimization (PSO) is a popular choice because it is simple to implement and requires relatively little computation. However, PSO is easily trapped in local optima and gives premature results. To improve on it, this paper proposes the EHPSO algorithm, based on Novel Particle Swarm Optimization (NPSO) and Hybrid Particle Swarm Optimization (HPSO): EHPSO applies the best-neighbour principle of each particle to the HPSO algorithm. Simulation results indicate that EHPSO outperforms HPSO and NPSO in estimating accurate node positions and improves convergence by avoiding entrapment in local optima.
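The localization problem the swarm solves is typically cast as minimising the squared mismatch between measured anchor-to-node ranges and the ranges implied by a candidate position. A sketch with a plain PSO standing in for EHPSO, on a hypothetical 2-D deployment (the anchor layout, noise level and all parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical deployment: 4 anchors with known positions and noisy
# range measurements to one unknown node at (3, 4).
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_pos = np.array([3.0, 4.0])
ranges = np.linalg.norm(anchors - true_pos, axis=1) + rng.normal(0, 0.05, 4)

def loc_error(p):
    """Ranging residual the swarm minimises."""
    return float(np.sum((np.linalg.norm(anchors - p, axis=1) - ranges) ** 2))

def pso_localize(n=20, iters=150):
    x = rng.uniform(0, 10, (n, 2)); v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([loc_error(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n, 2))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, 0, 10)
        fx = np.array([loc_error(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[pbest_f.argmin()].copy()
    return g

est = pso_localize()
```

EHPSO's contribution, per the abstract, is in how each particle exploits its best neighbour during this search, not in the objective itself.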


Author(s):  
Bo Wei ◽  
Ying Xing ◽  
Xuewen Xia ◽  
Ling Gui

To address problems of particle swarm optimization such as premature convergence and easily falling into sub-optimal solutions, we introduce a probability-based initialization strategy and genetic operators into the particle swarm optimization algorithm. Based on these hybrid strategies, we propose an improved hybrid particle swarm optimization, IHPSO, for solving the traveling salesman problem. In the IHPSO algorithm, the probability strategy is used for population initialization, which saves considerable computing resources during the iterative procedure. Furthermore, genetic operators, including two kinds of crossover operator and a directional mutation operator, are used to improve the algorithm's convergence accuracy and population diversity. Finally, the proposed method is benchmarked on 9 problems from TSPLIB and compared with 4 competitors; the results show that the proposed approach significantly outperforms the others on most of the 9 datasets.
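The abstract does not specify which crossover and mutation operators IHPSO uses, so the sketch below illustrates the general idea with standard permutation operators, order crossover (OX) plus swap mutation, inside a minimal selection loop on a made-up 6-city instance (not a TSPLIB problem). This shows how genetic operators slot into tour-based search; it is not the paper's algorithm.

```python
import random

random.seed(8)

# Toy 6-city instance (coordinates are illustrative).
cities = [(0, 0), (1, 5), (2, 3), (5, 2), (6, 6), (8, 1)]

def tour_len(t):
    """Total length of the closed tour t (a permutation of city indices)."""
    return sum(((cities[t[i]][0] - cities[t[(i + 1) % len(t)]][0]) ** 2 +
                (cities[t[i]][1] - cities[t[(i + 1) % len(t)]][1]) ** 2) ** 0.5
               for i in range(len(t)))

def order_crossover(p1, p2):
    """OX: copy a random slice from p1, fill the rest in p2's order."""
    n = len(p1)
    a, b = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[a:b] = p1[a:b]
    rest = [c for c in p2 if c not in child]
    for i in range(n):
        if child[i] is None:
            child[i] = rest.pop(0)
    return child

def swap_mutation(t):
    """Exchange two cities (simple stand-in for a directional mutation)."""
    i, j = random.sample(range(len(t)), 2)
    t = t[:]; t[i], t[j] = t[j], t[i]
    return t

# Minimal evolutionary loop applying the two operators.
pop = [random.sample(range(6), 6) for _ in range(20)]
for _ in range(100):
    pop.sort(key=tour_len)
    parents = pop[:10]
    children = [swap_mutation(order_crossover(random.choice(parents),
                                              random.choice(parents)))
                for _ in range(10)]
    pop = parents + children
best = min(pop, key=tour_len)
```

OX is chosen here because it always produces a valid permutation, which is the core requirement for any crossover used on TSP tours.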

