Parameter Tuning of Hybrid Nature-Inspired Intelligent Metaheuristics for Solving Financial Portfolio Optimization Problems

Author(s):  
Vassilios Vassiliadis ◽  
Georgios Dounias ◽  
Alexandros Tzanetos

2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Hasan Saribas ◽  
Sinem Kahvecioglu

Purpose: This study aims to compare the performance of conventional and fractional-order proportional-integral-derivative (PID and FOPID) controllers tuned with particle swarm optimization (PSO) and a genetic algorithm (GA) for quadrotor control.

Design/methodology/approach: The gains of the controllers were tuned using PSO and GA, which are heuristic optimization methods. The tuning processes of the controller gains were formulated as optimization problems. While generating the objective functions (cost functions), four decision criteria were considered separately: integrated summation error (ISE), integrated absolute error (IAE), integrated time absolute error (ITAE) and integrated time summation error (ITSE).

Findings: According to the simulation results and the comparison tables, FOPID controllers tuned with PSO showed better performance than PID controllers. In addition, the ITSE criterion returned better results in the control of all axes except altitude when compared to the other cost functions. In altitude control with the PID controller, the ISE criterion showed the best performance.

Originality/value: While a conventional PID controller has three parameters (Kp, Ki, Kd) that need to be tuned, FOPID controllers have two additional parameters (λ and µ). These two extra parameters give more flexibility in the controller design but make parameter tuning considerably more complex. This study reveals the potential and effectiveness of PSO and GA in tuning the controllers despite the increased number of parameters and complexity.
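The gain-tuning loop described above can be sketched in a few dozen lines. The sketch below is illustrative only: a toy first-order plant stands in for the quadrotor dynamics, and the search bounds, swarm size, and PSO coefficients (inertia `w`, acceleration `c1`, `c2`) are assumptions, not the paper's settings. It shows how the ISE criterion turns controller gains into a cost that PSO can minimize.

```python
import random

def ise_cost(gains, setpoint=1.0, dt=0.01, steps=500):
    """Integrated squared error of a PID loop on a toy first-order plant
    (dx/dt = -x + u); the plant is a stand-in for the quadrotor dynamics."""
    kp, ki, kd = gains
    x, integ, prev_err, cost = 0.0, 0.0, setpoint, 0.0
    for _ in range(steps):
        err = setpoint - x
        if abs(err) > 1e6:            # diverging closed loop: big penalty
            return 1e12
        integ += err * dt
        deriv = (err - prev_err) / dt
        prev_err = err
        u = kp * err + ki * integ + kd * deriv
        x += (-x + u) * dt            # Euler step of the plant
        cost += err * err * dt        # ISE criterion
    return cost

def pso(cost, lo=(0.0, 0.0, 0.0), hi=(10.0, 10.0, 1.0), n=20, iters=60, seed=0):
    """Minimal particle swarm over the gain vector [Kp, Ki, Kd]."""
    rng = random.Random(seed)
    dim = len(lo)
    pos = [[rng.uniform(lo[d], hi[d]) for d in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pcost = [cost(p) for p in pos]
    g = min(range(n), key=lambda i: pcost[i])
    gbest, gcost = pbest[g][:], pcost[g]
    w, c1, c2 = 0.7, 1.5, 1.5         # illustrative flight parameters
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi[d], max(lo[d], pos[i][d] + vel[i][d]))
            c = cost(pos[i])
            if c < pcost[i]:
                pbest[i], pcost[i] = pos[i][:], c
                if c < gcost:
                    gbest, gcost = pos[i][:], c
    return gbest, gcost
```

Swapping `ise_cost` for an IAE, ITAE, or ITSE variant only changes the accumulated term in the simulation loop; the PSO driver is unchanged, which is what makes comparing the four criteria cheap.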


2020 ◽  
Vol 34 (06) ◽  
pp. 10235-10242
Author(s):  
Mojmir Mutny ◽  
Johannes Kirschner ◽  
Andreas Krause

Bayesian optimization and kernelized bandit algorithms are widely used techniques for sequential black-box function optimization, with applications in parameter tuning, control, and robotics, among many others. To be effective in high-dimensional settings, previous approaches make additional assumptions, for example on low-dimensional subspaces or an additive structure. In this work, we go beyond the additivity assumption and use an orthogonal projection pursuit regression model, which strictly generalizes additive models. We present a two-stage algorithm, motivated by experimental design, that first decorrelates the additive components. Subsequently, the bandit optimization benefits from the statistically efficient additive model. Our method provably decorrelates the fully additive model and achieves optimal sublinear simple regret in terms of the number of function evaluations. To prove the rotation recovery, we derive novel concentration inequalities for linear regression on subspaces. In addition, we specifically address the issue of acquisition function optimization and present two domain-dependent efficient algorithms. We validate the algorithm numerically on synthetic as well as real-world optimization problems.
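The efficiency gain that additivity buys can be seen in a toy sketch (not the paper's method, which additionally learns the decorrelating rotation and uses bandit confidence bounds): if f(x) = Σ_d f_d(x_d), the global maximizer decomposes into D independent one-dimensional searches, so the cost is linear rather than exponential in the dimension.

```python
def optimize_additive(fs, grids):
    """Maximize f(x) = sum_d fs[d](x[d]) over a product grid by D
    independent 1-D searches: len(grid) * D evaluations instead of
    len(grid) ** D for a joint grid search."""
    return [max(grid, key=f) for f, grid in zip(fs, grids)]
```

For example, with the hypothetical components f_1(v) = -(v-1)^2 and f_2(v) = -(v+2)^2, the routine recovers the maximizer (1, -2) from two 1-D scans.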


2021 ◽  
Author(s):  
Leila Zahedi ◽  
Farid Ghareh Mohammadi ◽  
M. Hadi Amini

Machine learning (ML) techniques are promising decision-making and analytic tools in a wide range of applications. Different ML algorithms expose various hyper-parameters, and tailoring an ML model to a specific application requires tuning a large number of them. Tuning the hyper-parameters directly affects performance (accuracy and run-time). However, for large-scale search spaces, efficiently exploring the vast number of hyper-parameter combinations is computationally challenging, and existing automated hyper-parameter tuning techniques suffer from high time complexity. In this paper, we propose HyP-ABC, an innovative hybrid algorithm for automatic hyper-parameter optimization based on a modified artificial bee colony approach, and use it to tune three ML algorithms for classification accuracy: random forest, extreme gradient boosting, and support vector machine. Compared to the state-of-the-art techniques, HyP-ABC is more efficient and has a limited number of parameters to be tuned, making it worthwhile for real-world hyper-parameter optimization problems. We further compare HyP-ABC empirically with these state-of-the-art techniques. To ensure the robustness of the proposed method, the algorithm accepts a wide range of feasible hyper-parameter values and is tested on a real-world educational dataset.
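The artificial bee colony (ABC) loop that HyP-ABC builds on has three phases: employed bees refine known food sources, onlookers reinforce the fitter ones, and scouts abandon stale sources. The sketch below is a generic, unmodified ABC on a continuous search space with a stand-in cost function; in the paper's setting the cost would be, e.g., one minus cross-validated accuracy of the ML model, and HyP-ABC's modifications are not reproduced here.

```python
import random

def abc_search(cost, bounds, n_food=10, limit=5, iters=40, seed=1):
    """Minimal artificial bee colony minimizing `cost` over box `bounds`
    (a list of (lo, hi) pairs, one per hyper-parameter)."""
    rng = random.Random(seed)
    dim = len(bounds)
    rand_food = lambda: [rng.uniform(lo, hi) for lo, hi in bounds]
    foods = [rand_food() for _ in range(n_food)]
    costs = [cost(f) for f in foods]
    trials = [0] * n_food

    def try_update(i):
        # perturb one dimension of source i toward/away from another source
        k = rng.randrange(n_food)
        while k == i:
            k = rng.randrange(n_food)
        d = rng.randrange(dim)
        cand = foods[i][:]
        cand[d] += rng.uniform(-1, 1) * (foods[i][d] - foods[k][d])
        lo, hi = bounds[d]
        cand[d] = min(hi, max(lo, cand[d]))
        c = cost(cand)
        if c < costs[i]:                      # greedy acceptance
            foods[i], costs[i], trials[i] = cand, c, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_food):               # employed bee phase
            try_update(i)
        fit = [1.0 / (1.0 + c) for c in costs]
        total = sum(fit)
        for _ in range(n_food):               # onlooker phase: fitness-
            r, acc = rng.uniform(0, total), 0.0   # proportional selection
            for j, f in enumerate(fit):
                acc += f
                if acc >= r:
                    try_update(j)
                    break
        for i in range(n_food):               # scout phase
            if trials[i] > limit:
                foods[i] = rand_food()
                costs[i], trials[i] = cost(foods[i]), 0

    best = min(range(n_food), key=lambda i: costs[i])
    return foods[best], costs[best]
```

The `limit` parameter (the abandonment threshold) and the colony size are essentially the only knobs, which illustrates the abstract's point that ABC-style tuners themselves have few parameters to tune.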


Author(s):  
Burcu Adıguzel Mercangöz ◽  
Ergun Eroglu

Portfolio optimization is an important research field in the financial sciences. In portfolio optimization problems, the aim is to construct portfolios from an asset pool that give the best return at a certain risk level, or the lowest risk at a certain level of return. Diversifying the portfolio makes it possible to increase return while minimizing risk. As a powerful alternative to mathematical models, heuristics are widely used to solve portfolio optimization problems. The genetic algorithm (GA) is a technique inspired by biological evolution. While this book considers heuristic methods for portfolio optimization problems in general, this chapter presents the implementation steps of the GA clearly and applies the method to a portfolio optimization problem in a basic example.
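The GA steps the chapter walks through (encode, evaluate, select, cross over, mutate) can be sketched for a mean-variance portfolio. Everything below is illustrative: the three assets, their expected returns `MU`, the covariance matrix `COV`, and the risk-aversion weight `LAM` are made-up numbers, and the operators (tournament selection, blend crossover, Gaussian mutation) are one common choice among many, not necessarily the chapter's.

```python
import random

# Toy data: expected returns and covariance for three hypothetical assets
MU = [0.10, 0.07, 0.12]
COV = [[0.040, 0.006, 0.010],
       [0.006, 0.020, 0.004],
       [0.010, 0.004, 0.060]]
LAM = 3.0  # risk-aversion weight: fitness = return - LAM * variance

def fitness(w):
    ret = sum(wi * mi for wi, mi in zip(w, MU))
    var = sum(w[i] * COV[i][j] * w[j] for i in range(3) for j in range(3))
    return ret - LAM * var                   # higher is better

def normalise(w):
    s = sum(w)                               # enforce weights summing to 1
    return [wi / s for wi in w]

def ga_portfolio(pop_size=30, gens=80, pm=0.2, seed=2):
    rng = random.Random(seed)
    pop = [normalise([rng.random() + 1e-9 for _ in range(3)])
           for _ in range(pop_size)]
    for _ in range(gens):
        nxt = []
        while len(nxt) < pop_size:
            # tournament selection of two parents
            a = max(rng.sample(pop, 3), key=fitness)
            b = max(rng.sample(pop, 3), key=fitness)
            alpha = rng.random()             # blend crossover
            child = [alpha * x + (1 - alpha) * y for x, y in zip(a, b)]
            if rng.random() < pm:            # mutation: nudge one weight
                i = rng.randrange(3)
                child[i] = max(1e-9, child[i] + rng.gauss(0, 0.1))
            nxt.append(normalise(child))     # repair: renormalize weights
        pop = nxt
    return max(pop, key=fitness)
```

Renormalizing after crossover and mutation is the simplest way to keep every chromosome a valid long-only portfolio; more elaborate constraint-handling schemes (penalties, repair operators) are common in the literature.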


2021 ◽  
pp. 231-240
Author(s):  
Mansi Gupta ◽  
Sonali Semwal ◽  
Shivani Bali

2012 ◽  
Vol 3 (1) ◽  
pp. 1-29 ◽  
Author(s):  
Ashwin A. Kadkol ◽  
Gary G. Yen

Real-world optimization problems are often dynamic and multi-objective in nature, with various constraints and uncertainties. This work proposes solving such problems by systematic segmentation via heuristic information accumulated through Cultural Algorithms. The problem is tackled by maintaining 1) feasible and infeasible best solutions, with their fitness and constraint violations, in the Situational Space; 2) objective-space bounds for the search in the Normative Space; 3) objective-space crowding information in the Topographic Space; and 4) function sensitivity and relocation offsets (to reuse available information on optima when the environment changes) in the Historical Space of a cultural framework. This information is used to vary the flight parameters of the Particle Swarm Optimization, to generate new individuals, and to better track dynamic and multiple optima under constraints. The proposed algorithm is validated on three numerical optimization problems. As a practical, computationally intensive and complex case study, parameter tuning of a PID (proportional-integral-derivative) controller for plants with time-varying transfer functions, subject to robust optimization criteria, is used to demonstrate the effectiveness and efficiency of the proposed design.
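The core idea of a cultural framework (a belief space that accumulates knowledge from good individuals and biases the next generation) can be sketched with just the normative component. This is a deliberately reduced illustration: the paper's framework also maintains situational, topographic, and historical knowledge and couples the belief space to PSO flight parameters, none of which is reproduced here, and the acceptance fraction and sampling bias are made-up settings.

```python
import random

def cultural_search(cost, bounds, pop=20, gens=50, accept=5, seed=3):
    """Toy cultural-algorithm loop minimizing `cost` over box `bounds`.
    A normative belief space stores the per-variable ranges of the best
    `accept` individuals and biases where new individuals are sampled."""
    rng = random.Random(seed)
    dim = len(bounds)
    norm = [list(b) for b in bounds]         # normative knowledge: ranges
    popl = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    best, bcost = None, float("inf")
    for _ in range(gens):
        scored = sorted(popl, key=cost)
        if cost(scored[0]) < bcost:
            best, bcost = scored[0][:], cost(scored[0])
        elite = scored[:accept]              # acceptance function
        for d in range(dim):                 # update normative ranges
            vals = [e[d] for e in elite]
            norm[d] = [min(vals), max(vals)]
        # influence function: 80% of new samples drawn inside the
        # normative ranges, 20% from the full bounds for exploration
        popl = [[rng.uniform(*norm[d]) if rng.random() < 0.8
                 else rng.uniform(*bounds[d]) for d in range(dim)]
                for _ in range(pop)]
    return best, bcost
```

Even this stripped-down version shows the mechanism the abstract relies on: knowledge distilled from accepted individuals steers the population, rather than each individual searching independently.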

