A Meta-learning Prediction Model of Algorithm Performance for Continuous Optimization Problems

Author(s):  
Mario A. Muñoz ◽  
Michael Kirley ◽  
Saman K. Halgamuge


2015 ◽  
Vol 6 (3) ◽  
pp. 1-37 ◽  
Author(s):  
Seyed Mohammad Ashrafi ◽  
Noushin Emami Kourabbaslou

This study introduces an efficient adaptive version of the Melody Search algorithm (EAMS), a powerful tool for solving optimization problems in continuous domains. The Melody Search (MS) algorithm is a recently improved variant of harmony search (HS), but its performance depends strongly on fine-tuning its parameters. Although MS is more efficient at solving continuous optimization problems than most other HS-based algorithms, its large number of parameters makes it difficult to use. Hence, the main objectives of this study are to reduce the number of algorithm parameters and to improve the algorithm's efficiency. To achieve this, a novel improvisation scheme is introduced to generate new solutions, a procedure is developed to determine the feasible variable ranges at different iterations, and an adaptive strategy is employed to compute appropriate parameter values and choose suitable memory consideration rules during the evolution process. Extensive computational comparisons are carried out on a set of eighteen well-known benchmark optimization problems with various characteristics from the literature. The results reveal that the EAMS algorithm can achieve better solutions than several other HS variants, the basic MS algorithm, and, in certain cases, well-known robust optimization algorithms.
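For readers unfamiliar with the HS family, the improvisation step that MS and EAMS build on can be sketched as below. This is a minimal illustration of the *basic* harmony search improvisation (memory consideration, pitch adjustment, random selection), not the adaptive EAMS scheme; the parameter names `hmcr`, `par`, and `bw` follow standard HS usage and are assumptions here.

```python
import random

def improvise(memory, lo, hi, hmcr=0.9, par=0.3, bw=0.05):
    """One basic HS-style improvisation step (illustrative sketch only).

    memory: list of stored solutions (lists of floats) in [lo, hi].
    hmcr:   harmony memory considering rate.
    par:    pitch adjusting rate.
    bw:     bandwidth of the pitch adjustment, as a fraction of the range.
    """
    dim = len(memory[0])
    new = []
    for d in range(dim):
        if random.random() < hmcr:
            # Memory consideration: copy this variable from a stored solution.
            value = random.choice(memory)[d]
            if random.random() < par:
                # Pitch adjustment: small random perturbation of the copied value.
                value += random.uniform(-bw, bw) * (hi - lo)
        else:
            # Random selection: draw the variable anew from the allowed range.
            value = random.uniform(lo, hi)
        new.append(min(hi, max(lo, value)))  # clamp to the feasible range
    return new
```

EAMS replaces the fixed `hmcr`/`par`/`bw` values with adaptively computed ones and tightens the variable ranges across iterations, which is precisely what removes the tuning burden the abstract describes.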


2015 ◽  
Vol 137 (7) ◽  
Author(s):  
Jong-Chen Chen

Continuous optimization plays an increasingly significant role in everyday decision-making. Our group previously developed a multilevel system called the artificial neuromolecular system (ANM), whose structural richness allows variation and/or selection operators to act on it to generate a broad range of dynamic behaviors. In this paper, we used the ANM system to control the motions of a wooden walking robot named Miky, and used the robot to investigate the ANM system's capability to deal with continuous optimization problems through self-organized learning. An evolutionary learning algorithm was used to train the system and generate appropriate control. The experimental results showed that Miky was capable of learning in a continual manner in a physical environment. A further experiment was conducted in which Miky's physical structure was altered in order to observe the system's capability to deal with the change. Detailed analysis of the results showed that Miky responded to the change by appropriately adjusting its leg movements in space and time, demonstrating that the ANM system possesses continuous optimization capability in coping with such changes. These empirical findings may offer another perspective on how to design intelligent systems that are friendlier than traditional ones in assisting humans to walk.
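The evolutionary learning loop mentioned above, in its simplest form, mutates a candidate controller and keeps the mutant only if it performs better. The sketch below shows a generic (1+1)-style loop under that assumption; it is not the ANM system's actual training procedure, and `fitness`, `sigma`, and the genome encoding are all illustrative choices.

```python
import random

def evolve(fitness, dim, iters=300, sigma=0.1, seed=0):
    """Minimal (1+1) evolutionary loop: mutate, evaluate, keep if better.

    fitness: callable scoring a genome (higher is better).
    dim:     number of real-valued genes (e.g., control parameters).
    """
    rng = random.Random(seed)
    best = [rng.uniform(-1.0, 1.0) for _ in range(dim)]
    best_fit = fitness(best)
    for _ in range(iters):
        # Variation: Gaussian perturbation of every gene.
        cand = [g + rng.gauss(0.0, sigma) for g in best]
        f = fitness(cand)
        # Selection: retain the candidate only if it scores at least as well.
        if f > best_fit:
            best, best_fit = cand, f
    return best, best_fit
```

In a robot-learning setting like Miky's, `fitness` would be measured on the physical system (e.g., distance walked), which is what makes the learning continual rather than a one-shot optimization.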


2020 ◽  
Vol 34 (05) ◽  
pp. 7111-7118
Author(s):  
Moumita Choudhury ◽  
Saaduddin Mahmud ◽  
Md. Mosaddek Khan

Distributed Constraint Optimization Problems (DCOPs) are a widely studied constraint-handling framework. The objective of a DCOP algorithm is to optimize a global objective function that can be described as the aggregation of several distributed constraint cost functions. In a DCOP, each of these functions is defined over a set of discrete variables. However, in many applications, such as target tracking or sleep scheduling in sensor networks, continuous-valued variables are better suited than discrete ones. Considering this, Functional DCOPs (F-DCOPs) have been proposed to explicitly model problems containing continuous variables. Nevertheless, state-of-the-art F-DCOP approaches incur onerous memory or computation overhead. To address this issue, we propose a new F-DCOP algorithm, namely Particle Swarm based F-DCOP (PFD), inspired by the meta-heuristic Particle Swarm Optimization (PSO). Although PSO has been successfully applied to many continuous optimization problems, its potential has not been utilized in F-DCOPs. Specifically, PFD devises a distributed method of solution construction while significantly reducing the computation and memory requirements. Moreover, we theoretically prove that PFD is an anytime algorithm. Finally, our empirical results indicate that PFD outperforms the state-of-the-art approaches in terms of both solution quality and computation overhead.
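The PSO meta-heuristic that PFD adapts can be summarized in a few lines: each particle tracks a velocity pulled toward its personal best and the swarm's global best. The sketch below is the *centralized* textbook PSO, not the distributed PFD variant; the inertia and acceleration coefficients `w`, `c1`, `c2` are conventional defaults chosen for illustration.

```python
import random

def pso(f, lo, hi, dim, n=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Basic (centralized) PSO minimizing f over [lo, hi]^dim."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]            # each particle's best position
    pbest_val = [f(p) for p in pos]
    g = min(range(n), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Velocity update: inertia + cognitive pull + social pull.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

PFD's contribution is decentralizing this loop across agents so that no single node holds the whole swarm state, which is where the memory and computation savings come from.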


Symmetry ◽  
2018 ◽  
Vol 10 (8) ◽  
pp. 337 ◽  
Author(s):  
Chui-Yu Chiu ◽  
Po-Chou Shih ◽  
Xuechao Li

The novel global harmony search (NGHS) algorithm, proposed in 2010, is an improved algorithm that combines harmony search (HS), particle swarm optimization (PSO), and a genetic algorithm (GA). However, the NGHS algorithm uses a fixed mutation probability, even though appropriate parameter settings can enhance the searching ability of a metaheuristic algorithm, as many studies have shown. Inspired by the adjustment strategy of the improved harmony search (IHS) algorithm, this paper introduces a dynamic adjusting novel global harmony search (DANGHS) algorithm, which combines NGHS with dynamic adjustment strategies for the genetic mutation probability. Extensive computational experiments and comparisons are carried out on 14 benchmark continuous optimization problems. The results show that the proposed DANGHS algorithm performs better than other HS algorithms on most problems and is more efficient than previous methods. In addition, different strategies suit different situations. The most interesting of these is the periodic dynamic adjustment strategy: for a specific problem, it can outperform monotonically decreasing or increasing strategies. These results motivate further investigation of periodic dynamic adjustment strategies in future experiments.
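The decreasing, increasing, and periodic schedules compared above can be written as simple functions of the current generation. The sketch below illustrates the three shapes; the bounds `pm_min`/`pm_max`, the cosine form of the periodic schedule, and the `cycles` parameter are assumptions for illustration, not the exact parameterization used in DANGHS.

```python
import math

def mutation_probability(t, t_max, pm_min=0.01, pm_max=0.2,
                         strategy="periodic", cycles=4):
    """Illustrative generation-dependent mutation probability schedules.

    t / t_max: current and maximum generation.
    Returns a probability in [pm_min, pm_max].
    """
    frac = t / t_max
    if strategy == "decreasing":
        # Start exploratory (high mutation), end exploitative (low mutation).
        return pm_max - (pm_max - pm_min) * frac
    if strategy == "increasing":
        return pm_min + (pm_max - pm_min) * frac
    # Periodic: oscillate between pm_max and pm_min over several cycles,
    # repeatedly re-injecting diversity during the run.
    return pm_min + (pm_max - pm_min) * 0.5 * (1 + math.cos(2 * math.pi * cycles * frac))
```

The periodic shape is what makes the strategy interesting: unlike a monotone schedule, it lets the search alternate between exploration and exploitation phases throughout the run.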

