Hydrological Cycle Algorithm for Continuous Optimization Problems

2017, Vol 2017, pp. 1-25
Author(s): Ahmad Wedyan, Jacqueline Whalley, Ajit Narayanan

A new nature-inspired optimization algorithm called the Hydrological Cycle Algorithm (HCA) is proposed based on the continuous movement of water in nature. In the HCA, a collection of water drops passes through the stages of the hydrological water cycle, such as flow, evaporation, condensation, and precipitation. Each stage plays an important role in generating solutions and avoiding premature convergence. The HCA shares information through direct and indirect communication among the water drops, which improves solution quality. Similarities and differences between the HCA and other water-based algorithms are identified, and the implications of these differences for overall performance are discussed. A new topological representation for problems with a continuous domain is proposed. In proof-of-concept experiments, the HCA is applied to a variety of benchmark continuous numerical functions. The results are competitive with those of a number of other algorithms and validate the effectiveness of the HCA. The ability of the HCA to escape local optima and converge to global solutions is also demonstrated. Thus, the HCA provides an alternative approach to tackling various types of multimodal continuous optimization problems, as well as an overall framework for water-based particle algorithms in general.
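The abstract only names the four stages; as a rough illustration of how such a water-cycle-style search might behave on a continuous benchmark, the following Python sketch uses simple stand-in operators for flow, condensation, and precipitation. These operators and all parameter values are assumptions, not the operators defined in the paper.

```python
import random

def sphere(x):
    """Benchmark objective: sum of squares (minimum 0 at the origin)."""
    return sum(v * v for v in x)

def hca_sketch(f, dim=5, n_drops=20, iters=200, lo=-5.0, hi=5.0):
    """Simplified, illustrative water-cycle-style minimizer (not the paper's HCA)."""
    drops = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_drops)]
    best = min(drops, key=f)[:]                     # independent copy of the best drop
    for _ in range(iters):
        # Flow: each drop moves "downhill" toward the current best solution.
        for d in drops:
            for j in range(dim):
                d[j] += random.uniform(0.0, 1.0) * (best[j] - d[j])
        # Evaporation/condensation: two random drops share information by merging.
        a, b = random.sample(range(n_drops), 2)
        drops[a] = [(x + y) / 2.0 for x, y in zip(drops[a], drops[b])]
        # Precipitation: occasionally re-seed the worst drop to escape local optima.
        if random.random() < 0.1:
            worst = max(range(n_drops), key=lambda i: f(drops[i]))
            drops[worst] = [random.uniform(lo, hi) for _ in range(dim)]
        current = min(drops, key=f)
        if f(current) < f(best):
            best = current[:]
    return best, f(best)

if __name__ == "__main__":
    print(hca_sketch(sphere))
```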

2020, Vol 34 (05), pp. 7111-7118
Author(s): Moumita Choudhury, Saaduddin Mahmud, Md. Mosaddek Khan

Distributed Constraint Optimization Problems (DCOPs) are a widely studied constraint-handling framework. The objective of a DCOP algorithm is to optimize a global objective function that can be described as the aggregation of several distributed constraint cost functions. In a DCOP, each of these functions is defined over a set of discrete variables. However, in many applications, such as target tracking or sleep scheduling in sensor networks, continuous-valued variables are better suited than discrete ones. Considering this, Functional DCOPs (F-DCOPs) have been proposed, which can explicitly model problems containing continuous variables. Nevertheless, state-of-the-art F-DCOP approaches incur onerous memory or computation overhead. To address this issue, we propose a new F-DCOP algorithm, namely Particle Swarm based F-DCOP (PFD), which is inspired by the meta-heuristic Particle Swarm Optimization (PSO). Although PSO has been successfully applied to many continuous optimization problems, its potential has not been utilized in F-DCOPs. Specifically, PFD devises a distributed method of solution construction while significantly reducing the computation and memory requirements. Moreover, we theoretically prove that PFD is an anytime algorithm. Finally, our empirical results indicate that PFD outperforms the state-of-the-art approaches in terms of solution quality and computation overhead.
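PFD's solution construction builds on the standard PSO update rule; the sketch below shows that canonical rule on a single continuous cost function. The distributed, message-passing machinery that PFD adds for F-DCOPs is not reproduced here, and all parameter values are illustrative.

```python
import random

def pso(f, dim=2, n_particles=30, iters=300, lo=-10.0, hi=10.0,
        w=0.7, c1=1.5, c2=1.5):
    """Canonical PSO minimization of f over a box-constrained continuous domain."""
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                       # personal best positions
    gbest = min(pbest, key=f)[:]                      # global best position
    for _ in range(iters):
        for i in range(n_particles):
            for j in range(dim):
                r1, r2 = random.random(), random.random()
                # Inertia + cognitive pull toward pbest + social pull toward gbest.
                vel[i][j] = (w * vel[i][j]
                             + c1 * r1 * (pbest[i][j] - pos[i][j])
                             + c2 * r2 * (gbest[j] - pos[i][j]))
                pos[i][j] = min(max(pos[i][j] + vel[i][j], lo), hi)
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pos[i]) < f(gbest):
                    gbest = pos[i][:]
    return gbest, f(gbest)

if __name__ == "__main__":
    print(pso(lambda x: sum(v * v for v in x)))
```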


Author(s): Aijia Ouyang, Xuyu Peng, Yanbin Liu, Lilue Fan, Kenli Li

When used for optimizing complex functions, the harmony search (HS) and shuffled frog leaping algorithm (SFLA) tend to become trapped in local optima and yield low convergence precision. To overcome these shortcomings, a hybrid selective search mechanism that combines the HS and SFLA algorithms is proposed. The resulting HS-SFLA algorithm is designed to exploit the complementary advantages of HS and SFLA. When the hybrid HS-SFLA algorithm is applied to complex function optimization problems, the experimental results show that it significantly outperforms other state-of-the-art intelligence algorithms in terms of global search ability, convergence speed, and robustness on 80% of the benchmark functions tested. The HS-SFLA algorithm can be applied directly to a wide range of real-world continuous optimization problems.
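The abstract does not spell out how the two searches are coupled; the sketch below is one plausible interleaving, in which a harmony-search improvisation step and an SFLA-style "worst solution leaps toward the best" step act on a shared population. The parameter names hmcr, par, and bw follow standard HS usage; everything else is an assumption, not the HS-SFLA design from the paper.

```python
import random

def hs_sfla_sketch(f, dim=5, pop_size=20, iters=500, lo=-5.0, hi=5.0,
                   hmcr=0.9, par=0.3, bw=0.05):
    """Illustrative hybrid of an HS improvisation step and an SFLA-style leap."""
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(iters):
        pop.sort(key=f)                                   # best solution first
        # HS improvisation: build a new harmony from memory or at random.
        new = []
        for j in range(dim):
            if random.random() < hmcr:
                val = random.choice(pop)[j]
                if random.random() < par:                 # pitch adjustment
                    val += random.uniform(-bw, bw) * (hi - lo)
            else:
                val = random.uniform(lo, hi)
            new.append(min(max(val, lo), hi))
        if f(new) < f(pop[-1]):                           # replace the worst harmony
            pop[-1] = new
        # SFLA-style leap: the worst solution jumps toward the best one.
        best, worst = pop[0], pop[-1]
        leap = [w + random.random() * (b - w) for b, w in zip(best, worst)]
        if f(leap) < f(worst):
            pop[-1] = leap
    return min(pop, key=f)

if __name__ == "__main__":
    print(hs_sfla_sketch(lambda x: sum(v * v for v in x)))
```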


2015, Vol 2015, pp. 1-21
Author(s): Qamar Abbas, Jamil Ahmad, Hajira Jabeen

Differential evolution (DE) is a powerful global optimization algorithm that has been studied intensively by many researchers in recent years. A number of variants of the algorithm have been established that make DE more widely applicable. However, most of these variants suffer from slow convergence and entrapment in local optima. A novel tournament-based parent selection variant of the DE algorithm is proposed in this research. The proposed variant enhances the searching capability and improves the convergence speed of the DE algorithm. This paper also presents a novel statistical comparison of existing DE mutation variants, which categorizes these variants in terms of their overall performance. Experimental results show that the proposed DE variant performs significantly better than the other DE mutation variants.
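As an illustration of the general idea, the sketch below runs a DE/rand/1/bin loop in which the base vector is chosen by a binary tournament rather than uniformly at random; the exact tournament scheme proposed in the paper may differ, and all parameter values here are illustrative.

```python
import random

def de_tournament(f, dim=10, pop_size=30, iters=500, lo=-5.0, hi=5.0,
                  F=0.5, CR=0.9):
    """DE/rand/1/bin with a binary tournament choosing the base (parent) vector."""
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(iters):
        for i in range(pop_size):
            # Tournament-based parent selection: the fitter of two random
            # individuals becomes the base vector.
            a, b = random.sample(range(pop_size), 2)
            base = pop[a] if f(pop[a]) < f(pop[b]) else pop[b]
            # Two further distinct vectors provide the difference term.
            r1, r2 = random.sample([k for k in range(pop_size) if k != i], 2)
            jrand = random.randrange(dim)                 # force at least one crossover
            trial = []
            for j in range(dim):
                if random.random() < CR or j == jrand:
                    v = base[j] + F * (pop[r1][j] - pop[r2][j])
                else:
                    v = pop[i][j]
                trial.append(min(max(v, lo), hi))
            if f(trial) <= f(pop[i]):                     # greedy replacement
                pop[i] = trial
    return min(pop, key=f)

if __name__ == "__main__":
    print(de_tournament(lambda x: sum(v * v for v in x)))
```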


2015, Vol 137 (7)
Author(s): Jong-Chen Chen

Continuous optimization plays an increasingly significant role in everyday decision-making. Our group previously developed a multilevel system called the artificial neuromolecular system (ANM), whose structural richness allows variation and/or selection operators to act on it to generate a broad range of dynamic behaviors. In this paper, we used the ANM system to control the motions of a wooden walking robot named Miky. The robot was used to investigate the ANM system's capability to deal with continuous optimization problems through self-organized learning. An evolutionary learning algorithm was used to train the system and generate appropriate control. The experimental results showed that Miky was capable of continual learning in a physical environment. A further experiment was conducted in which some changes were made to Miky's physical structure in order to observe the system's capability to deal with the change. Detailed analysis of the experimental results showed that Miky responded to the change by appropriately adjusting its leg movements in space and time, demonstrating that the ANM system possesses continuous optimization capability in coping with such changes. Our findings from these empirical experiments may offer another perspective on how to design intelligent systems that are friendlier than traditional systems in assisting humans to walk.
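The ANM architecture itself is not described in the abstract; the following generic variation-and-selection loop merely illustrates the kind of evolutionary learning referred to, tuning a vector of controller parameters against a user-supplied fitness function (for example, distance walked). All names and values are hypothetical.

```python
import random

def evolve_controller(fitness, n_params=12, generations=100,
                      pop_size=10, sigma=0.1):
    """Generic evolutionary loop: select the fitter half, mutate to refill."""
    population = [[random.uniform(-1.0, 1.0) for _ in range(n_params)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=fitness, reverse=True)  # higher is better
        parents = scored[:pop_size // 2]
        # Variation: Gaussian mutation of randomly chosen parents.
        children = [[p + random.gauss(0.0, sigma) for p in random.choice(parents)]
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

if __name__ == "__main__":
    # Hypothetical stand-in fitness: prefer parameters close to an arbitrary target.
    target = [0.5] * 12
    print(evolve_controller(lambda p: -sum((a - b) ** 2 for a, b in zip(p, target))))
```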


Symmetry, 2018, Vol 10 (8), pp. 337
Author(s): Chui-Yu Chiu, Po-Chou Shih, Xuechao Li

The novel global harmony search (NGHS) algorithm, proposed in 2010, is an improved algorithm that combines harmony search (HS), particle swarm optimization (PSO), and a genetic algorithm (GA). However, the NGHS algorithm uses a fixed genetic mutation probability, even though appropriate parameter settings can enhance the searching ability of a metaheuristic algorithm, as many studies have shown. Inspired by the adjustment strategy of the improved harmony search (IHS) algorithm, this paper introduces a dynamic adjusting novel global harmony search (DANGHS) algorithm, which combines NGHS with dynamic adjustment strategies for the genetic mutation probability. Extensive computational experiments and comparisons are carried out on 14 benchmark continuous optimization problems. The results show that the proposed DANGHS algorithm performs better than other HS algorithms on most problems and is more efficient than previous methods. Finally, different strategies suit different situations; among them, the most interesting is the periodic dynamic adjustment strategy, which for a specific problem can outperform the decreasing or increasing strategies. These results motivate further investigation of this kind of periodic dynamic adjustment strategy in future experiments.
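The abstract mentions decreasing, increasing, and periodic strategies for adjusting the genetic mutation probability; the exact formulas are given in the paper, so the schedules below are only illustrative stand-ins (linear ramps and a sine wave) showing how such a probability could vary with the iteration counter.

```python
import math

def pm_decreasing(t, t_max, pm_max=0.1, pm_min=0.01):
    """Mutation probability falls linearly from pm_max to pm_min over t_max iterations."""
    return pm_max - (pm_max - pm_min) * t / t_max

def pm_increasing(t, t_max, pm_max=0.1, pm_min=0.01):
    """Mutation probability rises linearly from pm_min to pm_max over t_max iterations."""
    return pm_min + (pm_max - pm_min) * t / t_max

def pm_periodic(t, period=100, pm_max=0.1, pm_min=0.01):
    """Mutation probability oscillates between pm_min and pm_max with the given period."""
    return pm_min + 0.5 * (pm_max - pm_min) * (1.0 + math.sin(2.0 * math.pi * t / period))

# Inside an NGHS-style main loop, the mutation probability at iteration t would be
# looked up from one of these schedules before deciding whether to replace a
# dimension of the new harmony with a uniformly random value.
```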

