A parallel reinforcement computing model for function optimization problems

Author(s):  
F. Qian ◽  
S. Ikebou ◽  
T. Kusunoki ◽  
J. Wu ◽  
H. Hirata
2021 ◽  
pp. 1-15
Author(s):  
Jinding Gao

In order to solve some function optimization problems, a Population Dynamics Optimization Algorithm under Microbial Control in a Contaminated Environment (PDO-MCCE) is proposed by adopting a population dynamics model with microbial treatment in a polluted environment. In this algorithm, individuals are automatically divided into a normal population and a mutant population. The number of individuals in each category is automatically calculated and adjusted according to the population dynamics model, which removes the need to set the number of individuals by hand. The algorithm employs seven operators that realize information exchange within and between populations, the diffusion of information from strong individuals, and the transmission of environmental information to individuals; they also increase or decrease the number of individuals so that the algorithm retains global convergence. The periodic increase in the size of the mutant population greatly increases the probability that the search escapes local-optimum traps. In each iteration the algorithm processes only 3/500∼1/10 of an individual's features at a time, which greatly reduces the time complexity. To assess the scalability, efficiency, and robustness of the proposed algorithm, experiments were carried out on realistic, synthetic, and random benchmarks of different dimensions. The test cases show that the PDO-MCCE algorithm has better performance and is suitable for solving some higher-dimensional optimization problems.
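The abstract does not detail the seven operators, so the following is only a minimal sketch of the overall idea: two populations (normal and mutant), a periodically enlarged mutant population to escape local optima, and perturbation of only a small fraction of the features per iteration. Function names, population sizes, and the sphere objective are assumptions for illustration, not the paper's method.

```python
import numpy as np

def sphere(x):
    """Hypothetical benchmark objective."""
    return float(np.sum(x ** 2))

def pdo_mcce_sketch(f, dim=50, iters=200, normal_size=30, mutant_base=10, seed=0):
    """Illustrative two-population search: a 'normal' population exploits the
    current best region, while a 'mutant' population whose size grows
    periodically injects diversity. Only a small random subset of the
    dimensions is perturbed per iteration, mirroring the abstract's claim of
    touching roughly 3/500-1/10 of the features at a time."""
    rng = np.random.default_rng(seed)
    normal = rng.uniform(-5, 5, (normal_size, dim))
    best = min(normal, key=f).copy()
    for t in range(iters):
        # Periodically enlarge the mutant population to help escape local optima.
        mutant_size = mutant_base + (mutant_base if t % 20 == 0 else 0)
        mutants = best + rng.normal(0, 1.0, (mutant_size, dim))
        # Perturb only a small fraction of the features of each normal individual.
        k = max(1, dim // 10)
        for ind in normal:
            idx = rng.choice(dim, size=k, replace=False)
            trial = ind.copy()
            trial[idx] += rng.normal(0, 0.3, k)
            if f(trial) < f(ind):
                ind[:] = trial
        # Merge both populations and track the global best.
        for cand in np.vstack([normal, mutants]):
            if f(cand) < f(best):
                best = cand.copy()
    return best, f(best)

if __name__ == "__main__":
    x_best, f_best = pdo_mcce_sketch(sphere)
    print(f_best)
```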


2013 ◽  
Vol 427-429 ◽  
pp. 1934-1938
Author(s):  
Zhong Rong Zhang ◽  
Jin Peng Liu ◽  
Ke De Fei ◽  
Zhao Shan Niu

The aim is to improve the convergence of the algorithm and increase population diversity. Particles of groups that have fallen into a local optimum are adjusted adaptively, by judging the spatial concentration of the groups and the fitness variance, in order to reach the global optimum. At the same time, the global factors are adjusted dynamically according to the current particle fitness. Four typical function optimization problems are used in the simulation experiments. The results show that the improved particle swarm optimization algorithm is convergent, robust, and accurate.
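The abstract leaves the adjustment rule unspecified, so the sketch below is only one plausible reading: the inertia weight shrinks as the swarm's fitness variance drops, and particles crowded around the swarm centroid are re-randomized when the variance indicates premature convergence. All coefficients, thresholds, and the Rastrigin objective are assumptions.

```python
import numpy as np

def rastrigin(x):
    """Hypothetical benchmark objective."""
    return float(10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x)))

def adaptive_pso_sketch(f, dim=10, swarm=30, iters=300, seed=0):
    """Illustrative PSO variant: dynamic inertia driven by fitness variance,
    plus re-randomization of particles crowded near the swarm centroid."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.12, 5.12, (swarm, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([f(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        fit = np.array([f(p) for p in x])
        var = fit.var()
        w = 0.4 + 0.5 * var / (var + 1.0)   # high variance -> more exploration
        r1, r2 = rng.random((2, swarm, dim))
        v = w * v + 2.0 * r1 * (pbest - x) + 2.0 * r2 * (gbest - x)
        x = np.clip(x + v, -5.12, 5.12)
        # Re-randomize crowded particles when the swarm has nearly converged.
        if var < 1e-3:
            centroid = x.mean(axis=0)
            crowded = np.linalg.norm(x - centroid, axis=1) < 0.1
            x[crowded] = rng.uniform(-5.12, 5.12, (crowded.sum(), dim))
        fit = np.array([f(p) for p in x])
        improved = fit < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fit[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, f(gbest)
```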


2016 ◽  
Vol 2016 ◽  
pp. 1-10 ◽  
Author(s):  
Feng Zou ◽  
Debao Chen ◽  
Jiangtao Wang

An improved teaching-learning-based optimization combined with the social character of PSO (TLBO-PSO), which considers the teacher's influence on the students and the mean grade of the class, is proposed in this paper to find global solutions of function optimization problems. In this method, the teacher phase of TLBO is modified: the new position of an individual is determined by its old position, the mean position, and the best position of the current generation. The method overcomes the disadvantage that the evolution of the original TLBO might stop when the mean position of the students equals the position of the teacher. To decrease the computational cost of the algorithm, the process of removing duplicate individuals in the original TLBO is not adopted in the improved algorithm. Moreover, a mutation operator decreases the probability of local convergence of the improved method. The effectiveness of the proposed method is tested on some benchmark functions, and the results are competitive with respect to some other methods.
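A minimal sketch of a teacher phase along these lines is given below: the update mixes the TLBO pull toward the teacher relative to the class mean with a PSO-like pull toward the current-generation best, and a small mutation operator resets coordinates at random. The exact coefficients, mutation rate, and sphere objective are assumptions, not the paper's formulation.

```python
import numpy as np

def sphere(x):
    """Hypothetical benchmark objective."""
    return float(np.sum(x ** 2))

def tlbo_pso_sketch(f, dim=10, learners=20, iters=200, seed=0):
    """Illustrative TLBO-PSO teacher phase: new position depends on the old
    position, the class mean, and the current-generation best, followed by a
    light mutation to reduce premature convergence."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-5, 5, (learners, dim))
    fit = np.array([f(p) for p in pop])
    for _ in range(iters):
        teacher = pop[fit.argmin()]
        mean = pop.mean(axis=0)
        tf = rng.integers(1, 3)               # teaching factor, 1 or 2
        r1, r2 = rng.random((2, learners, dim))
        # TLBO term (teacher vs. class mean) plus PSO-flavoured pull to the best.
        new = pop + r1 * (teacher - tf * mean) + r2 * (teacher - pop)
        # Mutation operator: occasionally reset a coordinate at random.
        mask = rng.random((learners, dim)) < 0.01
        new[mask] = rng.uniform(-5, 5, mask.sum())
        new_fit = np.array([f(p) for p in new])
        better = new_fit < fit
        pop[better], fit[better] = new[better], new_fit[better]
    return pop[fit.argmin()], fit.min()
```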


2020 ◽  
Vol 34 (06) ◽  
pp. 10235-10242
Author(s):  
Mojmir Mutny ◽  
Johannes Kirschner ◽  
Andreas Krause

Bayesian optimization and kernelized bandit algorithms are widely used techniques for sequential black box function optimization with applications in parameter tuning, control, robotics among many others. To be effective in high dimensional settings, previous approaches make additional assumptions, for example on low-dimensional subspaces or an additive structure. In this work, we go beyond the additivity assumption and use an orthogonal projection pursuit regression model, which strictly generalizes additive models. We present a two-stage algorithm motivated by experimental design to first decorrelate the additive components. Subsequently, the bandit optimization benefits from the statistically efficient additive model. Our method provably decorrelates the fully additive model and achieves optimal sublinear simple regret in terms of the number of function evaluations. To prove the rotation recovery, we derive novel concentration inequalities for linear regression on subspaces. In addition, we specifically address the issue of acquisition function optimization and present two domain dependent efficient algorithms. We validate the algorithm numerically on synthetic as well as real-world optimization problems.
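To make the projection pursuit structure concrete, the sketch below shows why stage two becomes easy once the rotation is known: under an orthogonal rotation W, the objective is additive in the rotated coordinates, so each 1D component can be optimized independently. A grid search stands in for the paper's kernelized-bandit inner loop, the rotation is taken as already recovered, and all names and test functions are assumptions.

```python
import numpy as np

def pp_objective(x, W, gs):
    """Projection pursuit form: f(x) = sum_i g_i(w_i^T x), rows w_i orthogonal."""
    z = W @ x
    return sum(g(zi) for g, zi in zip(gs, z))

def two_stage_sketch(f, W_est, bounds=(-1.0, 1.0), grid=201, dim=3):
    """Illustrative stage two: given an estimated rotation W_est (stage one,
    assumed already recovered here), the rotated problem is additive, so each
    rotated coordinate is optimized on its own 1D grid."""
    zs = np.linspace(*bounds, grid)
    z_best = np.empty(dim)
    for i in range(dim):
        # Vary only the i-th rotated coordinate; by additivity the other
        # coordinates do not change the argmin along this axis.
        vals = [f(W_est.T @ (zi * np.eye(dim)[i])) for zi in zs]
        z_best[i] = zs[int(np.argmin(vals))]
    return W_est.T @ z_best        # map the rotated optimum back to x-space

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))   # true orthogonal rotation
    gs = [np.cos, np.square, np.abs]               # hypothetical 1D components
    f = lambda x: pp_objective(x, Q, gs)
    x_star = two_stage_sketch(f, Q)                # use the true rotation for illustration
    print(f(x_star))
```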


Author(s):  
Takashi KUSUNOKI ◽  
Shigeya IKEBOU ◽  
Jijun WU ◽  
Yue ZHAO ◽  
Fei QIAN

2017 ◽  
Vol 1 (2) ◽  
pp. 82 ◽  
Author(s):  
Tirana Noor Fatyanosa ◽  
Andreas Nugroho Sihananto ◽  
Gusti Ahmad Fanshuri Alfarisy ◽  
M Shochibul Burhan ◽  
Wayan Firdaus Mahmudy

Real-world optimization problems usually have non-linear characteristics. Solving non-linear problems is time-consuming, so heuristic approaches are usually used to speed up the search for a solution. Among heuristic-based algorithms, the Genetic Algorithm (GA) and Simulated Annealing (SA) are two of the most popular. GA is powerful for finding a nearly optimal solution over a broad search area, while SA is useful for searching within a narrow area. This study compares the performance of GA, SA, and three types of hybrid GA-SA in solving some non-linear optimization cases. The study shows that hybrid GA-SA can enhance GA and SA and provide better results.
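The abstract does not specify which of the three hybridization schemes it compares, so the following is only a sketch of one common pattern: a GA explores the broad search space, and SA then refines the best individual the GA found. The operators, parameters, and the Rosenbrock objective are assumptions for illustration.

```python
import math
import random

def rosenbrock(x):
    """Hypothetical non-linear test objective."""
    return sum(100 * (x[i + 1] - x[i] ** 2) ** 2 + (1 - x[i]) ** 2
               for i in range(len(x) - 1))

def ga_sa_sketch(f, dim=5, pop_size=30, gens=100, seed=0):
    """Illustrative hybrid: GA for global exploration, then SA refinement of
    the best GA individual."""
    rnd = random.Random(seed)
    pop = [[rnd.uniform(-2, 2) for _ in range(dim)] for _ in range(pop_size)]
    # --- GA stage: tournament selection, uniform crossover, Gaussian mutation ---
    for _ in range(gens):
        new_pop = []
        for _ in range(pop_size):
            a = min(rnd.sample(pop, 3), key=f)
            b = min(rnd.sample(pop, 3), key=f)
            child = [ai if rnd.random() < 0.5 else bi for ai, bi in zip(a, b)]
            if rnd.random() < 0.2:
                child[rnd.randrange(dim)] += rnd.gauss(0, 0.3)
            new_pop.append(child)
        pop = new_pop
    best = min(pop, key=f)
    # --- SA stage: local refinement of the GA result with a cooling schedule ---
    current, temp = best[:], 1.0
    for _ in range(2000):
        cand = [xi + rnd.gauss(0, 0.05) for xi in current]
        delta = f(cand) - f(current)
        if delta < 0 or rnd.random() < math.exp(-delta / temp):
            current = cand
        temp *= 0.999
        if f(current) < f(best):
            best = current[:]
    return best, f(best)
```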

