Analytical solutions of a class of matrix function optimization problems with unitary constraints

2021 ◽  
pp. 107601
Author(s):  
Weijie Shen ◽  
Ping Shi ◽  
Zhihao Ge ◽  
Weiwei Xu
2021 ◽  
pp. 1-15
Author(s):  
Jinding Gao

To solve certain function optimization problems, a Population Dynamics Optimization Algorithm under Microbial Control in a Contaminated Environment (PDO-MCCE) is proposed, adopting a population dynamics model with microbial treatment in a polluted environment. In this algorithm, individuals are automatically divided into a normal population and a mutant population. The number of individuals in each category is calculated and adjusted automatically according to the population dynamics model, which removes the need to set population sizes by hand. The algorithm uses seven operators that realize information exchange among individuals both within and between populations, diffuse information from strong individuals, transmit environmental information to individuals, and increase or decrease the number of individuals so that the algorithm retains global convergence. Periodically increasing the size of the mutant population greatly raises the probability that the search escapes local optima. In each iteration, the algorithm processes only 3/500 to 1/10 of the individual features at a time, which greatly reduces the time complexity. To assess the scalability, efficiency, and robustness of the proposed algorithm, experiments were carried out on realistic, synthetic, and random benchmarks of different dimensions. The test cases show that the PDO-MCCE algorithm performs well and is suitable for some higher-dimensional optimization problems.
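The abstract does not give the equations of the population dynamics model. As a hedged illustration only, a logistic-style update is one plausible way to adjust the sizes of the normal and mutant subpopulations automatically, with a periodic boost of the mutant population to escape local optima; the growth rates, carrying capacity, and boost schedule below are illustrative assumptions, not the authors' actual model.

```python
# Hypothetical sketch: logistic population-dynamics update for the sizes of a
# "normal" and a "mutant" subpopulation. All parameters are illustrative
# assumptions, not the published PDO-MCCE model.

def adjust_population_sizes(n_normal, n_mutant, capacity=100,
                            r_normal=0.3, r_mutant=0.1,
                            iteration=0, boost_period=20, boost_size=10):
    """Return updated (n_normal, n_mutant) individual counts."""
    total = n_normal + n_mutant
    # Logistic growth: growth slows as the total approaches the capacity.
    n_normal += round(r_normal * n_normal * (1 - total / capacity))
    n_mutant += round(r_mutant * n_mutant * (1 - total / capacity))
    # Periodically enlarge the mutant population to help the search jump
    # out of local-optimum traps, as the abstract describes.
    if iteration > 0 and iteration % boost_period == 0:
        n_mutant += boost_size
    return max(n_normal, 1), max(n_mutant, 1)
```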


Author(s):  
Om P. Agrawal ◽  
M. Mehedi Hasan ◽  
X. W. Tangpong

Fractional derivatives (FDs), or derivatives of arbitrary order, have been used in many applications, and it is envisioned that in the future they will appear in many functional minimization problems of practical interest. Because fractional derivatives are non-local, it can be extremely challenging to find analytical solutions for fractional parametric optimization problems, and in many cases analytical solutions may not exist. Therefore, it is of great importance to develop numerical methods for such problems. This paper presents a numerical scheme for a linear functional minimization problem that involves FD terms. The FD is defined in terms of the Riemann-Liouville definition; however, the scheme also applies to Caputo derivatives, as well as other definitions of fractional derivatives. In this scheme, the spatial domain is discretized into several subdomains, and 2-node one-dimensional linear elements are adopted to approximate the solution and its fractional derivative at points within the domain. The fractional optimization problem is converted to an eigenvalue problem, the solution of which leads to fractional orthogonal functions. A convergence study with respect to the number of elements, together with an error analysis of the results, confirms that the algorithm yields stable results. Various fractional orders of derivative are considered, and as the order approaches the integer value of 1, the solution recovers the analytical result for the corresponding integer-order problem.
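For reference, the Riemann-Liouville definition mentioned above is, for an order $\alpha$ with $n-1 < \alpha < n$ and lower terminal $a$,

$$
{}_{a}D_{x}^{\alpha} f(x) \;=\; \frac{1}{\Gamma(n-\alpha)}\,\frac{d^{n}}{dx^{n}} \int_{a}^{x} \frac{f(t)}{(x-t)^{\alpha-n+1}}\, dt ,
$$

which is non-local in the sense that the value at $x$ depends on $f$ over the whole interval $[a, x]$; this is the property that makes analytical solutions of fractional optimization problems hard to obtain.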


2013 ◽  
Vol 427-429 ◽  
pp. 1934-1938
Author(s):  
Zhong Rong Zhang ◽  
Jin Peng Liu ◽  
Ke De Fei ◽  
Zhao Shan Niu

The aim is to improve the convergence of the algorithm and increase population diversity. Particles in groups that have fallen into a local optimum are adjusted adaptively, by judging the spatial concentration of the groups and the fitness variance, in order to reach the global optimum. At the same time, the global factors are adjusted dynamically according to the fitness of the current particle. Four typical function optimization problems are used in simulation experiments. The results show that the improved particle swarm optimization algorithm is convergent, robust, and accurate.
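The abstract does not specify the concentration test. As a hedged sketch under that caveat, a common way to detect that a swarm has concentrated around a (possibly local) optimum is a normalized fitness variance; when it drops below a threshold, a fraction of particles can be re-randomized. The threshold, the re-seeded fraction, and the choice to re-seed the worst particles (assuming minimization) are illustrative assumptions.

```python
# Hypothetical sketch of a fitness-variance convergence test and adaptive
# re-seeding for a particle swarm, as one plausible reading of the abstract.
import random

def fitness_variance(fitnesses):
    """Normalized fitness variance; small values signal swarm concentration."""
    mean = sum(fitnesses) / len(fitnesses)
    scale = max(max(abs(f - mean) for f in fitnesses), 1.0)
    return sum(((f - mean) / scale) ** 2 for f in fitnesses) / len(fitnesses)

def reseed_if_converged(positions, fitnesses, bounds, threshold=1e-3, frac=0.3):
    """Re-randomize the worst `frac` of particles when variance is small."""
    if fitness_variance(fitnesses) >= threshold:
        return positions
    n_reseed = max(1, int(frac * len(positions)))
    # Assuming minimization: the highest-fitness particles are the worst.
    worst = sorted(range(len(positions)), key=lambda i: fitnesses[i])[-n_reseed:]
    lo, hi = bounds
    for i in worst:
        positions[i] = [random.uniform(lo, hi) for _ in positions[i]]
    return positions
```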


2016 ◽  
Vol 2016 ◽  
pp. 1-10 ◽  
Author(s):  
Feng Zou ◽  
Debao Chen ◽  
Jiangtao Wang

An improved teaching-learning-based optimization combined with the social character of PSO (TLBO-PSO), which considers the teacher's influence on the students and the mean grade of the class, is proposed in this paper to find global solutions of function optimization problems. In this method, the teacher phase of TLBO is modified: the new position of an individual is determined by its old position, the mean position, and the best position of the current generation. The method overcomes the disadvantage that the evolution of the original TLBO may stall when the mean position of the students equals the position of the teacher. To reduce the computational cost, the improved algorithm does not adopt the duplicate-removal step of the original TLBO. Moreover, a mutation operator decreases the probability that the improved method converges to a local optimum. The effectiveness of the proposed method is tested on several benchmark functions, and the results are competitive with those of other methods.
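The abstract states only that the new position depends on the old position, the mean position, and the current best (teacher) position. One hedged way to realize that is to keep the classic TLBO teacher term and add a PSO-style pull toward the teacher; the coefficients and the exact combination below are illustrative assumptions, not the authors' published update rule.

```python
# Hypothetical sketch of a PSO-influenced TLBO teacher-phase update.
import random

def teacher_phase_update(x_old, mean_pos, teacher_pos, tf=None):
    """One learner's move using old position, class mean, and teacher."""
    if tf is None:
        tf = random.randint(1, 2)          # standard TLBO teaching factor
    r1, r2 = random.random(), random.random()
    return [
        xi + r1 * (ti - tf * mi)           # classic TLBO pull toward teacher
           + r2 * (ti - xi)                # PSO-style social term
        for xi, mi, ti in zip(x_old, mean_pos, teacher_pos)
    ]
```

Note that when the mean equals the teacher, the first term vanishes (with `tf = 1`) but the second term still moves a learner toward the best position, which is the stall the abstract says the modification avoids.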


2020 ◽  
Vol 34 (06) ◽  
pp. 10235-10242
Author(s):  
Mojmir Mutny ◽  
Johannes Kirschner ◽  
Andreas Krause

Bayesian optimization and kernelized bandit algorithms are widely used techniques for sequential black-box function optimization, with applications in parameter tuning, control, and robotics, among many others. To be effective in high-dimensional settings, previous approaches make additional assumptions, for example a low-dimensional subspace or an additive structure. In this work, we go beyond the additivity assumption and use an orthogonal projection pursuit regression model, which strictly generalizes additive models. We present a two-stage algorithm motivated by experimental design that first decorrelates the additive components. Subsequently, the bandit optimization benefits from the statistically efficient additive model. Our method provably decorrelates the fully additive model and achieves optimal sublinear simple regret in terms of the number of function evaluations. To prove the rotation recovery, we derive novel concentration inequalities for linear regression on subspaces. In addition, we specifically address the issue of acquisition function optimization and present two domain-dependent efficient algorithms. We validate the algorithm numerically on synthetic as well as real-world optimization problems.
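For context, an orthogonal projection pursuit model of the kind described takes the form

$$
f(\mathbf{x}) \;=\; \sum_{i=1}^{m} g_i\!\left(\mathbf{w}_i^{\top}\mathbf{x}\right), \qquad \mathbf{w}_i^{\top}\mathbf{w}_j = \delta_{ij},
$$

with one-dimensional component functions $g_i$ along orthonormal directions $\mathbf{w}_i$. When the directions are the coordinate axes ($\mathbf{w}_i = \mathbf{e}_i$), this reduces to the fully additive model $f(\mathbf{x}) = \sum_i g_i(x_i)$, which is the sense in which it strictly generalizes additivity.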


Author(s):  
Takashi KUSUNOKI ◽  
Shigeya IKEBOU ◽  
Jijun WU ◽  
Yue ZHAO ◽  
Fei QIAN
