Hierarchical Population Game Models of Coevolution in Multi-Criteria Optimization Problems under Uncertainty

2021 · Vol 11 (14) · pp. 6563
Author(s):  
Vladimir A. Serov

The article develops hierarchical population game models of co-evolutionary algorithms for solving the problem of multi-criteria optimization under uncertainty. The principles of vector minimax and vector minimax risk are used as the basic principles of optimality for the problem of multi-criteria optimization under uncertainty. The concept of equilibrium of a hierarchical population game with the right of the first move is defined. The necessary conditions are formulated under which the equilibrium solution of a hierarchical population game is a discrete approximation of the set of optimal solutions to the multi-criteria optimization problem under uncertainty.
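A minimal sketch may help make the vector-minimax principle concrete: each candidate decision is scored by the component-wise worst case of its criterion vector over a finite uncertainty set, and the candidates nondominated with respect to these worst-case vectors form a discrete approximation of the solution set. The criteria, uncertainty set, and candidate grid below are illustrative assumptions, not the article's game model.

```python
# Sketch of the vector-minimax principle: evaluate each candidate by the
# component-wise worst case of its criterion vector over an uncertainty set,
# then keep the Pareto-nondominated candidates.  Toy criteria, not from the
# article.

def criteria(x, u):
    """Two criteria of decision x under uncertainty scenario u."""
    return ((x - u) ** 2, (x + u) ** 2)

def worst_case(x, scenarios):
    """Component-wise max over scenarios (the 'vector maximum')."""
    vals = [criteria(x, u) for u in scenarios]
    return tuple(max(v[i] for v in vals) for i in range(2))

def dominates(a, b):
    """a Pareto-dominates b (no worse everywhere, better somewhere)."""
    return all(ai <= bi for ai, bi in zip(a, b)) and a != b

def vector_minimax_front(candidates, scenarios):
    wc = {x: worst_case(x, scenarios) for x in candidates}
    return [x for x in candidates
            if not any(dominates(wc[y], wc[x]) for y in candidates)]

candidates = [i / 10 for i in range(-20, 21)]   # decisions in [-2, 2]
scenarios = [-1.0, 0.0, 1.0]                    # finite uncertainty set
front = vector_minimax_front(candidates, scenarios)
```

For this symmetric toy problem only x = 0 survives the filter, since it minimizes both worst-case components simultaneously.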


2016 · Vol 685 · pp. 142-147
Author(s):  
Vladimir Gorbunov
Elena Sinyukova

In this paper the authors describe necessary conditions of optimality for continuous multicriteria optimization problems. It is proved that the existence of efficient solutions requires the gradients of the individual criteria to be linearly dependent. The solution set is given by a system of equations. It is shown that to obtain necessary and sufficient conditions for multicriteria optimization problems, one must pass to a single-criterion optimization problem whose objective function is a convolution of the individual criteria. These results are consistent with nonlinear optimization problems with equality constraints. An example is the study of optimal solutions obtained by the method of the main criterion for Pareto optimality.
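The claimed gradient dependence can be checked on a small example: for two convex quadratic criteria, every minimizer of a weighted-sum convolution is efficient, and the gradients of the individual criteria are collinear there. The particular functions below are illustrative, not taken from the paper.

```python
# Sketch: on the Pareto set of two smooth criteria the gradients are
# linearly dependent.  The functions f1, f2 are illustrative choices.

def f1(x, y):            # criterion 1, minimum at (1, 0)
    return (x - 1.0) ** 2 + y ** 2

def f2(x, y):            # criterion 2, minimum at (-1, 0)
    return (x + 1.0) ** 2 + y ** 2

def grad_f1(x, y):
    return (2.0 * (x - 1.0), 2.0 * y)

def grad_f2(x, y):
    return (2.0 * (x + 1.0), 2.0 * y)

def weighted_sum_min(w):
    """Minimizer of the convolution w*f1 + (1-w)*f2 (closed form here:
    stationarity gives x = 2w - 1, y = 0)."""
    return (2.0 * w - 1.0, 0.0)

# Every weight w in (0, 1) yields an efficient point; check dependence there.
for w in (0.25, 0.5, 0.75):
    x, y = weighted_sum_min(w)
    g1, g2 = grad_f1(x, y), grad_f2(x, y)
    cross = g1[0] * g2[1] - g1[1] * g2[0]   # 2-D collinearity test
    assert abs(cross) < 1e-12               # gradients linearly dependent
```

The efficient set here is the segment y = 0, x in [-1, 1], and along it both gradients point along the x-axis, so the 2-D cross product vanishes.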



Author(s):  
M. Hoffhues
W. Römisch
T. M. Surowiec

The vast majority of stochastic optimization problems require the approximation of the underlying probability measure, e.g., by sampling or using observations. It is therefore crucial to understand the dependence of the optimal value and optimal solutions on these approximations as the sample size increases or more data becomes available. Due to the weak convergence properties of sequences of probability measures, there is no guarantee that these quantities will exhibit favorable asymptotic properties. We consider a class of infinite-dimensional stochastic optimization problems inspired by recent work on PDE-constrained optimization as well as functional data analysis. For this class of problems, we provide both qualitative and quantitative stability results on the optimal value and optimal solutions. In both cases, we make use of the method of probability metrics. The optimal values are shown to be Lipschitz continuous with respect to a minimal information metric and consequently, under further regularity assumptions, with respect to certain Fortet-Mourier and Wasserstein metrics. We prove that even in the most favorable setting, the solutions are at best Hölder continuous with respect to changes in the underlying measure. The theoretical results are tested in the context of Monte Carlo approximation for a numerical example involving PDE-constrained optimization under uncertainty.
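A toy one-dimensional stand-in illustrates the kind of stability studied here: for min_x E[(x - xi)^2] the optimal solution is the mean of xi and the optimal value is its variance, and the empirical (Monte Carlo) problem recovers both as the sample grows. This is only a finite-dimensional illustration of the dependence on the approximated measure, not the PDE-constrained setting of the paper.

```python
# Sketch: dependence of the optimal value/solution on a sampled
# approximation of the underlying measure.  Toy 1-D problem, not the
# infinite-dimensional setting of the article.
import random

def empirical_problem(samples):
    """Optimal solution and value of min_x (1/N) sum_i (x - s_i)^2."""
    x_star = sum(samples) / len(samples)               # empirical mean
    v_star = sum((x_star - s) ** 2 for s in samples) / len(samples)
    return x_star, v_star

random.seed(0)
true_mean, true_var = 0.0, 1.0          # xi ~ N(0, 1)
errs = []
for n in (10, 100, 10_000):
    samples = [random.gauss(true_mean, 1.0) for _ in range(n)]
    x_n, v_n = empirical_problem(samples)
    errs.append(abs(v_n - true_var))     # optimal-value error vs. truth
```

With 10,000 samples the optimal-value error is small; the rate at which it shrinks is exactly the kind of quantitative stability the probability-metric approach makes precise.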



2021 · Vol 12 (4) · pp. 81-100
Author(s):  
Yao Peng
Zepeng Shen
Shiqi Wang

Multimodal optimization problems have multiple global and many local optimal solutions. The difficulty in solving them lies in finding as many local optimal peaks as possible while preserving the precision of the global optimum. This article presents adaptive grouping brainstorm optimization (AGBSO) for solving such problems. An adaptive grouping strategy is proposed that achieves grouping without requiring any prior knowledge from the user. To enhance the diversity and accuracy of the algorithm, an elite reservation strategy is proposed that places central particles into an elite pool, and a peak detection strategy is proposed that deletes particles in the elite pool that are far from optimal peaks. Finally, the article uses test functions of different dimensions to compare the convergence, accuracy, and diversity of AGBSO with those of BSO. Experiments verify that AGBSO has strong localization ability for local optima while maintaining the accuracy of the global optimal solutions.
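A heavily simplified sketch of the grouping and elite-pool ideas on a one-dimensional three-peak function. The greedy nearest-seed grouping rule, the radius, and the peak-detection threshold are illustrative stand-ins, not the article's adaptive strategy.

```python
# Grouping + elite reservation + peak detection, drastically simplified.
import random

PEAKS = (1.0, 3.0, 5.0)

def f(x):
    """Toy multimodal objective: three equal peaks of height 0."""
    return -min((x - p) ** 2 for p in PEAKS)

def group(population, radius=0.8):
    """Greedy nearest-seed grouping; best particles seed groups first
    (a crude stand-in for the adaptive grouping strategy)."""
    seeds, groups = [], []
    for x in sorted(population, key=f, reverse=True):
        for i, s in enumerate(seeds):
            if abs(x - s) <= radius:
                groups[i].append(x)
                break
        else:                      # no nearby seed: x starts a new group
            seeds.append(x)
            groups.append([x])
    return groups

random.seed(1)
population = [random.uniform(0.0, 6.0) for _ in range(300)]

# elite reservation: the best particle of each group enters the elite pool
elite_pool = [max(g, key=f) for g in group(population)]
# peak detection stand-in: drop elites whose value is far below the peaks
elite_pool = [x for x in elite_pool if f(x) > -0.05]
```

After filtering, the elite pool contains one representative near each of the three peaks, which is the localization behavior the article evaluates.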



2011 · Vol 421 · pp. 559-563
Author(s):  
Yong Chao Gao
Li Mei Liu
Heng Qian
Ding Wang

The scale and complexity of the search space are important factors determining the difficulty of an optimization problem. Information about the solution space can guide the search toward optimal solutions. Based on this observation, an algorithm for combinatorial optimization is proposed. The algorithm uses the good solutions found by intelligent algorithms to contract the search space and partition it into one or several optimal regions defined by the backbones of the combinatorial optimization solutions, and optimization of small-scale problems is then carried out within these optimal regions. No statistical analysis is required before or during the solving process; instead, solution information is used to estimate the landscape of the search space, which improves both solving speed and solution quality. The algorithm opens a new path for solving combinatorial optimization problems, and experimental results confirm its efficiency.
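The backbone idea can be sketched in a few lines: variable positions that take the same value in every good solution are fixed, and the search space contracts to the remaining free positions. The set of good solutions below is illustrative rather than the output of an actual intelligent algorithm.

```python
# Backbone extraction and search-space contraction on bit-string solutions.
from itertools import product

good_solutions = [            # bit-strings found by some heuristic (made up)
    (1, 0, 1, 1, 0, 0),
    (1, 0, 0, 1, 0, 1),
    (1, 0, 1, 1, 0, 1),
]

def backbone(solutions):
    """Positions (index -> value) on which every solution agrees."""
    n = len(solutions[0])
    return {i: solutions[0][i] for i in range(n)
            if all(s[i] == solutions[0][i] for s in solutions)}

bb = backbone(good_solutions)
free = [i for i in range(len(good_solutions[0])) if i not in bb]

# the contracted optimal region: fix the backbone, enumerate free positions
region = []
for bits in product((0, 1), repeat=len(free)):
    cand = list(good_solutions[0])
    for i, b in zip(free, bits):
        cand[i] = b
    region.append(tuple(cand))
```

Here four of the six positions are backbone, so the contracted region holds 2^2 = 4 candidates instead of 2^6 = 64; small-scale optimization can then run inside it.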



Author(s):  
Kaisheng Liu
Yumei Xing

This article puts forward bi-matrix games with crisp parametric payoffs based on an interval value function approach. We show that the equilibrium solution of the game model can be converted into optimal solutions of a pair of non-linear optimization problems. Finally, experimental results show the efficiency of the model.
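For a plain 2x2 bimatrix game with crisp payoffs, the connection between equilibrium and optimization can be sketched directly: an interior mixed equilibrium makes each player indifferent between the opponent's pure strategies, so each player's strategy solves that player's own optimization problem given the other's. The payoff matrices below are made up, and the sketch ignores the interval-valued aspects of the article.

```python
# Interior mixed equilibrium of a 2x2 bimatrix game via indifference.
A = [[2.0, 0.0], [0.0, 1.0]]   # row player's payoff matrix (made up)
B = [[1.0, 0.0], [0.0, 2.0]]   # column player's payoff matrix (made up)

# p = P(row 1): chosen so the column player is indifferent between columns.
p = (B[1][1] - B[1][0]) / (B[0][0] - B[1][0] - B[0][1] + B[1][1])
# q = P(col 1): chosen so the row player is indifferent between rows.
q = (A[1][1] - A[0][1]) / (A[0][0] - A[0][1] - A[1][0] + A[1][1])

def row_payoff(p_, q_):
    """Expected payoff of the row player under mixed strategies (p_, q_)."""
    return (p_ * q_ * A[0][0] + p_ * (1 - q_) * A[0][1]
            + (1 - p_) * q_ * A[1][0] + (1 - p_) * (1 - q_) * A[1][1])

def col_payoff(p_, q_):
    """Expected payoff of the column player."""
    return (p_ * q_ * B[0][0] + p_ * (1 - q_) * B[0][1]
            + (1 - p_) * q_ * B[1][0] + (1 - p_) * (1 - q_) * B[1][1])

# Equilibrium check: no pure-strategy deviation improves either payoff,
# i.e. each strategy is optimal in that player's own optimization problem.
assert row_payoff(1.0, q) <= row_payoff(p, q) + 1e-12
assert row_payoff(0.0, q) <= row_payoff(p, q) + 1e-12
assert col_payoff(p, 1.0) <= col_payoff(p, q) + 1e-12
assert col_payoff(p, 0.0) <= col_payoff(p, q) + 1e-12
```

For these matrices the equilibrium is (p, q) = (2/3, 1/3); each player's mixed strategy is a maximizer of that player's expected payoff holding the other fixed.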



Author(s):  
Rudy Chocat
Loïc Brevault
Mathieu Balesdent
Sébastien Defoort

The design of complex systems often induces a constrained optimization problem under uncertainty. An adaptation of the CMA-ES(λ, μ) optimization algorithm is proposed in order to handle the constraints efficiently in the presence of noise. The update mechanisms of the parametrized distribution used to generate the candidate solutions are modified. The constraint-handling method reduces the semi-principal axes of the probable search ellipsoid in the directions that violate the constraints. The proposed approach is compared with existing approaches on three analytic optimization problems to highlight its efficiency and robustness. The proposed method is then used to design a two-stage solid-propulsion launch vehicle.
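A minimal sketch of the axis-reduction idea: when a candidate violates a constraint, the sampling covariance is shrunk along the violated direction by a rank-one update, so later samples fall on the feasible side more often. The 2-D setup, the update constant, and the constraint are made-up illustrations, not the exact update of the proposed algorithm.

```python
# Rank-one covariance shrink along a violated constraint direction,
# a simplified stand-in for the adapted CMA-ES update (beta is made up).

def shrink(C, v, beta=0.5):
    """Shrink 2x2 covariance C along direction v: C <- C - beta*(Cv)(Cv)^T/(v^T C v)."""
    Cv = [C[0][0] * v[0] + C[0][1] * v[1],
          C[1][0] * v[0] + C[1][1] * v[1]]
    vCv = v[0] * Cv[0] + v[1] * Cv[1]
    return [[C[i][j] - beta * Cv[i] * Cv[j] / vCv for j in range(2)]
            for i in range(2)]

C = [[1.0, 0.0], [0.0, 1.0]]          # isotropic sampling distribution
normal = (1.0, 0.0)                   # constraint g(x) = x0 <= 0 was violated
C = shrink(C, normal)
# variance along the constraint normal halves; the tangent axis is untouched
```

The search ellipsoid's semi-principal axis along the constraint normal is reduced while the feasible directions keep their scale, which is the qualitative behavior the constraint-handling method aims for.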



2012 · Vol 28 (1) · pp. 133-141
Author(s):  
EMILIA-LOREDANA POP
DOREL I. DUCA
In this paper, we attach to the optimization problem (P) ... where X is a subset of R^n, f : X → R, g = (g1, ..., gm) : X → R^m and h = (h1, ..., hq) : X → R^q are functions, the (0, 1)-η-approximated optimization problem (AP). We study the connections between the optimal solutions of Problem (AP), the saddle points of Problem (AP), the optimal solutions of Problem (P), and the saddle points of Problem (P).



2018 · Vol 34 (1) · pp. 01-07
Author(s):  
TADEUSZ ANTCZAK

In this paper, a new approximation method for a characterization of optimal solutions in a class of nonconvex differentiable optimization problems is introduced. In this method, an auxiliary optimization problem is constructed for the considered nonconvex extremum problem. The equivalence between optimal solutions in the considered differentiable extremum problem and its approximated optimization problem is established under (Φ, ρ)-invexity hypotheses.



2012 · Vol 20 (1) · pp. 27-62
Author(s):  
Kalyanmoy Deb
Amit Saha

In a multimodal optimization task, the main purpose is to find multiple optimal solutions (global and local), so that the user can have better knowledge about different optimal solutions in the search space and, when needed, switch the current solution to another suitable optimum. To this end, evolutionary optimization algorithms (EAs) stand as viable methodologies, mainly due to their ability to find and capture multiple solutions within a population in a single simulation run. Since the preselection method suggested in 1970, there has been a steady stream of new algorithms. Most of these methodologies employ a niching scheme within an existing single-objective evolutionary algorithm framework, so that similar solutions in a population are deemphasized in order to focus on and maintain multiple distant yet near-optimal solutions. In this paper, we use a completely different strategy in which the single-objective multimodal optimization problem is converted into a suitable bi-objective optimization problem, so that all optimal solutions become members of the resulting weak Pareto-optimal set. With modified definitions of domination and different formulations of an artificially created additional objective function, we present successful results on problems with as many as 500 optima. Most past multimodal EA studies considered problems having only a few variables. In this paper, we have solved test problems with up to 16 variables and as many as 48 optimal solutions, and for the first time suggest multimodal constrained test problems that are scalable in the number of optima, constraints, and variables. The concept of using bi-objective optimization for solving single-objective multimodal optimization problems seems novel and interesting and, more importantly, opens up further avenues for research and application.
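The bi-objective conversion can be sketched on a tiny 1-D problem: pairing the original objective with an artificially created second objective, here the gradient magnitude, makes every stationary point (in particular every optimum) weakly Pareto-optimal. The test function and this particular second objective are illustrative choices, not the paper's exact formulations.

```python
# Bi-objective conversion of a multimodal problem: (f(x), |f'(x)|).
# Every point with f'(x) = 0 lands on the weak Pareto front.

def f(x):                 # two global minima, at x = -1 and x = +1
    return (x * x - 1.0) ** 2

def df(x):                # derivative; zero at every stationary point
    return 4.0 * x * (x * x - 1.0)

grid = [i / 100 for i in range(-200, 201)]          # x in [-2, 2]
objs = {x: (f(x), abs(df(x))) for x in grid}

def weakly_dominated(x):
    """Some y is strictly better in BOTH objectives."""
    fx = objs[x]
    return any(objs[y][0] < fx[0] and objs[y][1] < fx[1] for y in grid)

weak_front = [x for x in grid if not weakly_dominated(x)]
```

On this grid the weak front is exactly {-1, 0, 1}: both global minima plus the stationary local maximum at 0, which this simple gradient-based second objective also admits; the paper's modified domination definitions are what separate the wanted optima from such extra stationary points.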



2017 · Vol 27 (2) · pp. 153-167
Author(s):  
M. Dhingra
C.S. Lalitha

In this paper we introduce a notion of minimal solutions for set-valued optimization problems in terms of improvement sets, by unifying a solution notion introduced by Kuroiwa [15] for set-valued problems and a notion of optimal solutions in terms of improvement sets introduced by Chicco et al. [4] for vector optimization problems. We provide existence theorems for these solutions and establish lower convergence of the minimal solution sets in the sense of Painlevé-Kuratowski.


