Solving Bi-Matrix Games Based on Fuzzy Payoffs via Utilizing the Interval Value Function Method

Mathematics ◽  
2019 ◽  
Vol 7 (5) ◽  
pp. 469
Author(s):  
Kaisheng Liu ◽  
Yumei Xing

In this article, we introduce a model of bi-matrix games with crisp parametric payoffs, obtained by the interval value function method. We then show that equilibrium solutions of bi-matrix games with fuzzy payoffs and equilibrium solutions of this parametric game model coincide. Furthermore, we conclude that equilibrium solutions of the game can be converted into optimal solutions of discrete nonlinear optimization problems with parameters. Lastly, the proposed methodology is illustrated by an example.
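The abstract states no explicit formulas, so the following is only a minimal numerical sketch of the pipeline it describes: hypothetical interval payoffs are collapsed to crisp ones through a convex-combination parameter lambda (one common reading of the interval value function method), and an equilibrium of each resulting crisp bi-matrix game is computed via the classical Mangasarian-Stone optimization formulation. The payoff matrices, the lambda-weighting, and the choice of SLSQP are illustrative assumptions, not the authors' construction.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical interval payoff matrices for players I and II; every entry
# is an interval [lo, hi]. These numbers are made up for illustration.
A_lo = np.array([[2.0, 0.0], [1.0, 3.0]]); A_hi = np.array([[3.0, 1.0], [2.0, 4.0]])
B_lo = np.array([[1.0, 2.0], [0.0, 1.0]]); B_hi = np.array([[2.0, 3.0], [1.0, 2.0]])

def crisp_payoffs(lam):
    """Collapse interval payoffs with weight lam in [0, 1] (an assumed
    convex-combination reading of the interval value function)."""
    return lam * A_hi + (1 - lam) * A_lo, lam * B_hi + (1 - lam) * B_lo

def equilibrium(A, B):
    """Mangasarian-Stone: (x, y) is a Nash equilibrium of the bi-matrix
    game (A, B) iff it maximizes x'(A+B)y - p - q subject to Ay <= p,
    B'x <= q, with x and y on the probability simplex."""
    m, n = A.shape
    def neg_obj(z):
        x, y, p, q = z[:m], z[m:m + n], z[-2], z[-1]
        return -(x @ (A + B) @ y - p - q)
    cons = [
        {"type": "eq",   "fun": lambda z: z[:m].sum() - 1},
        {"type": "eq",   "fun": lambda z: z[m:m + n].sum() - 1},
        {"type": "ineq", "fun": lambda z: z[-2] - A @ z[m:m + n]},  # p >= (Ay)_i
        {"type": "ineq", "fun": lambda z: z[-1] - B.T @ z[:m]},     # q >= (B'x)_j
    ]
    bounds = [(0, 1)] * (m + n) + [(None, None)] * 2
    z0 = np.concatenate([np.full(m, 1 / m), np.full(n, 1 / n), [1.0, 1.0]])
    res = minimize(neg_obj, z0, bounds=bounds, constraints=cons, method="SLSQP")
    return res.x[:m], res.x[m:m + n]

for lam in (0.0, 0.5, 1.0):
    A, B = crisp_payoffs(lam)
    x, y = equilibrium(A, B)
    print(f"lambda={lam}: x={np.round(x, 3)}, y={np.round(y, 3)}")
```

At an exact equilibrium the Mangasarian-Stone objective attains zero; SLSQP is only a local solver for this bilinear program, so larger games would need a global method, matching the abstract's point that the equilibria correspond to solutions of parametric nonlinear optimization problems.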

Author(s):  
Kaisheng Liu ◽  
Yumei Xing

This article puts forward bi-matrix games with crisp parametric payoffs based on the interval value function approach. We conclude that the equilibrium solutions of the game model can be converted into optimal solutions of a pair of nonlinear optimization problems. Finally, experimental results show the efficiency of the model.


2015 ◽  
Vol 2015 ◽  
pp. 1-16
Author(s):  
Lei Fan ◽  
Yuping Wang ◽  
Xiyang Liu ◽  
Liping Jia

Auxiliary function methods provide effective and practical ideas for solving multimodal optimization problems. However, improper parameter settings often cause troublesome effects that can lead to failure to find global optimal solutions. In this paper, a minimum-elimination-escape function method is proposed for multimodal optimization problems, aiming to avoid the troublesome "Mexican hat" effect and to reduce the influence of local optimal solutions. In the proposed method, a minimum-elimination function is first constructed to decrease the number of local optima. Then, a minimum-escape function is built on the minimum-elimination function, converting the current minimal solution into the unique global maximal solution of the minimum-escape function. The minimum-escape function is insensitive to its single, easy-to-tune parameter. Finally, a minimum-elimination-escape function method is designed from these two functions. Experiments are conducted on 19 widely used benchmarks, analyzing the influence of the parameter and of different initial points. Comparisons with 11 existing methods indicate that the proposed algorithm is effective.
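The abstract names the two constructions but does not reproduce their formulas, so the sketch below only illustrates the general auxiliary-function idea with a made-up escape function: basins no better than the incumbent minimum are flattened (loosely mimicking minimum elimination), and a bump at the incumbent turns it into a local maximizer so a restarted descent is pushed into another basin. The Rastrigin objective, the Gaussian bump, and all parameter values are assumptions for illustration, not the paper's construction.

```python
import numpy as np
from scipy.optimize import minimize

def f(x):
    """Rastrigin: a standard multimodal test objective."""
    x = np.asarray(x)
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def escape_fn(x, x_star, f_star, w=50.0, sigma=0.5):
    """Illustrative escape function (not the paper's exact form):
    max(f - f_star, 0) flattens every basin not better than the incumbent,
    and the Gaussian bump makes x_star a local maximizer of the auxiliary."""
    x = np.asarray(x)
    return max(f(x) - f_star, 0.0) + w * np.exp(-np.sum((x - x_star)**2) / sigma**2)

rng = np.random.default_rng(0)
x_star = minimize(f, rng.uniform(-5, 5, size=2)).x        # first local minimum
for _ in range(20):                                       # escape-and-descend loop
    f_star = f(x_star)
    start = x_star + rng.normal(scale=0.1, size=2)
    x_esc = minimize(escape_fn, start, args=(x_star, f_star),
                     method="Nelder-Mead").x
    if f(x_esc) < f_star:                                 # reached a better basin
        x_star = minimize(f, x_esc).x                     # descend to its minimum
print("best point:", np.round(x_star, 4), "value:", round(float(f(x_star)), 6))
```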


2018 ◽  
Vol 18 (02) ◽  
pp. 175-183 ◽  
Author(s):  
Dong Qiu ◽  
Yumei Xing ◽  
Shuqiao Chen

1999 ◽  
Vol 9 (3) ◽  
pp. 755-778 ◽  
Author(s):  
Paul T. Boggs ◽  
Anthony J. Kearsley ◽  
Jon W. Tolle

Author(s):  
M. Hoffhues ◽  
W. Römisch ◽  
T. M. Surowiec

The vast majority of stochastic optimization problems require the approximation of the underlying probability measure, e.g., by sampling or using observations. It is therefore crucial to understand the dependence of the optimal value and optimal solutions on these approximations as the sample size increases or more data becomes available. Due to the weak convergence properties of sequences of probability measures, there is no guarantee that these quantities will exhibit favorable asymptotic properties. We consider a class of infinite-dimensional stochastic optimization problems inspired by recent work on PDE-constrained optimization as well as functional data analysis. For this class of problems, we provide both qualitative and quantitative stability results on the optimal value and optimal solutions. In both cases, we make use of the method of probability metrics. The optimal values are shown to be Lipschitz continuous with respect to a minimal information metric and consequently, under further regularity assumptions, with respect to certain Fortet-Mourier and Wasserstein metrics. We prove that even in the most favorable setting, the solutions are at best Hölder continuous with respect to changes in the underlying measure. The theoretical results are tested in the context of Monte Carlo approximation for a numerical example involving PDE-constrained optimization under uncertainty.
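The paper's setting is infinite-dimensional and PDE-constrained, which cannot be reproduced in a few lines; the toy sketch below only illustrates the basic phenomenon studied, in the simplest finite-dimensional case: as the sample behind a sample-average approximation grows, the optimal value settles quickly while the minimizer remains noisier. The objective, the distribution, and the grid solver are illustrative choices, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def saa_solve(xi):
    """Solve min_x mean_i (x - xi_i)^2 + 0.1*|x| on a fine grid; in 1-D,
    brute force is exact enough for illustration. The two sample-moment
    terms come from expanding mean((x - xi_i)^2)."""
    grid = np.linspace(-3, 3, 6001)
    vals = grid**2 - 2 * grid * xi.mean() + (xi**2).mean() + 0.1 * np.abs(grid)
    k = vals.argmin()
    return grid[k], vals[k]

for n in (10, 100, 1000, 10000):
    xi = rng.standard_normal(n)       # the measure N(0, 1), approximated by samples
    x_n, v_n = saa_solve(xi)
    print(f"n={n:6d}: optimal value {v_n:.4f}, minimizer {x_n:+.4f}")
# The optimal values stabilize at the Monte Carlo rate (value stability),
# while the minimizers fluctuate more: solutions are at best Hoelder-stable
# with respect to perturbations of the measure, as the paper proves.
```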


2021 ◽  
Vol 12 (4) ◽  
pp. 81-100
Author(s):  
Yao Peng ◽  
Zepeng Shen ◽  
Shiqi Wang

Multimodal optimization problems have multiple global and many local optimal solutions. The difficulty in solving them lies in finding as many local optimal peaks as possible while preserving the precision of the global optimum. This article presents adaptive grouping brainstorm optimization (AGBSO) for solving such problems. An adaptive grouping strategy is proposed to achieve grouping without any prior knowledge from the user. To enhance the diversity and accuracy of the algorithm, an elite reservation strategy is proposed that places central particles into an elite pool, and a peak detection strategy is proposed that removes particles in the elite pool that lie far from optimal peaks. Finally, the article uses test functions of different dimensions to compare the convergence, accuracy, and diversity of AGBSO with those of BSO. Experiments verify that AGBSO has strong localization ability for local optimal solutions while maintaining the accuracy of the global optimal solutions.
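The abstract names the three strategies but gives no formulas, so the sketch below is a heavily simplified, single-file illustration of a brainstorm-optimization loop with grouping, an elite pool, and a crude peak filter. The two-peak objective, the tiny k-means grouping, and every threshold are assumptions for illustration; the paper's adaptive grouping and peak detection rules are more elaborate.

```python
import numpy as np

rng = np.random.default_rng(2)

def f(x):
    """Toy multimodal objective to maximize: two Gaussian peaks of
    different heights (illustrative; the paper uses benchmark suites)."""
    return (np.exp(-np.sum((x - 2.0)**2, axis=-1))
            + 0.9 * np.exp(-np.sum((x + 2.0)**2, axis=-1)))

def group(pop, k):
    """Tiny k-means, standing in for the paper's adaptive grouping."""
    centers = pop[rng.choice(len(pop), k, replace=False)]
    for _ in range(10):
        labels = np.linalg.norm(pop[:, None] - centers[None], axis=-1).argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pop[labels == j].mean(axis=0)
    return labels

pop = rng.uniform(-4, 4, size=(60, 2))
elite_pool = []
for _ in range(100):
    labels = group(pop, k=4)
    for j in range(4):
        idx = np.flatnonzero(labels == j)
        if idx.size == 0:
            continue
        best = pop[idx][f(pop[idx]).argmax()]
        elite_pool.append(best.copy())                    # elite reservation
        cand = best + rng.normal(scale=0.3, size=(idx.size, 2))
        better = f(cand) > f(pop[idx])                    # BSO-style new ideas
        pop[idx[better]] = cand[better]

# Crude peak detection: keep the best elites that are mutually far apart.
peaks = []
for e in sorted(elite_pool, key=f, reverse=True):
    if all(np.linalg.norm(e - p) > 1.0 for p in peaks):
        peaks.append(e)
print("detected peaks:", [tuple(np.round(p, 2)) for p in peaks])
```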

