On one Saddle Point Search Algorithm for Continuous Linear Games as Applied to Information Security Problems

Author(s):  
A.Yu. Bykov ◽  
I.A. Krygin ◽  
M.V. Grishunin ◽  
I.A. Markova

The paper introduces a game formulation of a two-player problem: the defender determines the security levels of objects, and the attacker selects the objects to attack. Each player distributes their resources among the objects. The estimated possible damage to the defender serves as the quality indicator. The continuous zero-sum game under constraints on the players' resources is formulated so that each player must solve their own linear programming problem given a fixed solution of the other player. The purpose of this research was to develop an algorithm for finding a saddle point. The algorithm is approximate and is based on reducing the continuous problem to discrete or matrix games of high dimension, since the optimal solutions are located at the vertices or on the faces of the simplices that define the sets of the players' admissible solutions, and the number of vertices or faces of these simplices is finite. In the proposed algorithm, the players' optimization problems are solved sequentially against the accumulated averaged solution of the other player; in effect, the ideas of the Brown–Robinson method are used. An example of solving the problem is given. The paper studies how the number of algorithm steps depends on the relative error of the quality indicator and on the dimension of the problem, i.e., the number of protected objects, for a given relative error. The initial data are generated using pseudo-random number generators.
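The Brown–Robinson (fictitious play) idea underlying the algorithm can be sketched for a plain matrix game: each player repeatedly best-responds to the opponent's empirical mixture of past plays, and the empirical frequencies approach an optimal mixed strategy. A minimal sketch (illustrative only, not the authors' algorithm, which operates on the continuous game):

```python
# Fictitious-play (Brown–Robinson) sketch for a small zero-sum matrix game.
# A[i][j] is the payoff to the row (maximizing) player.
def fictitious_play(A, steps=2000):
    m, n = len(A), len(A[0])
    row_counts = [0] * m  # how often each row strategy was played
    col_counts = [0] * n  # how often each column strategy was played
    i = 0  # arbitrary initial row strategy
    for _ in range(steps):
        row_counts[i] += 1
        # column player best-responds (minimizes) against row's empirical mix
        j = min(range(n), key=lambda c: sum(row_counts[r] * A[r][c] for r in range(m)))
        col_counts[j] += 1
        # row player best-responds (maximizes) against column's empirical mix
        i = max(range(m), key=lambda r: sum(col_counts[c] * A[r][c] for c in range(n)))
    # empirical mixed strategies
    x = [c / steps for c in row_counts]
    y = [c / steps for c in col_counts]
    return x, y
```

For matching pennies the frequencies converge (slowly, which matches the paper's interest in the number of steps needed for a given relative error) to the uniform mixed strategy.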

Author(s):  
A.Yu. Bykov ◽  
M.V. Grishunin ◽  
I.A. Krygin

This paper deals with a continuous zero-sum game with resource constraints between a defender allocating resources to protect sites and an attacker choosing sites to attack. The problem is formulated so that each player has to solve its own linear program given a fixed solution of the other player. We show that in this case the saddle point is located on the faces of the simplices defining the feasible solutions. We propose a saddle point search algorithm based on enumerating the simplices' faces on hyperplanes of equal dimension. Each candidate face is defined by a Boolean vector encoding the states of the variables and problem constraints, so the search over faces reduces to a search over feasible Boolean vectors. To reduce the computational complexity of this search, we formulate rules for discarding patently infeasible faces. Each point of a face belonging to an (m−1)-dimensional hyperplane is defined using m points of the hyperplane, and we give an algorithm for generating these points. Two systems of linear equations must be solved to find the saddle point if it is located on faces of simplices belonging to hyperplanes of equal dimension. We present a generic algorithm for saddle point search on faces located on hyperplanes of equal dimension, an example of solving a problem, and the results of computational experiments.
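Once candidate solutions (e.g. simplex vertices) are enumerated, checking whether the resulting matrix game has a saddle point in pure strategies amounts to comparing the maximin and minimax values. A minimal sketch of that check (a simplification: the paper searches faces, not just vertices, and handles mixed solutions via systems of linear equations):

```python
# Pure-strategy saddle point check for a matrix game M,
# where M[i][j] is the payoff to the row (maximizing) player.
def saddle_point(M):
    """Return (i, j, value) if M has a pure saddle point, else None."""
    row_mins = [min(row) for row in M]
    col_maxs = [max(col) for col in zip(*M)]
    lower = max(row_mins)  # maximin: row player's guaranteed payoff
    upper = min(col_maxs)  # minimax: column player's guaranteed ceiling
    if lower == upper:     # the two bounds meet => pure saddle point
        return row_mins.index(lower), col_maxs.index(upper), lower
    return None            # value lies strictly between; mixing is needed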


2014 ◽  
Vol 989-994 ◽  
pp. 2532-2535
Author(s):  
Hong Gang Xia ◽  
Qing Zhou Wang

This paper presents a modified harmony search (MHS) algorithm for solving numerical optimization problems. MHS employs a novel self-learning strategy for generating new solution vectors that improves the accuracy and convergence rate of the harmony search (HS) algorithm. In the proposed MHS algorithm, the harmony memory consideration rate (HMCR) is dynamically adapted to the changes of the objective function values in the current harmony memory, while the other two key parameters, PAR and bw, are adjusted dynamically with the generation number. In a large number of experiments, MHS demonstrated stronger convergence and stability than the original harmony search (HS) algorithm and two of its improved variants (IHS and GHS).
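For reference, the baseline HS improvisation loop that MHS builds on can be sketched as follows; the linear PAR growth and bw decay schedules here are illustrative stand-ins, not the paper's specific update rules:

```python
import random

def harmony_search(f, dim, lb, ub, hms=10, iters=2000,
                   hmcr=0.9, par_min=0.1, par_max=0.9):
    """Minimal HS sketch (minimization). PAR grows and bw shrinks with the
    iteration count, loosely mirroring dynamic parameter control."""
    memory = [[random.uniform(lb, ub) for _ in range(dim)] for _ in range(hms)]
    fitness = [f(h) for h in memory]
    for t in range(iters):
        par = par_min + (par_max - par_min) * t / iters   # pitch-adjust rate
        bw = (ub - lb) * 0.01 * (1 - t / iters)           # shrinking bandwidth
        new = []
        for d in range(dim):
            if random.random() < hmcr:
                x = random.choice(memory)[d]              # memory consideration
                if random.random() < par:
                    x += random.uniform(-bw, bw)          # pitch adjustment
            else:
                x = random.uniform(lb, ub)                # random consideration
            new.append(min(max(x, lb), ub))
        fx = f(new)
        worst = max(range(hms), key=lambda i: fitness[i])
        if fx < fitness[worst]:                           # replace worst harmony
            memory[worst], fitness[worst] = new, fx
    best = min(range(hms), key=lambda i: fitness[i])
    return memory[best], fitness[best]
```

MHS additionally adapts HMCR itself to the spread of objective values in the harmony memory, which this sketch omits.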


Author(s):  
Pei Cao ◽  
Zhaoyan Fan ◽  
Robert Gao ◽  
Jiong Tang

Multi-objective optimization problems are frequently encountered in engineering analyses. Optimization techniques in practical applications are devised and evaluated mostly for specific problems, and thus may not generalize when applications vary. In this study we formulate a probability-matching-based hyper-heuristic scheme and propose four low-level heuristics that work coherently with the single-point search algorithm MOSA/R (Multi-Objective Simulated Annealing Algorithm based on Re-pick) on multi-objective optimization problems of various properties, namely the DTLZ and UF test instances. By making use of domination amount, crowding distance, and hypervolume calculations, the hyper-heuristic scheme can meet different optimization requirements. The approach developed (MOSA/R-HH) exhibits better and more robust performance than AMOSA, NSGA-II, and MOEA/D, as illustrated in the numerical tests. The outcome of this research may potentially benefit various design and manufacturing practices.
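The probability-matching idea behind such a hyper-heuristic, choosing among low-level heuristics with probability proportional to their accumulated reward while keeping a minimum selection floor so no heuristic is starved, can be sketched as follows (the floor parameter `p_min` is an assumption for illustration, not taken from the paper):

```python
def probability_matching(rewards, p_min=0.1):
    """Map non-negative accumulated rewards of k low-level heuristics to
    selection probabilities: a floor of p_min each, with the remaining mass
    distributed proportionally to reward."""
    k = len(rewards)
    total = sum(rewards)
    if total == 0:
        return [1.0 / k] * k          # no feedback yet: select uniformly
    return [p_min + (1 - k * p_min) * r / total for r in rewards]
```

A heuristic that recently produced large improvements (e.g. in hypervolume) is then sampled more often, while poor performers retain probability `p_min` and can recover later in the run.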


Author(s):  
Lijuan He ◽  
Yan Wang

Simulating phase transformations of materials at the atomistic scale requires knowledge of the saddle points on the potential energy surface (PES). In existing first-principles saddle point search methods, the large number of expensive potential energy evaluations, e.g. using density functional theory (DFT), limits the application of such algorithms to large systems. It is therefore meaningful to minimize the number of function evaluations, i.e., DFT simulations, during the search. Furthermore, model-form uncertainty and numerical errors are inherent in DFT and in the search algorithms, so the robustness of the search results should be considered. In this paper, a new Kriging-based search algorithm is presented to locate local minima and saddle points on a PES efficiently and robustly. Unlike existing search methods, the algorithm keeps a memory of the search history by constructing surrogate models and uses the search results on the surrogate models to guide future searches on the PES. The surrogate model is also updated with new DFT simulation results. The algorithm is demonstrated on the Rastrigin and Schwefel functions, which have a multitude of minima and saddle points.
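The surrogate-guided loop, fit a cheap model to all evaluations so far, search the model, and spend the expensive evaluation only at the most promising point, can be sketched in one dimension. Here a simple inverse-distance interpolant stands in for the paper's Kriging model, and minimization stands in for saddle point location, so this is only the general pattern, not the authors' method:

```python
import random

def surrogate_search(f, lb, ub, n_init=5, n_iter=20, n_cand=200):
    """Surrogate-guided 1-D minimization sketch: each iteration scores many
    cheap candidates on the surrogate and evaluates the expensive f once."""
    xs = [random.uniform(lb, ub) for _ in range(n_init)]  # initial designs
    ys = [f(x) for x in xs]                               # expensive evals

    def surrogate(x):
        # inverse-distance-weighted interpolant over the evaluation memory
        wsum, num = 0.0, 0.0
        for xi, yi in zip(xs, ys):
            d = abs(x - xi)
            if d < 1e-12:
                return yi
            w = 1.0 / d ** 2
            wsum += w
            num += w * yi
        return num / wsum

    for _ in range(n_iter):
        cands = [random.uniform(lb, ub) for _ in range(n_cand)]
        x_new = min(cands, key=surrogate)   # cheap search on the surrogate
        xs.append(x_new)
        ys.append(f(x_new))                 # one expensive evaluation; memory grows
    i = min(range(len(ys)), key=lambda k: ys[k])
    return xs[i], ys[i]
```

A Kriging model would additionally supply a predictive variance, allowing the candidate score to trade off the predicted value against uncertainty instead of trusting the interpolant alone.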


2020 ◽  
Vol 30 (1) ◽  
pp. 1-17
Author(s):  
Iyad Abu Doush ◽  
Eugene Santos

Abstract Harmony Search Algorithm (HSA) is an evolutionary algorithm which mimics the process of music improvisation to obtain a pleasing harmony. The algorithm has been successfully applied to optimization problems in different domains. A significant shortcoming of the algorithm is inadequate exploitation when solving complex problems. The algorithm relies on three operators for improvisation: memory consideration, pitch adjustment, and random consideration. To improve the algorithm's efficiency, we use roulette wheel and tournament selection in memory consideration, replace pitch adjustment and random consideration with a modified polynomial mutation, and enhance the newly obtained harmony with a modified β-hill climbing algorithm. These modifications help maintain diversity and enhance the convergence speed of the modified HS algorithm. β-hill climbing is a recently introduced local search algorithm that can effectively solve different optimization problems; it is utilized in the modified HS algorithm as a local search technique to improve the solution generated by HS. Two algorithms are proposed: PHSβ–HC and Imp. PHSβ–HC. They are evaluated on 13 classical global optimization benchmark functions with various ranges and complexities and compared against five other HSA variants on the same test functions; under the Friedman test, the two proposed algorithms rank 2nd (Imp. PHSβ–HC) and 3rd (PHSβ–HC). The two proposed algorithms are also compared against four versions of particle swarm optimization (PSO): the proposed PHSβ–HC algorithm generates the best results on three test functions, and the proposed Imp. PHSβ–HC algorithm overcomes the other algorithms on two test functions. Finally, the two proposed algorithms are compared with four variants of differential evolution (DE): PHSβ–HC produces the best results on three test functions, and Imp. PHSβ–HC outperforms the other algorithms on two. In a nutshell, the two modified HSAs are an efficient extension of HSA that can be used to solve various optimization applications in the future.
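The β-hill climbing component used as the local refiner can be sketched as follows: a small-bandwidth neighbourhood move, after which each coordinate is reset uniformly at random with probability β (the β-operator), keeping the candidate only if it improves. Parameter values here are illustrative assumptions, not the paper's settings:

```python
import random

def beta_hill_climbing(f, solution, lb, ub, beta=0.05, bw=0.1, iters=1000):
    """Minimal beta-hill-climbing sketch (minimization)."""
    best = list(solution)
    best_f = f(best)
    for _ in range(iters):
        # N-operator: small random move within bandwidth bw
        cand = [x + random.uniform(-bw, bw) for x in best]
        # beta-operator: reset each coordinate with probability beta
        cand = [random.uniform(lb, ub) if random.random() < beta else x
                for x in cand]
        cand = [min(max(x, lb), ub) for x in cand]  # clamp to bounds
        fc = f(cand)
        if fc < best_f:                             # greedy acceptance
            best, best_f = cand, fc
    return best, best_f
```

The β-operator provides the escape mechanism that plain hill climbing lacks, which is why it pairs well with HS as a post-improvisation refinement step.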


Automatica ◽  
2016 ◽  
Vol 69 ◽  
pp. 150-156 ◽  
Author(s):  
Shaunak D. Bopardikar ◽  
Cédric Langbort

2019 ◽  
Vol 2 (3) ◽  
pp. 508-517
Author(s):  
FerdaNur Arıcı ◽  
Ersin Kaya

Optimization is the process of searching for the most suitable solution to a problem within an acceptable time interval, and the algorithms that solve optimization problems are called optimization algorithms. The literature contains many optimization algorithms with different characteristics, and they can exhibit different behaviors depending on the size, characteristics, and complexity of the optimization problem. In this study, six well-known population-based optimization algorithms were used: the artificial algae algorithm (AAA), artificial bee colony algorithm (ABC), differential evolution algorithm (DE), genetic algorithm (GA), gravitational search algorithm (GSA), and particle swarm optimization (PSO). These six algorithms were run on the CEC'17 test functions, and based on the experimental results the algorithms were compared and their performances evaluated.
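As an example of one of the population-based algorithms compared here, the classic DE/rand/1/bin variant of differential evolution can be sketched as follows (parameter values F, CR and the population size are conventional defaults, not the study's settings):

```python
import random

def differential_evolution(f, dim, lb, ub, pop_size=20, F=0.5, CR=0.9, gens=200):
    """Minimal DE/rand/1/bin sketch (minimization)."""
    pop = [[random.uniform(lb, ub) for _ in range(dim)] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            # pick three distinct members other than i for mutation
            a, b, c = random.sample([j for j in range(pop_size) if j != i], 3)
            jrand = random.randrange(dim)  # guarantees one mutated coordinate
            trial = [
                min(max(pop[a][d] + F * (pop[b][d] - pop[c][d]), lb), ub)
                if (random.random() < CR or d == jrand) else pop[i][d]
                for d in range(dim)
            ]
            ft = f(trial)
            if ft <= fit[i]:               # greedy one-to-one selection
                pop[i], fit[i] = trial, ft
    best = min(range(pop_size), key=lambda i: fit[i])
    return pop[best], fit[best]
```

Benchmark suites such as CEC'17 then compare such algorithms by statistics of the best objective values reached under a fixed evaluation budget.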

