Approaches to discrete optimization problems with interval objective function

Author(s):  
Alexander Vyacheslavovich Prolubnikov

Optimization problems with uncertainties in their input data have been investigated by many researchers in different directions. Applied problems have many sources of uncertainty in their input data; inaccurate measurements and the variability of parameters over time are two such sources. For a wide range of applied problems, an interval of possible values is the natural, and often the only possible, way to represent an uncertain parameter. We consider discrete optimization problems with interval uncertainties in their objective functions. The purpose of the paper is to provide an overview of the investigations in this field, given in the overall context of research on optimization problems with uncertainties. We review interval approaches for discrete optimization problems with an interval objective function. The approaches we consider operate with interval values and focus on obtaining possible solutions, or certain sets of solutions, that are optimal according to the concepts of optimality the approaches use. We consider several such concepts: robust solutions, Pareto sets, weak and strong solutions, united solution sets, and sets of possible approximate solutions corresponding to possible values of the uncertain parameters.
All the approaches we consider allow for the absence of information on the probability distribution over the intervals of possible parameter values, though some of them may use such information to evaluate the probabilities of possible solutions, the distribution over the interval of possible objective-function values for a solution, and so on. We assess the possibilities and limitations of the considered approaches.
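The weak ("possible") and strong ("necessary") optimality concepts mentioned above can be illustrated on a toy problem. The sketch below is an illustration under simplifying assumptions (a finite candidate set with independent interval costs), not a method from the paper: a candidate is possibly optimal if some realization of the intervals makes it cheapest, and necessarily optimal if it is cheapest under every realization.

```python
# Toy illustration (not from the paper): pick the cheapest of a few
# candidate solutions whose costs are known only as intervals [lo, hi].
# With independent intervals, candidate i is *possibly* (weakly) optimal
# iff lo_i <= hi_j for every other j, and *necessarily* (strongly)
# optimal iff hi_i <= lo_j for every other j.

def possibly_optimal(costs):
    """Indices that are cheapest for at least one realization."""
    return [i for i, (lo, _) in enumerate(costs)
            if all(lo <= hi for j, (_, hi) in enumerate(costs) if j != i)]

def necessarily_optimal(costs):
    """Indices that are cheapest for every realization."""
    return [i for i, (_, hi) in enumerate(costs)
            if all(hi <= lo for j, (lo, _) in enumerate(costs) if j != i)]

costs = [(2, 5), (4, 6), (7, 9)]   # interval costs of three solutions
print(possibly_optimal(costs))     # -> [0, 1]: either can be cheapest
print(necessarily_optimal(costs))  # -> []: none wins under all realizations
```

The empty "necessary" set is the typical situation that motivates the weaker optimality concepts surveyed in the paper.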

2021, Vol 2078 (1), pp. 012018
Author(s):  
Qinglong Chen ◽  
Yong Peng ◽  
Miao Zhang ◽  
Quanjun Yin

Abstract: Particle Swarm Optimization (PSO) is a class of algorithms for solving optimization problems. In practice, many optimization problems are discrete, but PSO was originally designed for continuous problems. Many researchers have addressed this gap, and a variety of discrete PSO algorithms have been proposed. However, these algorithms tend to focus on a specific problem, and their performance degrades significantly when they are extended to other problems. So far, there is no unified principle or method for analyzing the application of PSO to discrete optimization problems, which limits the development of discrete PSO algorithms. To address this challenge, we first investigate the PSO algorithm from the perspective of spatial search; then we analyze the key features that change when PSO is applied to discrete optimization, and propose a classification method to summarize existing discrete PSO algorithms.
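One of the earliest ways the continuous-to-discrete adaptation described above was made concrete is binary PSO, where velocities stay real-valued and each position bit is resampled with probability sigmoid(velocity). The sketch below is a generic illustration of that idea on the OneMax problem (maximize the number of 1-bits), not an algorithm from this particular paper; all parameter values are illustrative.

```python
import math
import random

# Binary-PSO sketch (generic illustration): velocities are updated as in
# standard PSO, but a position bit is set to 1 with probability
# sigmoid(velocity). Fitness here is OneMax: the count of 1-bits.

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def binary_pso(n_bits=20, n_particles=15, iters=100, seed=0):
    rng = random.Random(seed)
    X = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(n_particles)]
    V = [[0.0] * n_bits for _ in range(n_particles)]
    pbest = [x[:] for x in X]                      # personal bests
    gbest = max(pbest, key=sum)[:]                 # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n_bits):
                r1, r2 = rng.random(), rng.random()
                V[i][d] += 2 * r1 * (pbest[i][d] - X[i][d]) \
                         + 2 * r2 * (gbest[d] - X[i][d])
                V[i][d] = max(-4.0, min(4.0, V[i][d]))  # clamp velocity
                X[i][d] = 1 if rng.random() < sigmoid(V[i][d]) else 0
            if sum(X[i]) > sum(pbest[i]):
                pbest[i] = X[i][:]
                if sum(pbest[i]) > sum(gbest):
                    gbest = pbest[i][:]
    return gbest

best = binary_pso()
print(sum(best))  # fitness of the best bit string found
```

Note how the velocity term keeps its continuous meaning while the position update becomes probabilistic; this is exactly the kind of feature change in spatial search that the paper's classification targets.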


2010, Vol 439-440, pp. 1493-1498
Author(s):  
Guo Ping Hou ◽  
Xuan Ma

Differential evolution (DE) is an evolutionary algorithm based on the idea of individual differential reconstruction. It was proposed by Storn and Price in 1997 and is well suited to optimization problems over continuous spaces. First, with the introduction of the differential operator (DO) and related concepts, a concise description of DE is given and its main features are analyzed. To solve discrete optimization problems with DE, the new algorithm uses a mapping operator so that the original mutation operator remains effective, and a new S operator based on the sigmoid function keeps the result of the mutation operator within the interval [0, 1]. The resulting algorithm retains the advantages of DE while being well suited to discrete optimization problems. Computations on the 0/1 knapsack problem show that the algorithm has good convergence capability and stability.


Author(s):  
Boris Melnikov ◽  
Elena Melnikova ◽  
Svetlana Pivneva ◽  
Vladislav Dudnikov ◽  
...  

In this paper we consider the adaptation of heuristics used for programming nondeterministic games to discrete optimization problems. In particular, we use some "game" heuristic methods of decision-making in various discrete optimization problems; the object in each of these problems is the programming of anytime algorithms. Among the problems described in this paper are the classical traveling salesman problem and some related minimization problems for nondeterministic finite automata. The first of the considered methods is a geometrical approach to some discrete optimization problems. For this approach, we define special characteristics relating to an initial particular case of the considered discrete optimization problem. For instance, one such statistical characteristic for the traveling salesman problem is a development of the so-called "distance functions" up to the geometric variant of the problem; using this distance, we choose specific algorithms for solving the problem. The other considered methods for solving these problems are constructed on the basis of a special combination of heuristics belonging to different areas of the theory of artificial intelligence. More precisely, we use modifications of the unfinished branch-and-bound method; for selecting the immediate step, we apply heuristics based on dynamic risk functions; for the selection of the averaging coefficients, we use genetic algorithms; and reductive self-learning by the same genetic methods is used to restart the unfinished branch-and-bound method. This combination of heuristics represents a special approach to the construction of anytime algorithms for discrete optimization problems. The approach can be considered an alternative to the application of linear programming methods, multi-agent optimization methods, and neural networks.


Author(s):  
Asieh Khosravanian ◽  
Mohammad Rahmanimanesh ◽  
Parviz Keshavarzi

The Social Spider Algorithm (SSA) was introduced, based on the information-sharing foraging strategy of spiders, to solve continuous optimization problems. SSA has been shown to outperform other state-of-the-art meta-heuristic algorithms in terms of best-achieved fitness values, scalability, reliability, and convergence speed. Preserving the strengths and outstanding performance of SSA, we propose a novel algorithm, the Discrete Social Spider Algorithm (DSSA), for solving discrete optimization problems; it is obtained by modifying the calculation of the distance function, the construction of the follow position, the movement method, and the fitness function of the original SSA. DSSA is employed to solve symmetric and asymmetric traveling salesman problems. To demonstrate the effectiveness of DSSA, TSPLIB benchmarks are used, and the results are compared with those of six other optimization methods: a discrete bat algorithm (IBA), a genetic algorithm (GA), an island-based distributed genetic algorithm (IDGA), evolutionary simulated annealing (ESA), a discrete imperialist competitive algorithm (DICA), and a discrete firefly algorithm (DFA). The simulation results demonstrate that DSSA outperforms the other techniques on these TSP instances. DSSA can also be applied to other discrete optimization problems, such as routing problems.
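The first modification listed above, replacing the continuous distance function, is a recurring step when discretizing swarm algorithms for the TSP: Euclidean distance between positions must become a distance between tours. A common choice, shown here as a generic illustration and not necessarily the DSSA authors' definition, is the minimum number of swaps turning one permutation into another (computed from the permutation's cycle structure).

```python
# Generic illustration of a tour distance for discretized swarm
# algorithms (not necessarily DSSA's definition): the minimum number of
# transpositions mapping one permutation of cities onto another.
# A permutation cycle of length k can be sorted with k - 1 swaps.

def swap_distance(tour_a, tour_b):
    """Minimum number of swaps transforming tour_a into tour_b."""
    pos = {city: i for i, city in enumerate(tour_b)}
    perm = [pos[city] for city in tour_a]   # tour_a expressed relative to tour_b
    seen = [False] * len(perm)
    swaps = 0
    for start in range(len(perm)):
        if not seen[start]:
            length, j = 0, start            # walk one cycle of the permutation
            while not seen[j]:
                seen[j] = True
                j = perm[j]
                length += 1
            swaps += length - 1             # k-cycle needs k - 1 swaps
    return swaps

print(swap_distance([0, 1, 2, 3], [0, 1, 2, 3]))  # -> 0: identical tours
print(swap_distance([1, 0, 3, 2], [0, 1, 2, 3]))  # -> 2: two independent swaps
```

With such a distance in place, a "follow" move toward a better tour can be realized as applying some of the swaps that separate the two tours, which mirrors how a continuous particle moves part of the way toward a leader.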


2010, Vol 36, pp. 279-286
Author(s):  
Roberto Quirino do Nascimento ◽  
Edson Figueiredo Lima ◽  
Rubia Mara de Oliveira Santos
