Optimization problems, first order approximated optimization problems and their connections

2012, Vol. 28(1), pp. 133-141
Author(s): Emilia-Loredana Pop, Dorel I. Duca

In this paper, we attach to the optimization problem (P) ... , where X is a subset of R^n and f : X → R, g = (g1, ..., gm) : X → R^m and h = (h1, ..., hq) : X → R^q are functions, the (0,1)-η-approximated optimization problem (AP). We study the connections between the optimal solutions of Problem (AP), the saddle points of Problem (AP), the optimal solutions of Problem (P), and the saddle points of Problem (P).
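The problem statement is elided above; as a sketch of the construction used in this line of work, the pair (P)/(AP) typically looks as follows. The notation is assumed, and one common reading of the "(0,1)" prefix, taken here as an assumption, is that the objective is kept at order 0 (unchanged) while the constraints are replaced by their first-order η-approximations at a point x^0:

```latex
% Sketch only; the exact (0,1) variant in the paper may differ.
\begin{align*}
(\mathrm{P})\quad & \min f(x) \\
& \text{s.t. } g(x) \leq 0,\; h(x) = 0,\; x \in X,
\end{align*}
% and its eta-approximation at a point x^0:
\begin{align*}
(\mathrm{AP})\quad & \min f(x) \\
& \text{s.t. } g(x^0) + \nabla g(x^0)\,\eta(x, x^0) \leq 0, \\
& \phantom{\text{s.t. }} h(x^0) + \nabla h(x^0)\,\eta(x, x^0) = 0,\; x \in X.
\end{align*}
```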

2012, Vol. 28(1), pp. 17-24
Author(s): Horatiu Vasile Boncea, Dorel I. Duca

Let X be a nonempty subset of R^n, x^0 an interior point of X, f : X → R a function differentiable at x^0, g : X → R^m a function twice differentiable at x^0, and η : X × X → R^n a function. In this paper, we attach to the optimization problem (P) ... the (1,2)-η-approximated optimization problem (AP) ... and study the relations between the optimal solutions of Problem (P), the optimal solutions of Problem (AP), the saddle points of Problem (P), and the saddle points of Problem (AP).
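For the (1,2) variant, a plausible sketch, assuming the same convention as above (objective approximated to first order, constraints to second order, componentwise for each g_i), is:

```latex
% Sketch only; convention assumed, the paper's exact form may differ.
\begin{align*}
(\mathrm{AP})\quad \min\; & f(x^0) + \nabla f(x^0)\,\eta(x, x^0) \\
\text{s.t. }\; & g_i(x^0) + \nabla g_i(x^0)^{\top}\eta(x, x^0)
  + \tfrac{1}{2}\,\eta(x, x^0)^{\top}\nabla^2 g_i(x^0)\,\eta(x, x^0) \leq 0, \\
& i = 1, \dots, m,\qquad x \in X.
\end{align*}
```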


2012, Vol. 28(1), pp. 37-46
Author(s): Liana Cioban, Dorel I. Duca

In this paper, we attach to the optimization problem (P) ... , where X is a subset of R^n and f : X → R, g : X → R^m and h : X → R^q are three functions, m, n, q ∈ N, a (0,2)-η-approximated optimization problem (AP). We study the connections between the feasible solutions of the η-approximated problem and those of the original problem, and then the connections between the optimal solutions of Problem (AP) and the optimal solutions of Problem (P) via the saddle points of the two problems.
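These papers relate the two problems through saddle points; for reference, a saddle point of (P) is defined through the standard Lagrangian (a standard definition, usual notation assumed):

```latex
\[
L(x, \lambda, \mu) = f(x) + \lambda^{\top} g(x) + \mu^{\top} h(x),
\qquad \lambda \in \mathbb{R}^m_{+},\; \mu \in \mathbb{R}^q .
\]
% (x^0, \lambda^0, \mu^0) is a saddle point of (P) when
\[
L(x^0, \lambda, \mu) \le L(x^0, \lambda^0, \mu^0) \le L(x, \lambda^0, \mu^0)
\quad \text{for all } x \in X,\; \lambda \in \mathbb{R}^m_{+},\; \mu \in \mathbb{R}^q .
\]
```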


2021, Vol. 12(4), pp. 81-100
Author(s): Yao Peng, Zepeng Shen, Shiqi Wang

A multimodal optimization problem has multiple global and many local optimal solutions. The difficulty in solving such problems lies in finding as many local optimal peaks as possible while preserving the precision of the global optima. This article presents adaptive grouping brainstorm optimization (AGBSO) for solving these problems. An adaptive grouping strategy is proposed that achieves grouping without any prior knowledge from the user. To enhance the diversity and accuracy of the algorithm, an elite reservation strategy places central particles into an elite pool, and a peak detection strategy deletes particles in the elite pool that lie far from optimal peaks. Finally, testing functions of different dimensions are used to compare the convergence, accuracy, and diversity of AGBSO with those of BSO. Experiments verify that AGBSO has strong localization ability for local optimal solutions while maintaining the accuracy of the global optimal solutions.
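A minimal, illustrative sketch of the two strategies described above; all names, parameters, and update rules here are invented for illustration and are not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(x):
    # Toy multimodal objective with many local peaks.
    return np.sum(np.sin(3.0 * x) - 0.1 * x**2, axis=-1)

def group_population(pop, fit, n_groups):
    # Grouping: use the n_groups fittest individuals as centers and
    # assign every particle to its nearest center.
    centers = pop[np.argsort(fit)[-n_groups:]]
    dists = np.linalg.norm(pop[:, None, :] - centers[None, :, :], axis=-1)
    return centers, np.argmin(dists, axis=1)

def prune_elites(elites, elite_fit, radius):
    # "Peak detection" stand-in: among elites closer than `radius`,
    # keep only the fittest one, so each retained elite marks a peak.
    keep = []
    for i in np.argsort(elite_fit)[::-1]:
        if all(np.linalg.norm(elites[i] - elites[j]) > radius for j in keep):
            keep.append(i)
    return elites[keep], elite_fit[keep]

# One illustrative iteration.
pop = rng.uniform(-4, 4, size=(60, 2))
fit = fitness(pop)
centers, labels = group_population(pop, fit, n_groups=5)

# Elite reservation: group centers enter the elite pool, then get pruned.
elite_pool, elite_fit = prune_elites(centers, fitness(centers), radius=0.5)

# Crude stand-in for the brainstorm operators: each particle moves
# toward its group's center with Gaussian noise.
pop = pop + 0.3 * (centers[labels] - pop) + 0.1 * rng.normal(size=pop.shape)
print(len(elite_pool), "elites retained")
```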


2011, Vol. 421, pp. 559-563
Author(s): Yong Chao Gao, Li Mei Liu, Heng Qian, Ding Wang

The scale and complexity of the search space are important factors determining how hard an optimization problem is to solve. Information about the solution space can guide the search toward optimal solutions. Based on this observation, an algorithm for combinatorial optimization is proposed. The algorithm exploits the good solutions found by intelligent algorithms, contracting the search space and partitioning it into one or several optimal regions defined by the backbones of combinatorial optimization solutions; optimization of small-scale subproblems is then carried out within these optimal regions. No statistical analysis is required before or during the solving process: solution information is used to estimate the landscape of the search space, which improves both solving speed and solution quality. The algorithm opens a new path for solving combinatorial optimization problems, and experimental results confirm its efficiency.
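A minimal sketch of the backbone idea, under the assumptions that solutions are binary-encoded and that positions on which all good solutions agree form the backbone and are frozen, so the small-scale search runs only over the remaining positions (all names here are illustrative):

```python
import numpy as np

def backbone(good_solutions):
    """Positions where all good solutions agree define the backbone."""
    sols = np.asarray(good_solutions)
    agree = np.all(sols == sols[0], axis=0)  # True where frozen
    return agree, sols[0]

def contracted_search(good_solutions, objective, n_trials=1000, seed=0):
    """Search only the free (non-backbone) positions - the 'optimal
    region' induced by the backbone."""
    rng = np.random.default_rng(seed)
    frozen, template = backbone(good_solutions)
    free = ~frozen
    best, best_val = None, -np.inf
    for _ in range(n_trials):
        x = template.copy()
        x[free] = rng.integers(0, 2, size=free.sum())  # small-scale probe
        v = objective(x)
        if v > best_val:
            best, best_val = x, v
    return best, best_val

# Toy usage: maximize the number of ones, given three "good" solutions
# that already agree on most positions.
good = [[1, 1, 1, 0, 1, 0, 1, 1],
        [1, 1, 1, 1, 1, 0, 0, 1],
        [1, 1, 1, 0, 1, 0, 0, 1]]
obj = lambda x: int(np.sum(x))
print(contracted_search(good, obj)[1])
```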


2013, Vol. 2013, pp. 1-6
Author(s): Zhi-Ang Zhou

We study ϵ-Henig saddle points and duality of set-valued optimization problems in the setting of real linear spaces. Firstly, an equivalent characterization of the ϵ-Henig saddle point of the Lagrangian set-valued map is obtained. Secondly, under the assumption of generalized cone subconvexlikeness of set-valued maps, the relationship between the ϵ-Henig saddle point of the Lagrangian set-valued map and the ϵ-Henig properly efficient element of the set-valued optimization problem is presented. Finally, some duality theorems are given.
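For orientation, the Lagrangian set-valued map in this setting typically has the following form (notation assumed; the paper's exact definitions may differ):

```latex
% For set-valued maps F : X \rightrightarrows Y (objective) and
% G : X \rightrightarrows Z (constraint), a common Lagrangian is
\[
L(x, T) = F(x) + T(G(x)),
\qquad T \in \mathcal{L}_{+}(Z, Y),
\]
% where \mathcal{L}_{+}(Z, Y) denotes the positive linear operators
% from Z to Y; an \epsilon-Henig saddle point is a pair (\bar{x}, \bar{T})
% at which L(\cdot, \bar{T}) attains an \epsilon-Henig properly efficient
% element together with a complementary maximality condition in T.
```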


2016, Vol. 685, pp. 142-147
Author(s): Vladimir Gorbunov, Elena Sinyukova

In this paper the authors describe necessary conditions of optimality for continuous multicriteria optimization problems. It is proved that the existence of efficient solutions requires the gradients of the individual criteria to be linearly dependent. The set of solutions is given by a system of equations. It is shown that to obtain necessary and sufficient conditions for multicriteria optimization problems, one must pass to a single-criterion optimization problem whose objective function is a convolution of the individual criteria. These results are consistent with those for nonlinear optimization problems with equality constraints. As an example, optimal solutions obtained by the method of the main criterion are examined for Pareto optimality.
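The linear-dependence condition and the convolution referred to above take the familiar Fritz-John-type form (a standard statement, written here in common notation):

```latex
% At a (locally) efficient point x^* of criteria f_1, ..., f_k, there
% exist multipliers \lambda_i \ge 0, not all zero, with
\[
\sum_{i=1}^{k} \lambda_i \nabla f_i(x^*) = 0 ,
\]
% i.e. the gradients are linearly dependent.  The associated
% single-criterion problem uses the weighted-sum convolution
\[
F(x) = \sum_{i=1}^{k} \lambda_i f_i(x), \qquad
\lambda_i \ge 0,\; \sum_{i=1}^{k} \lambda_i = 1 .
\]
```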


2018, Vol. 34(1), pp. 1-7
Author(s): Tadeusz Antczak

In this paper, a new approximation method for characterizing optimal solutions in a class of nonconvex differentiable optimization problems is introduced. In this method, an auxiliary optimization problem is constructed for the considered nonconvex extremum problem. The equivalence between optimal solutions of the considered differentiable extremum problem and of its approximated optimization problem is established under (Φ, ρ)-invexity hypotheses.
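For reference, a commonly used form of (Φ, ρ)-invexity is the following (up to notational details, which vary across papers):

```latex
% f is (\Phi, \rho)-invex at u \in X when, for all x \in X,
\[
f(x) - f(u) \ge \Phi\bigl(x, u, (\nabla f(u), \rho)\bigr),
\]
% where \Phi(x, u, \cdot) is convex on \mathbb{R}^n \times \mathbb{R}
% and \Phi(x, u, (0, a)) \ge 0 for every a \ge 0.
```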


2012, Vol. 20(1), pp. 27-62
Author(s): Kalyanmoy Deb, Amit Saha

In a multimodal optimization task, the main purpose is to find multiple optimal solutions (global and local), so that the user gains a better picture of the different optima in the search space and can, when needed, switch from the current solution to another suitable optimum. To this end, evolutionary optimization algorithms (EAs) are viable methodologies, mainly due to their ability to find and capture multiple solutions within a population in a single simulation run. Since the preselection method suggested in 1970, new algorithms have been proposed steadily. Most of these methodologies employ a niching scheme within an existing single-objective evolutionary algorithm framework, so that similar solutions in a population are deemphasized in order to focus on and maintain multiple distant yet near-optimal solutions. In this paper, we use a completely different strategy in which the single-objective multimodal optimization problem is converted into a suitable bi-objective optimization problem, so that all optimal solutions become members of the resulting weak Pareto-optimal set. With modified definitions of domination and different formulations of an artificially created additional objective function, we present successful results on problems with as many as 500 optima. Most past multimodal EA studies considered problems having only a few variables. In this paper, we solve test problems with up to 16 variables and as many as 48 optimal solutions, and for the first time suggest multimodal constrained test problems that are scalable in the number of optima, constraints, and variables. The concept of using bi-objective optimization for solving single-objective multimodal optimization problems is novel and interesting and, more importantly, opens up further avenues for research and application.
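One concrete way to realize the conversion, taken here as an illustrative assumption (the paper explores several formulations of the auxiliary objective), is to pair f with a numerical gradient norm, which vanishes at every interior optimum, so all optima become weakly nondominated for the pair (f, ||∇f||):

```python
import numpy as np

def f(x):
    # Toy multimodal function: many local minima plus a mild bowl.
    return np.sin(x) + 0.01 * x**2

def grad_norm(x, h=1e-5):
    # Auxiliary objective: central-difference gradient magnitude.
    return abs((f(x + h) - f(x - h)) / (2 * h))

xs = np.linspace(-15, 15, 2001)
objs = np.stack([f(xs), np.vectorize(grad_norm)(xs)], axis=1)

def weakly_nondominated(objs):
    # x is weakly dominated only if some y is strictly better in BOTH
    # objectives (the weak-Pareto notion the abstract relies on).
    flags = np.ones(len(objs), dtype=bool)
    for i, o in enumerate(objs):
        if np.any(np.all(objs < o, axis=1)):
            flags[i] = False
    return flags

mask = weakly_nondominated(objs)
# Near-zero gradient among the weakly nondominated points flags the
# critical points of f (minima and maxima alike, in this crude sketch).
print("candidate optima:", xs[mask & (objs[:, 1] < 1e-3)][:10])
```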


2017, Vol. 27(2), pp. 153-167
Author(s): M. Dhingra, C.S. Lalitha

In this paper we introduce a notion of minimal solutions for set-valued optimization problems in terms of improvement sets, by unifying a solution notion introduced by Kuroiwa [15] for set-valued problems and a notion of optimal solutions in terms of improvement sets introduced by Chicco et al. [4] for vector optimization problems. We provide existence theorems for these solutions and establish lower convergence of the minimal solution sets in the sense of Painlevé-Kuratowski.
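For context, one common formulation of an improvement set and of the induced optimality notion (up to notational variations across papers) is:

```latex
% E \subseteq \mathbb{R}^m is an improvement set when
\[
0 \notin E \qquad \text{and} \qquad E + \mathbb{R}^m_{+} = E ,
\]
% and \bar{x} \in S is E-optimal for \min_{x \in S} f(x) when
\[
\bigl(f(\bar{x}) - E\bigr) \cap f(S) = \emptyset .
\]
```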


2018, Vol. 115(7), pp. 1457-1462
Author(s): Carlo Baldassi, Riccardo Zecchina

Quantum annealers aim at solving nonconvex optimization problems by exploiting cooperative tunneling effects to escape local minima. The underlying idea consists of designing a classical energy function whose ground states are the sought optimal solutions of the original optimization problem and adding a controllable transverse quantum field to generate tunneling processes. A key challenge is to identify classes of nonconvex optimization problems for which quantum annealing remains efficient while thermal annealing fails. We show that this happens for a wide class of problems that are central to machine learning. Their energy landscapes are dominated by local minima that cause exponential slowdown of classical thermal annealers, while simulated quantum annealing converges efficiently to rare dense regions of optimal solutions.
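The standard transverse-field construction alluded to above interpolates between a driver term and the classical problem Hamiltonian:

```latex
\[
H(s) = -(1 - s)\,\Gamma \sum_{i} \sigma_i^{x} + s\, H_{\mathrm{P}},
\qquad s = t/T \in [0, 1],
\]
% where H_P is diagonal in the computational basis and encodes the
% classical energy function whose ground states are the optimal
% solutions; the transverse field \Gamma \sum_i \sigma_i^x drives the
% tunneling processes.
```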

