Convergence of Subtangent-Based Relaxations of Nonlinear Programs

Processes ◽  
2019 ◽  
Vol 7 (4) ◽  
pp. 221 ◽  
Author(s):  
Huiyi Cao ◽  
Yingkai Song ◽  
Kamil A. Khan

Convex relaxations of functions are used to provide bounding information to deterministic global optimization methods for nonconvex systems. To be useful, these relaxations must converge rapidly to the original system as the considered domain shrinks. This article examines the convergence rates of convex outer approximations of functions and nonlinear programs (NLPs), constructed using affine subtangents of an existing convex relaxation scheme. It is shown that these outer approximations inherit rapid second-order pointwise convergence from the original scheme under certain assumptions. To support this analysis, the notion of second-order pointwise convergence is extended to constrained optimization problems, general sufficient conditions for guaranteeing this convergence are developed, and their implications are discussed. An implementation of subtangent-based relaxations of NLPs in Julia is described and applied to example problems for illustration.
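To make the construction concrete, the following is a minimal Python sketch (the paper's own implementation is in Julia, and its relaxation scheme is more general). Here the convex underestimator is built with the classic alpha-BB scheme rather than the paper's scheme; any affine subtangent of that underestimator is itself a valid, cheaper outer approximation. All names are illustrative.

```python
import numpy as np

def alpha_bb_relaxation(f, d2f, a, b, n=1000):
    """Classic alpha-BB convex underestimator of f on [a, b]:
    f_cv(x) = f(x) + alpha*(x - a)*(x - b), with
    alpha >= max(0, -min f''/2) so that f_cv'' >= 0.
    (The minimum of f'' is estimated here by sampling.)"""
    grid = np.linspace(a, b, n)
    alpha = max(0.0, -d2f(grid).min() / 2.0)
    return lambda x: f(x) + alpha * (x - a) * (x - b)

def subtangent(f_cv, x0, h=1e-6):
    """Affine subtangent of the convex relaxation f_cv at x0.
    By convexity it underestimates f_cv, hence f, on [a, b].
    A central finite difference stands in for a true subgradient."""
    slope = (f_cv(x0 + h) - f_cv(x0 - h)) / (2.0 * h)
    c = f_cv(x0)
    return lambda x: c + slope * (x - x0)

# Nonconvex test function and its second derivative.
f = lambda x: x * np.sin(x)
d2f = lambda x: 2.0 * np.cos(x) - x * np.sin(x)

a, b = 0.0, 4.0
f_cv = alpha_bb_relaxation(f, d2f, a, b)
t = subtangent(f_cv, x0=2.0)

xs = np.linspace(a, b, 401)
assert np.all(t(xs) <= f(xs) + 1e-7)   # affine outer approximation holds
print(f"lower bound from subtangent at x0=2: {t(xs).min():.4f}")
```

Branch-and-bound codes exploit exactly this structure: the minimum of the subtangent over the box is a valid lower bound obtained from a single linear function.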

2021 ◽  
Vol 11 (8) ◽  
pp. 3430
Author(s):  
Erik Cuevas ◽  
Héctor Becerra ◽  
Héctor Escobar ◽  
Alberto Luque-Chang ◽  
Marco Pérez ◽  
...  

Recently, several new metaheuristic schemes have been introduced in the literature. Although these approaches draw on very different phenomena as metaphors, the search patterns they use to explore the search space are quite similar. On the other hand, second-order systems are models that exhibit different temporal behaviors depending on the values of their parameters. Such temporal behaviors can be conceived as search patterns offering multiple behaviors from simple configurations. In this paper, a set of new search patterns that emulate the response of a second-order system is introduced to explore the search space efficiently. The proposed search patterns have been integrated into a complete search strategy, called the Second-Order Algorithm (SOA), to obtain the global solution of complex optimization problems. To analyze its performance, the proposed scheme has been evaluated on a set of representative optimization problems, including unimodal, multimodal, and hybrid benchmark formulations. Numerical results demonstrate that the proposed SOA method exhibits remarkable accuracy and high convergence rates.
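As a rough illustration of the idea (a simplified sketch, not the authors' actual SOA operators or parameter settings), the unit step response of an underdamped second-order system can serve as the trajectory an agent follows toward the current best solution: small damping ratios produce overshooting, exploratory moves, while damping near one yields smooth, exploitative moves.

```python
import numpy as np

def step_response(t, zeta, wn=1.0):
    """Unit step response of an underdamped second-order system
    (0 < zeta < 1): oscillates and overshoots before settling at 1."""
    wd = wn * np.sqrt(1.0 - zeta**2)
    return 1.0 - np.exp(-zeta * wn * t) * (
        np.cos(wd * t) + zeta / np.sqrt(1.0 - zeta**2) * np.sin(wd * t))

def soa_like_search(f, bounds, agents=20, iters=200, seed=0):
    """Toy population search: each agent follows a second-order
    trajectory toward the current best. Small zeta -> overshooting,
    exploratory moves; zeta near 1 -> damped, exploitative moves."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(agents, len(lo)))
    best = x[np.argmin([f(p) for p in x])].copy()
    for k in range(iters):
        t = 10.0 * (k + 1) / iters            # progress along the trajectory
        for i in range(agents):
            zeta = rng.uniform(0.1, 0.9)      # random damping per move
            gain = step_response(t, zeta)
            x[i] = np.clip(x[i] + gain * (best - x[i])
                           + 0.01 * rng.normal(size=x.shape[1]), lo, hi)
            if f(x[i]) < f(best):
                best = x[i].copy()
    return best

# Usage on the 2-D sphere function:
sphere = lambda p: float(np.sum(p**2))
print(soa_like_search(sphere, (np.array([-5., -5.]), np.array([5., 5.]))))
```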


Author(s):  
Ion Necoara ◽  
Martin Takáč

In this paper we consider large-scale smooth optimization problems with multiple linear coupled constraints. Due to the non-separability of the constraints, arbitrary random sketching is not guaranteed to work. Thus, we first investigate necessary and sufficient conditions on the sketch sampling for the resulting algorithms to be well defined. Based on these sampling conditions, we develop new sketch descent methods for solving general smooth linearly constrained problems: the random sketch descent (RSD) and accelerated random sketch descent (A-RSD) methods. To our knowledge, this is the first convergence analysis of RSD algorithms for optimization problems with multiple non-separable linear constraints. For the general case, when the objective function is smooth and non-convex, we prove a sublinear rate in expectation for an appropriate optimality measure for the non-accelerated variant. In the smooth convex case, we derive sublinear convergence rates in the expected objective value for both algorithms, RSD and A-RSD. Additionally, if the objective function satisfies a strong-convexity-type condition, both algorithms converge linearly in expectation. In special cases where complexity bounds are known for particular sketching algorithms, such as coordinate descent methods for optimization problems with a single linear coupled constraint, our theory recovers the best known bounds. Finally, we present several numerical examples to illustrate the performance of our new algorithms.
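The special case mentioned at the end, coordinate-descent-like methods for a single linear coupled constraint, suggests a minimal sketch of the idea (hypothetical code, not the authors' implementation): sampling a "sketch" consisting of two random coordinates and moving along e_i - e_j leaves sum(x) unchanged, so every iterate remains feasible.

```python
import numpy as np

def rsd_pairwise(f, grad, x0, step=0.1, iters=5000, seed=0):
    """Sketch of random sketch descent for  min f(x)  s.t.  sum(x) = c.

    The 'sketch' here is the simplest feasible one: a random pair of
    coordinates (i, j). Moving along e_i - e_j never changes sum(x),
    so every iterate stays feasible."""
    rng = np.random.default_rng(seed)
    x = x0.astype(float).copy()
    for _ in range(iters):
        i, j = rng.choice(len(x), size=2, replace=False)
        g = grad(x)
        d = g[i] - g[j]            # directional derivative along e_i - e_j
        x[i] -= step * d
        x[j] += step * d
    return x

# Usage: minimize ||x - a||^2 subject to sum(x) = sum(x0) = 0.
a = np.array([3.0, -1.0, 2.0, 0.5])
f = lambda x: float(np.sum((x - a) ** 2))
grad = lambda x: 2.0 * (x - a)
x = rsd_pairwise(f, grad, np.zeros(4))
print(x, np.sum(x))                # sum stays 0 up to round-off
```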


Geophysics ◽  
2016 ◽  
Vol 81 (2) ◽  
pp. F1-F15 ◽  
Author(s):  
Ludovic Métivier ◽  
Romain Brossier

The SEISCOPE optimization toolbox is a set of FORTRAN 90 routines implementing first-order methods (steepest descent and nonlinear conjugate gradient) and second-order methods (ℓ-BFGS and truncated Newton) for the solution of large-scale nonlinear optimization problems. An efficient line-search strategy ensures the robustness of these implementations. The routines are provided as black boxes that are easy to interface with any computational code in which such large-scale minimization problems must be solved. Traveltime tomography, least-squares migration, and full-waveform inversion are examples of such problems in the context of geophysics. Integrating the toolbox for solving this class of problems presents two advantages. First, thanks to the reverse communication protocol, it separates the routines that depend on the physics of the problem from those related to the minimization itself, which enhances flexibility in code development and maintenance. Second, it makes it easy to switch between different optimization algorithms; in particular, it reduces the complexity of implementing second-order methods. Because the latter enjoy faster convergence rates than first-order methods, significant savings in computational effort can be expected.
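The reverse communication idea can be illustrated with a toy Python sketch (the actual toolbox is FORTRAN 90 and its interface differs): the optimizer never calls user code directly; instead it returns a flag requesting the cost and gradient at the current iterate, and the caller's driver loop supplies them.

```python
import numpy as np

class ReverseCommSteepestDescent:
    """Toy steepest-descent optimizer with a reverse-communication
    interface in the spirit of (but not identical to) the SEISCOPE
    toolbox: instead of calling the user's code, the optimizer
    returns a flag asking the caller to supply f(x) and grad f(x)."""

    def __init__(self, x0, step=0.1, tol=1e-8):
        self.x, self.step, self.tol = x0.astype(float), step, tol
        self.flag = "NEED_FG"            # ask caller for cost and gradient

    def iterate(self, fval, grad):
        """Caller passes in f(self.x) and grad f(self.x); the optimizer
        updates x and sets self.flag to NEED_FG or CONVERGED."""
        if np.linalg.norm(grad) < self.tol:
            self.flag = "CONVERGED"
        else:
            self.x = self.x - self.step * grad
            self.flag = "NEED_FG"

# Driver loop: the "physics" (here just a quadratic) stays in user code.
opt = ReverseCommSteepestDescent(x0=np.array([4.0, -3.0]))
while opt.flag != "CONVERGED":
    x = opt.x
    fval, grad = float(x @ x), 2.0 * x   # user-side computation
    opt.iterate(fval, grad)
print(opt.x)
```

The design benefit described in the abstract is visible even in this toy: the optimizer class knows nothing about how fval and grad are produced, so the simulation code and the minimization code evolve independently.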


Author(s):  
Tung Nguyen

We propose a generalized second-order asymptotic contingent epiderivative of a set-valued mapping, study its properties and its relations to some second-order contingent epiderivatives, and give sufficient conditions for its existence. Then, using these epiderivatives, we investigate set-valued optimization problems with generalized inequality constraints. Both second-order necessary and sufficient optimality conditions of Karush-Kuhn-Tucker type are established under a second-order constraint qualification. An application to Mond-Weir and Wolfe duality schemes is also presented. Some remarks and examples are provided to illustrate our results.
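For orientation, here is the standard (non-asymptotic, non-generalized) second-order contingent construction that the paper's epiderivative generalizes; the notation follows common usage and may differ from the paper's.

```latex
% Second-order contingent set of S at \bar{x} in direction u:
T^2_S(\bar{x}, u) = \{\, w : \exists\, t_n \downarrow 0,\ \exists\, w_n \to w
     \text{ with } \bar{x} + t_n u + \tfrac{1}{2} t_n^2 w_n \in S \,\}.

% The second-order contingent epiderivative of F at (\bar{x},\bar{y})
% in direction (u,v) is the map whose epigraph is this set for epi F:
\operatorname{epi} D^2 F(\bar{x}, \bar{y}, u, v)
    = T^2_{\operatorname{epi} F}\big((\bar{x}, \bar{y}), (u, v)\big).
```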


Author(s):  
J. H. Chou ◽  
Wei-Shen Hsia ◽  
Tan-Yu Lee

Second-order necessary and sufficient conditions are given for a class of optimization problems involving the optimal selection of a measurable subset from a given measure subspace, subject to set-function inequalities. Relations between twice-differentiability at Ω and local convexity at Ω are also discussed.
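In generic notation (not necessarily the authors'), the class of problems has the following form, where F and the G_i are set functions defined on a σ-algebra 𝒜 of measurable subsets:

```latex
\min_{\Omega \in \mathcal{A}} \; F(\Omega)
\quad \text{subject to} \quad
G_i(\Omega) \le 0, \qquad i = 1, \dots, m.
```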


2019 ◽  
Vol 487 (5) ◽  
pp. 493-495
Author(s):  
Yu. G. Evtushenko ◽  
A. A. Tret’yakov

In this paper, we present new second-order sufficient optimality conditions for equality-constrained optimization problems, which substantially strengthen and complement the classical ones and are constructive. For example, they establish the equivalence between sufficient optimality conditions for inequality-constrained optimization problems and sufficient optimality conditions for the equality-constrained problems obtained by converting the inequalities to equalities through the introduction of slack variables. Under the classical sufficient optimality conditions, this equivalence was not known to hold; that is, the existing classical sufficient conditions were not complete. The proposed optimality conditions therefore complement the classical ones and settle the question of the equivalence between problems with inequalities and problems with equalities when the former are reduced to the latter by introducing slack variables.
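For concreteness, the slack-variable reduction in question replaces each inequality with an equality by adding a squared slack variable:

```latex
\min_{x} f(x) \ \ \text{s.t.}\ \ g_i(x) \le 0
\quad\Longleftrightarrow\quad
\min_{x,\,s} f(x) \ \ \text{s.t.}\ \ g_i(x) + s_i^2 = 0 .
```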


2011 ◽  
Vol 2011 ◽  
pp. 1-16 ◽  
Author(s):  
Qilin Wang ◽  
Guolin Yu

Some new properties are obtained for generalized second-order contingent (adjacent) epiderivatives of set-valued maps. By employing generalized second-order adjacent epiderivatives, necessary and sufficient conditions are given for Benson properly efficient solutions of set-valued optimization problems. The results obtained improve the corresponding results in the literature.
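For reference, the standard definition of Benson proper efficiency that this type of result characterizes (stated here for a set-valued map F on a feasible set S with ordering cone C; the paper's setting may differ in details):

```latex
\bar{x} \text{ is Benson properly efficient at } \bar{y} \in F(\bar{x})
\iff
\operatorname{cl}\,\operatorname{cone}\big(F(S) + C - \bar{y}\big)
\cap (-C) = \{0\}.
```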

