constrained optimization problems
Recently Published Documents


TOTAL DOCUMENTS

763
(FIVE YEARS 142)

H-INDEX

47
(FIVE YEARS 5)

2022 ◽  
Vol 41 (1) ◽  
pp. 1-10
Author(s):  
Jonas Zehnder ◽  
Stelian Coros ◽  
Bernhard Thomaszewski

We present a sparse Gauss-Newton solver for accelerated sensitivity analysis with applications to a wide range of equilibrium-constrained optimization problems. Dense Gauss-Newton solvers have shown promising convergence rates for inverse problems, but the cost of assembling and factorizing the associated matrices has so far been a major stumbling block. In this work, we show how the dense Gauss-Newton Hessian can be transformed into an equivalent sparse matrix that can be assembled and factorized much more efficiently. This leads to drastically reduced computation times for many inverse problems, which we demonstrate on a diverse set of examples. We furthermore show links between sensitivity analysis and nonlinear programming approaches based on Lagrange multipliers and prove equivalence under specific assumptions that apply for our problem setting.
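The idea in this abstract can be made concrete with a small sketch: a generic Gauss-Newton iteration in which the Jacobian is sparse, so the Gauss-Newton Hessian JᵀJ can be assembled and factorized as a sparse matrix. This is a minimal illustration of the general technique, not the authors' solver; the toy residual and all names are hypothetical.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def gauss_newton(residual, jacobian, x0, iters=20):
    """Generic Gauss-Newton iteration. The Jacobian is assumed sparse,
    so the normal-equation matrix J^T J stays sparse and can be
    factorized far more cheaply than its dense counterpart."""
    x = x0.copy()
    for _ in range(iters):
        r = residual(x)
        J = jacobian(x)            # sparse Jacobian
        H = (J.T @ J).tocsc()      # sparse Gauss-Newton Hessian
        g = J.T @ r                # gradient of 0.5*||r||^2
        dx = spla.spsolve(H, -g)   # sparse factorization + solve
        x = x + dx
        if np.linalg.norm(dx) < 1e-12:
            break
    return x

# Toy problem: residuals r_i(x) = x_i^2 - t_i, giving a diagonal
# (hence sparse) Jacobian; the solution is x_i = sqrt(t_i).
t = np.array([1.0, 4.0, 9.0])
res = lambda x: x**2 - t
jac = lambda x: sp.diags(2.0 * x).tocsr()
sol = gauss_newton(res, jac, np.ones(3))  # converges to [1, 2, 3]
```

The toy Jacobian is diagonal only to keep the example tiny; the same loop applies whenever `jacobian` returns any `scipy.sparse` matrix.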


Author(s):  
Helmut Gfrerer ◽  
Jane J. Ye ◽  
Jinchuan Zhou

In this paper, we study second-order optimality conditions for nonconvex set-constrained optimization problems. For a convex set-constrained optimization problem, it is well known that second-order optimality conditions involve the support function of the second-order tangent set. In this paper, we propose two approaches for establishing second-order optimality conditions for the nonconvex case. In the first approach, we extend the concept of the support function so that it is applicable to general nonconvex set-constrained problems, whereas in the second approach, we introduce the notion of the directional regular tangent cone and apply classical results of convex duality theory. Beyond the second-order optimality conditions themselves, the novelty of our approach lies in the systematic introduction and use of directional versions of well-known concepts from variational analysis.
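For the convex case the abstract alludes to, the classical second-order necessary condition can be sketched in standard notation as follows (this is the textbook form, not a result quoted from the paper):

```latex
% Problem: minimize f(x) subject to x \in C, with C convex,
% at a local minimizer \bar{x}. For every critical direction d,
% i.e., d \in T_C(\bar{x}) with \nabla f(\bar{x})^\top d = 0:
\nabla^2 f(\bar{x})[d,d] \;-\; \sigma_{T_C^2(\bar{x},d)}\bigl(-\nabla f(\bar{x})\bigr) \;\ge\; 0,
\qquad \sigma_S(v) := \sup_{w \in S} \langle v, w \rangle .
```

Here T_C²(x̄,d) is the second-order tangent set of C at x̄ in direction d and σ_S is its support function; the paper's contribution is extending conditions of this type to nonconvex C.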


Author(s):  
Lei Yang ◽  
Xiaojun Chen ◽  
Shuhuang Xiang

In this paper, we consider a well-known sparse optimization problem that aims to find a sparse solution of a possibly noisy underdetermined system of linear equations. Mathematically, it can be modeled in a unified manner by minimizing [Formula: see text] subject to [Formula: see text] for given [Formula: see text] and [Formula: see text]. We then study various properties of the optimal solutions of this problem. Specifically, without any condition on the matrix A, we provide upper bounds on the cardinality and the infinity norm of the optimal solutions and show that all optimal solutions must be on the boundary of the feasible set when [Formula: see text]. Moreover, for [Formula: see text], we show that the problem with [Formula: see text] has a finite number of optimal solutions and prove that there exists [Formula: see text] such that the solution set of the problem with any [Formula: see text] is contained in the solution set of the problem with p = 0, and there further exists [Formula: see text] such that the solution set of the problem with any [Formula: see text] remains unchanged. An estimation of such [Formula: see text] is also provided. In addition, to solve the constrained nonconvex non-Lipschitz Lp-L1 problem ([Formula: see text] and q = 1), we propose a smoothing penalty method and show that, under some mild conditions, any cluster point of the sequence generated is a stationary point of our problem. Some numerical examples are given to illustrate the theoretical results and show the efficiency of the proposed algorithm for the constrained Lp-L1 problem under different noise levels.
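The bracketed formulas in this scraped abstract are not recoverable, but problems of this class are commonly written as minimizing ||x||_p^p subject to ||Ax − b||_q ≤ σ. Assuming that form with q = 1, the following toy sketch shows the smoothing-plus-penalty idea the abstract describes: the nonsmooth |x_i|^p terms are smoothed as (x_i² + ε²)^{p/2} and the constraint is enforced by a quadratic penalty. The instance, weights, and choice of solver are all illustrative assumptions, not the authors' algorithm.

```python
import numpy as np
from scipy.optimize import minimize

# Toy instance: underdetermined system (2 equations, 3 unknowns)
# with the sparse solution x = [1, 0, 0].
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
b = np.array([1.0, 0.0])
p, sigma = 0.5, 1e-3   # Lp exponent and L1 noise tolerance
eps, rho = 1e-4, 100.0  # smoothing parameter and penalty weight

def penalized(x):
    # Smoothed ||x||_p^p plus a quadratic penalty on the
    # violation of the constraint ||Ax - b||_1 <= sigma.
    lp = np.sum((x**2 + eps**2) ** (p / 2))
    viol = max(np.sum(np.abs(A @ x - b)) - sigma, 0.0)
    return lp + rho * viol**2

x = minimize(penalized, np.full(3, 0.1), method="Nelder-Mead",
             options={"xatol": 1e-8, "fatol": 1e-10, "maxiter": 5000}).x
```

In a full smoothing penalty method, ε would be driven to zero and ρ increased along the iterations; this sketch does a single solve for fixed parameters only to show the structure of the subproblem.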


Author(s):  
Zheyu Chen ◽  
Kin K. Leung ◽  
Shiqiang Wang ◽  
Leandros Tassiulas ◽  
Kevin Chan

Author(s):  
Michael Hintermüller ◽  
Kostas Papafitsoros ◽  
Guozhi Dong

Inspired by applications in optimal control of semilinear elliptic partial differential equations and physics-integrated imaging, differential-equation-constrained optimization problems with constituents that are only accessible through data-driven techniques are studied. A particular focus is on the analysis and on numerical methods for problems with machine-learned components. For a rather general context, an error analysis is provided, and particular properties resulting from artificial-neural-network-based approximations are addressed. Moreover, for each of the two inspiring applications, analytical details are presented and numerical results are provided.


Algorithms ◽  
2021 ◽  
Vol 14 (10) ◽  
pp. 294
Author(s):  
Rebekah Herrman ◽  
Lorna Treffert ◽  
James Ostrowski ◽  
Phillip C. Lotshaw ◽  
Travis S. Humble ◽  
...  

We develop a global variable substitution method that reduces n-variable monomials in combinatorial optimization problems to equivalent instances with monomials in fewer variables. We apply this technique to 3-SAT and analyze the optimal quantum unitary circuit depth needed to solve the reduced problem using the quantum approximate optimization algorithm. For benchmark 3-SAT problems, we find that the upper bound on the unitary circuit depth is smaller when the problem is formulated as a product and the substitution method is used to decompose gates than when the problem is written in the linear formulation, which requires no decomposition.

