The inverse problem in convex optimization with linear constraints

2016 ◽  
Vol 23 (1) ◽  
pp. 71-94
Author(s):  
Marwan Aloqeili
2021 ◽  
pp. 1-28
Author(s):  
Yuan Shen ◽  
Yannian Zuo ◽  
Liming Sun ◽  
Xiayang Zhang

We consider the linearly constrained separable convex optimization problem whose objective function is separable with respect to [Formula: see text] blocks of variables. A number of methods have been proposed and extensively studied over the past decade. In particular, a modified strictly contractive Peaceman–Rachford splitting method (SC-PRCM) [S. H. Jiang and M. Li, A modified strictly contractive Peaceman–Rachford splitting method for multi-block separable convex programming, J. Ind. Manag. Optim. 14(1) (2018) 397-412] has been well studied for the special case of [Formula: see text]. Building on the modified SC-PRCM, we present modified proximal symmetric ADMMs (MPSADMMs) to solve the multi-block problem. In MPSADMMs, every subproblem except the first is augmented with a simple proximal term, and the multipliers are updated twice per iteration. At the end of each iteration, the output is corrected via a simple correction step. Without stringent assumptions, we establish global convergence and the [Formula: see text] convergence rate in the ergodic sense for the new algorithms. Preliminary numerical results show that the proposed algorithms are effective for solving linearly constrained quadratic programming and robust principal component analysis problems.
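Two features highlighted in the abstract, a double multiplier update per iteration and a strict contraction factor, can be illustrated on a toy two-block problem. The following is a minimal sketch, not the paper's MPSADMM; it assumes simple quadratic blocks so that both subproblems have closed-form solutions:

```python
import numpy as np

def symmetric_admm(c1, c2, b, rho=1.0, alpha=0.9, iters=500):
    """Strictly contractive symmetric (Peaceman-Rachford-type) ADMM sketch for
        min ||x1 - c1||^2 + ||x2 - c2||^2   s.t.   x1 + x2 = b.
    The multiplier is updated twice per iteration, with relaxation
    factor alpha in (0, 1)."""
    x1 = np.zeros_like(b)
    x2 = np.zeros_like(b)
    u = np.zeros_like(b)                       # scaled multiplier
    for _ in range(iters):
        # x1-subproblem: argmin ||x1 - c1||^2 + (rho/2)||x1 + x2 - b + u||^2
        x1 = (2 * c1 + rho * (b - x2 - u)) / (2 + rho)
        u = u + alpha * (x1 + x2 - b)          # first multiplier update
        # x2-subproblem uses the fresh x1 and multiplier
        x2 = (2 * c2 + rho * (b - x1 - u)) / (2 + rho)
        u = u + alpha * (x1 + x2 - b)          # second multiplier update
    return x1, x2
```

With alpha = 1 this is the classic Peaceman–Rachford splitting; alpha < 1 gives the strictly contractive variant.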


2015 ◽  
Vol 2015 ◽  
pp. 1-7
Author(s):  
Sakineh Tahmasebzadeh ◽  
Hamidreza Navidi ◽  
Alaeddin Malek

This paper proposes three numerical algorithms based on Karmarkar's interior-point technique for solving nonlinear convex programming problems subject to linear constraints. The first algorithm uses Karmarkar's idea together with linearization of the objective function. The second and third algorithms are modifications of the first, using the Schrijver and Malek-Naseri approaches, respectively. The three schemes are tested against the algorithm of Kebiche-Keraghel-Yassine (KKY): they are more efficient and converge to the correct optimal solution, whereas the KKY algorithm fails in some cases. Numerical results are given to illustrate the performance of the proposed algorithms.
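As a rough illustration of the interior-point machinery underlying such schemes, here is Dikin's affine-scaling method for a linear program, a simpler relative of Karmarkar's projective algorithm (once the objective is linearized, each iteration reduces to essentially a step of this kind). The example problem in the test is hypothetical:

```python
import numpy as np

def affine_scaling_lp(c, A, b, x0, gamma=0.5, iters=100):
    """Dikin's affine-scaling method (a simple relative of Karmarkar's
    projective algorithm) for  min c@x  s.t.  A@x = b, x > 0.
    x0 must satisfy A@x0 = b with x0 > 0 componentwise."""
    x = x0.astype(float)
    for _ in range(iters):
        D = np.diag(x)                      # rescale so the iterate maps to e
        AD = A @ D
        # project the scaled cost onto the null space of A D
        w = np.linalg.solve(AD @ AD.T, AD @ (D @ c))
        dy = -(D @ c - AD.T @ w)            # steepest descent in scaled space
        d = D @ dy                          # direction back in x-space
        neg = d < 0
        if not neg.any():
            break                           # no blocking component
        # move a fraction gamma of the way to the boundary x >= 0
        alpha = gamma * np.min(x[neg] / -d[neg])
        x = x + alpha * d
    return x
```

Because the direction lies in the null space of A, every iterate stays feasible; the step rule keeps it strictly positive.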


1995 ◽  
Vol 86 (2) ◽  
pp. 407-420 ◽  
Author(s):  
C. Y. Lin ◽  
B. Nekooie ◽  
M. K. H. Fan

2021 ◽  
Author(s):  
Apostolos Georgiadis ◽  
Nuno Borges Carvalho

A convex optimization formulation is provided for antenna arrays comprising reactively loaded parasitic elements. The objective is to maximize the array gain, with constraints on the admittance included in order to properly account for the reactive loads. Topologies of two- and three-element electrically small dipole arrays, comprising one fed element and one or two parasitic elements, respectively, are considered, and the conditions for obtaining supergain are investigated. The admittance constraints are formulated as linear constraints in specific cases and as more general quadratic constraints, which lead to the solution of an equivalent convex relaxation. A design example of an electrically small superdirective rectenna is provided, in which an upper bound on the rectifier efficiency is simulated.
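Before the admittance constraints are added, the gain-maximization core of such formulations is a Rayleigh-quotient problem. A sketch via a generalized eigenvalue problem, where A (numerator, e.g. field coupling toward the target direction, Hermitian) and B (denominator, e.g. radiated-power matrix, positive definite) are hypothetical placeholders, not the paper's actual matrices:

```python
import numpy as np

def max_rayleigh_gain(A, B):
    """Maximize the gain-like quotient  (w^H A w) / (w^H B w)
    by solving the generalized eigenvalue problem  A w = lambda B w."""
    L = np.linalg.cholesky(B)                      # B = L L^H
    X = np.linalg.solve(L, A)                      # L^{-1} A
    M = np.linalg.solve(L, X.conj().T).conj().T    # L^{-1} A L^{-H}
    vals, vecs = np.linalg.eigh(M)                 # ascending eigenvalues
    lam, v = vals[-1], vecs[:, -1]                 # largest pair
    w = np.linalg.solve(L.conj().T, v)             # back-transform weights
    return lam, w
```

Whitening by the Cholesky factor of B turns the generalized problem into an ordinary Hermitian eigenproblem, which `numpy.linalg.eigh` solves directly.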


Author(s):  
Pierre-Loïc Garoche

This chapter aims to provide the intuition behind convex optimization algorithms and to address their effective use in floating-point implementations. It first briefly presents the algorithms, assuming real-number semantics. As outlined in Chapter 4, convex conic programming is supported by different methods depending on the cone considered. The best-known approach for linear constraints is Dantzig's simplex method. Although it has exponential worst-case complexity with respect to the number of constraints, the simplex method performs well in practice. Another family is the interior-point methods, initially proposed by Karmarkar and popularized by Nesterov and Nemirovski. They can be characterized as path-following methods in which a sequence of local linear problems is solved, typically by Newton's method. After presenting these algorithms, the chapter discusses approaches to obtaining sound results.
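A minimal sketch of the path-following idea described here, using a log-barrier and Newton's method for linear inequality constraints (real arithmetic only; the chapter's floating-point soundness concerns are not addressed):

```python
import numpy as np

def barrier_method(c, A, b, x0, t0=1.0, mu=10.0, outer=8, inner=50):
    """Log-barrier path following for  min c@x  s.t.  A@x <= b.
    x0 must be strictly feasible (A@x0 < b componentwise)."""
    x = x0.astype(float)
    t = t0
    for _ in range(outer):
        for _ in range(inner):                     # Newton centering at fixed t
            s = b - A @ x                          # slacks, kept > 0
            grad = t * c + A.T @ (1.0 / s)         # gradient of barrier objective
            H = A.T @ ((1.0 / s**2)[:, None] * A)  # Hessian of barrier objective
            dx = np.linalg.solve(H, -grad)         # Newton direction
            if grad @ dx > -1e-10:                 # Newton decrement ~ 0
                break
            fx = t * (c @ x) - np.log(s).sum()
            alpha = 1.0
            while True:                            # backtracking line search
                xn = x + alpha * dx
                sn = b - A @ xn
                if np.all(sn > 0) and \
                   t * (c @ xn) - np.log(sn).sum() <= fx + 0.25 * alpha * (grad @ dx):
                    break
                alpha *= 0.5
            x = xn
        t *= mu                                    # tighten along the central path
    return x
```

Each outer step tightens the barrier by a factor mu, and the duality gap after centering is bounded by (number of constraints)/t, which gives the method's polynomial iteration bound.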

