An Approximate ADMM for Solving Linearly Constrained Nonsmooth Optimization Problems with Two Blocks of Variables

Author(s):  
Adil M. Bagirov ◽  
Sona Taheri ◽  
Fusheng Bai ◽  
Zhiyou Wu


2013 ◽  
Vol 2013 ◽  
pp. 1-10
Author(s):  
Hamid Reza Erfanian ◽  
M. H. Noori Skandari ◽  
A. V. Kamyad

We present a new approach, based on the generalized derivative, for solving nonsmooth optimization problems and systems of nonsmooth equations. For this purpose, we introduce the first-order generalized Taylor expansion of nonsmooth functions and use it to replace them with smooth functions. In other words, the nonsmooth function is approximated by a piecewise linear function built from the generalized derivative. In the next step, we solve a smooth linear optimization problem whose optimal solution is an approximate solution of the main problem. We then apply these results to solving systems of nonsmooth equations. Finally, several numerical examples are presented to demonstrate the efficiency of our approach.
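As a rough illustration of the idea described above (a sketch under stated assumptions, not the authors' exact algorithm), the code below replaces a nonsmooth objective by its first-order generalized Taylor expansion at the current point and minimizes that linear model over a small box; for a box, this trivial linear program is solved at a vertex. The test function, box radius, and choice of generalized derivative are illustrative assumptions.

```python
import numpy as np

def f(x):
    # nonsmooth test objective: f(x) = |x1| + max(x2, 0.5 * x2**2)
    return abs(x[0]) + max(x[1], 0.5 * x[1] ** 2)

def generalized_derivative(x):
    # one element of the generalized (Clarke) subdifferential
    g1 = np.sign(x[0]) if x[0] != 0 else 0.0
    g2 = 1.0 if x[1] >= 0.5 * x[1] ** 2 else x[1]
    return np.array([g1, g2])

def linearized_step(x, radius):
    # minimize the linear model f(x) + g.(y - x) over the box |y - x| <= radius;
    # the LP solution over a box is attained at a vertex
    g = generalized_derivative(x)
    return x - radius * np.sign(g)

x = np.array([2.0, 3.0])
radius = 1.0
for _ in range(30):
    candidate = linearized_step(x, radius)
    if f(candidate) < f(x):
        x = candidate
    else:
        radius *= 0.5      # shrink the box when the linear model is inaccurate
print(x, f(x))             # approaches the minimizer (0, 0)
```

With these choices the iterates reach the minimizer of the test function in a handful of steps; the shrinking box plays the role of a simple safeguard when the piecewise linear model stops being a good local approximation.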


Author(s):  
Ashok V. Kumar ◽  
David C. Gossard

Abstract A sequential approximation technique for non-linear programming is presented here that is particularly suited to problems in engineering design and structural optimization, where the number of variables is very large and function and sensitivity evaluations are computationally expensive. A sequence of sub-problems is iteratively generated using a linear approximation of the objective function, with move limits on the variables imposed through a barrier method. These sub-problems are strictly convex. Computation per iteration is significantly reduced by not solving the sub-problems exactly; instead, at each iteration a few Newton steps are taken on the sub-problem. A criterion for updating the move limits is described that reduces or eliminates step-size reduction during line search. The method was found to perform well for unconstrained and linearly constrained optimization problems. It requires very few function evaluations, does not require the Hessian of the objective function, and evaluates its gradient only once per iteration.
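The sketch below conveys the flavour of such a scheme under illustrative assumptions (it is not the paper's algorithm): each sub-problem pairs a linear model of the objective with a log-barrier that enforces move limits around the current point, and it is solved only approximately by a few safeguarded Newton steps. Because the barrier Hessian is diagonal, each Newton step is closed-form. The test objective, the barrier weight mu, and the simple accept/shrink rule for the move limits are assumptions made for the example.

```python
import numpy as np

c = np.array([3.0, -2.0, 1.0])

def f(x):
    # illustrative smooth objective
    return 0.5 * np.dot(x - c, x - c)

def grad_f(x):
    return x - c

def subproblem_newton(x, g, delta, mu=0.1, newton_steps=3):
    """Approximately minimize  g.(y - x) - mu*sum(log(u - y) + log(y - l))
    over the move limits l = x - delta, u = x + delta with a few Newton steps."""
    l, u = x - delta, x + delta
    y = x.copy()
    for _ in range(newton_steps):
        grad = g + mu * (1.0 / (u - y) - 1.0 / (y - l))
        hess = mu * (1.0 / (u - y) ** 2 + 1.0 / (y - l) ** 2)   # diagonal Hessian
        step = -grad / hess
        # fraction-to-the-boundary safeguard keeps y strictly inside the move limits
        alpha = 1.0
        for s, d_up, d_lo in zip(step, u - y, y - l):
            if s > 0:
                alpha = min(alpha, 0.9 * d_up / s)
            elif s < 0:
                alpha = min(alpha, 0.9 * d_lo / -s)
        y = y + alpha * step
    return y

x, delta = np.zeros(3), 1.0
for _ in range(60):
    y = subproblem_newton(x, grad_f(x), delta)
    if f(y) < f(x):              # accept the approximate sub-problem solution
        x, delta = y, min(2.0 * delta, 1.0)
    else:                        # reject and tighten the move limits
        delta *= 0.5
print(x)                         # approaches the minimizer c = (3, -2, 1)
```

Only the gradient of the objective is used, and each outer iteration costs a fixed, small number of Newton steps on the barrier sub-problem, mirroring the cost profile described in the abstract.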


Author(s):  
Ion Necoara ◽  
Martin Takáč

Abstract In this paper we consider large-scale smooth optimization problems with multiple linear coupled constraints. Due to the non-separability of the constraints, arbitrary random sketching is not guaranteed to work. Thus, we first investigate necessary and sufficient conditions on the sketch sampling for the algorithms to be well defined. Based on these sampling conditions we develop new sketch descent methods for solving general smooth linearly constrained problems, in particular, random sketch descent (RSD) and accelerated random sketch descent (A-RSD) methods. To our knowledge, this is the first convergence analysis of RSD algorithms for optimization problems with multiple non-separable linear constraints. For the general case, when the objective function is smooth and non-convex, we prove a sublinear rate in expectation for the non-accelerated variant, measured by an appropriate optimality criterion. In the smooth convex case, we derive sublinear convergence rates in the expected objective values for both the non-accelerated and accelerated (A-RSD) algorithms. Additionally, if the objective function satisfies a strong convexity-type condition, both algorithms converge linearly in expectation. In special cases where complexity bounds are known for particular sketching algorithms, such as coordinate descent methods for optimization problems with a single linear coupled constraint, our theory recovers the best known bounds. Finally, we present several numerical examples to illustrate the performance of our new algorithms.
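For intuition, the sketch below shows the special case mentioned in the abstract, a single coupled constraint sum(x) = b, where a random "sketch" reduces to picking a pair of coordinates (i, j) and moving along e_i - e_j, a direction in the null space of the constraint, so feasibility is preserved automatically. The quadratic objective and the exact minimization along the sampled direction are illustrative assumptions, not the paper's general RSD scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
Q = np.diag(rng.uniform(1.0, 10.0, n))   # simple convex quadratic f(x) = 0.5 * x'Qx
b = 5.0

def grad(x):
    return Q @ x

x = np.full(n, b / n)                    # feasible starting point: sum(x) = b

for _ in range(20000):
    i, j = rng.choice(n, size=2, replace=False)   # random 2-coordinate "sketch"
    d = np.zeros(n)
    d[i], d[j] = 1.0, -1.0                        # direction in the null space of sum(x) = b
    g_d = grad(x) @ d                             # directional derivative
    L_d = d @ Q @ d                               # curvature along d
    x -= (g_d / L_d) * d                          # exact minimization along d

print(abs(x.sum() - b))      # feasibility is preserved throughout
print(0.5 * x @ Q @ x)       # objective decreases toward the constrained optimum
```

This pairwise update is exactly the kind of 2-coordinate descent method whose known complexity bounds the general theory recovers as a special case.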


2020 ◽  
Vol 2020 ◽  
pp. 1-13
Author(s):  
Sha Lu ◽  
Zengxin Wei

The proximal point algorithm is a class of methods widely used in recent years for solving optimization problems and practical problems such as machine learning. In this paper, a framework of accelerated proximal point algorithms is presented for convex minimization with linear constraints. The algorithm can be seen as an extension of Güler's methods for unconstrained optimization and linear programming problems. We prove that the sequence generated by the algorithm converges to a KKT solution of the original problem under appropriate conditions, with a convergence rate of O(1/k²).
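A minimal sketch of the Güler-type acceleration being extended here, under illustrative assumptions (an unconstrained convex f with a computable proximal operator, rather than the linearly constrained setting of the paper): Nesterov-style momentum is wrapped around the proximal point step x_{k+1} = prox_{lam*f}(y_k), which attains the O(1/k²) rate for convex f. The choice f(x) = 0.5*||Ax - b||², for which the proximal operator is a linear solve, and the values of A, b, and lam are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 30, 10
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
lam = 1.0

def prox(v):
    # prox_{lam*f}(v) = argmin_x 0.5*||Ax - b||^2 + (1/(2*lam))*||x - v||^2
    return np.linalg.solve(lam * A.T @ A + np.eye(n), lam * A.T @ b + v)

x = np.zeros(n)
y = x.copy()
t = 1.0
for k in range(200):
    x_new = prox(y)                                   # proximal point step
    t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))  # momentum parameter update
    y = x_new + ((t - 1.0) / t_new) * (x_new - x)     # extrapolation (momentum) step
    x, t = x_new, t_new

print(0.5 * np.linalg.norm(A @ x - b) ** 2)   # close to the least-squares optimum
```

The constrained framework of the paper replaces this simple proximal step with one adapted to the linear constraints, but the momentum mechanism responsible for the O(1/k²) rate has the same structure.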

