An algorithm for linearly constrained programs with a partly linear objective function

1992, Vol. 16 (4), pp. 41-50
Author(s):  
H. Greiner
Author(s):  
Ashok V. Kumar, 
David C. Gossard

Abstract A sequential approximation technique for non-linear programming is presented that is particularly suited to problems in engineering design and structural optimization, where the number of variables is very large and function and sensitivity evaluations are computationally expensive. A sequence of sub-problems is generated iteratively using a linear approximation of the objective function, with move limits on the variables imposed via a barrier method. These sub-problems are strictly convex. Computation per iteration is reduced significantly by not solving the sub-problems exactly; instead, a few Newton steps are taken for the sub-problem at each iteration. A criterion for updating the move limits is described that reduces or eliminates step-size reduction during line search. The method was found to perform well on unconstrained and linearly constrained optimization problems. It requires very few function evaluations, does not require the Hessian of the objective function, and evaluates its gradient only once per iteration.
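
The abstract outlines the iteration: linearize the objective at the current point, add barrier-based move limits, and take only a few Newton steps on the resulting strictly convex sub-problem. Below is a minimal Python sketch of one possible reading of that loop; the box half-width, barrier weight, step damping, and the quartic test problem are illustrative assumptions, not details from the paper.

```python
import numpy as np

def sequential_approx_minimize(grad_f, x0, move_limit=0.5, mu=1e-2,
                               newton_steps=3, max_iter=50, tol=1e-6):
    """Illustrative sketch (not the authors' code): at each outer iteration the
    objective is linearized and a log-barrier keeps the next iterate inside a
    box of half-width `move_limit` around the current point.  The sub-problem
    Hessian is diagonal, so a few Newton steps are cheap."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(max_iter):
        g = grad_f(x)                      # gradient evaluated once per iteration
        if np.linalg.norm(g) < tol:
            break
        lo, hi = x - move_limit, x + move_limit
        y = x.copy()                       # sub-problem starts at the current point
        for _ in range(newton_steps):
            # gradient / diagonal Hessian of: g.(y - x) - mu*sum(log(hi-y) + log(y-lo))
            gb = g + mu / (hi - y) - mu / (y - lo)
            hb = mu / (hi - y) ** 2 + mu / (y - lo) ** 2
            step = gb / hb
            alpha = 1.0                    # damp so the iterate stays strictly inside the box
            while np.any(y - alpha * step <= lo) or np.any(y - alpha * step >= hi):
                alpha *= 0.5
            y = y - alpha * step
        x = y
    return x

if __name__ == "__main__":
    # hypothetical test problem: minimize sum(x**4); the minimizer is x = 0
    print(sequential_approx_minimize(lambda x: 4 * x ** 3, np.full(5, 2.0)))
```

Because the barrier becomes steep near the move limits, the Newton iterates stay well inside the box, which is one way to read the abstract's claim that step-size reduction during line search is reduced or eliminated.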


Author(s):  
Ion Necoara, 
Martin Takáč

Abstract In this paper we consider large-scale smooth optimization problems with multiple linear coupled constraints. Because the constraints are non-separable, arbitrary random sketching is not guaranteed to work. We therefore first investigate necessary and sufficient conditions on the sketch sampling for the algorithms to be well defined. Based on these sampling conditions we develop new sketch descent methods for solving general smooth linearly constrained problems, in particular the random sketch descent (RSD) and accelerated random sketch descent (A-RSD) methods. To our knowledge, this is the first convergence analysis of RSD algorithms for optimization problems with multiple non-separable linear constraints. In the general case, when the objective function is smooth and non-convex, we prove a sublinear rate in expectation for an appropriate optimality measure for the non-accelerated variant. In the smooth convex case, we derive sublinear convergence rates in the expected objective value for both the non-accelerated and accelerated variants. Additionally, if the objective function satisfies a strong convexity type condition, both algorithms converge linearly in expectation. In special cases where complexity bounds are known for particular sketching algorithms, such as coordinate descent methods for optimization problems with a single linear coupled constraint, our theory recovers the best known bounds. Finally, we present several numerical examples to illustrate the performance of our new algorithms.
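
To make the sampling condition concrete, here is a minimal Python sketch of a sketch-descent step for min f(x) subject to Ax = b, using random coordinate blocks as the sketch; the block size, fixed step size, and function names are assumptions for illustration and do not reproduce the paper's algorithm or step-size rules.

```python
import numpy as np

def random_sketch_descent(grad_f, A, x0, block_size, step_size,
                          n_iters=1000, seed=0):
    """Illustrative sketch (assumed interpretation, not the paper's code) of a
    block-coordinate sketch descent method for min f(x) s.t. A x = b.  The block
    must contain more coordinates than there are coupled constraints; otherwise
    the sampled columns of A have no nontrivial null space and no feasible
    direction exists -- the kind of sampling condition discussed in the abstract.
    x0 must already satisfy A x0 = b; every update preserves that equality."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    m, n = A.shape
    assert block_size > m, "block must exceed the number of coupled constraints"
    for _ in range(n_iters):
        idx = rng.choice(n, size=block_size, replace=False)
        A_blk = A[:, idx]                  # sampled columns of the constraint matrix
        g_blk = grad_f(x)[idx]             # sampled entries of the gradient
        # project -g_blk onto the null space of A_blk so that A x stays fixed
        z = np.linalg.lstsq(A_blk @ A_blk.T, A_blk @ g_blk, rcond=None)[0]
        d = -(g_blk - A_blk.T @ z)
        x[idx] += step_size * d
    return x
```

With a single coupled constraint and blocks of two coordinates, the projected step reduces to a pairwise coordinate-descent update, which is the kind of special case the abstract says the theory recovers.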

