A Sequential Approximation Method for Structural Optimization Using Logarithmic Barriers

Author(s):  
Ashok V. Kumar ◽  
David C. Gossard

Abstract A sequential approximation technique for non-linear programming is presented here that is particularly suited to problems in engineering design and structural optimization, where the number of variables is very large and function and sensitivity evaluations are computationally expensive. A sequence of sub-problems is iteratively generated by linearly approximating the objective function and setting move limits on the variables using a barrier method. These sub-problems are strictly convex. Computation per iteration is significantly reduced by not solving the sub-problems exactly; instead, at each iteration a few Newton steps are taken for the sub-problem. A criterion for setting the move limits is described that reduces or eliminates step-size reduction during line search. The method was found to perform well for unconstrained and linearly constrained optimization problems. It requires very few function evaluations, does not require the Hessian of the objective function, and evaluates its gradient only once per iteration.

1999 ◽  
Vol 122 (3) ◽  
pp. 271-277 ◽  
Author(s):  
Ashok V. Kumar

A sequential approximation algorithm is presented here that is particularly suited to problems in engineering design and structural optimization, where the number of variables is very large and function and sensitivity evaluations are computationally expensive. A sequence of sub-problems is generated by linearly approximating the objective function and setting move limits on the variables using a barrier method. These sub-problems are strictly convex, and computation per iteration is significantly reduced by not solving them exactly; instead, a few Newton steps are taken for each sub-problem generated. A criterion for setting the move limits is described that reduces or eliminates step-size reduction during line search. The method was found to perform well for unconstrained and linearly constrained optimization problems. It is particularly suitable for designing the optimal shape and topology of structures by minimizing their compliance, since it requires very few function evaluations, does not require the Hessian of the objective function, and evaluates its gradient only once for every sub-problem generated. [S1050-0472(00)01603-2]
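
Below is a minimal Python sketch of the scheme that both abstracts describe: the objective is linearized at the current point, move limits are imposed through a logarithmic barrier, and a few Newton steps are taken on the resulting separable, strictly convex sub-problem. The barrier weight, move-limit size, and the descent safeguard that shrinks the move limits are illustrative stand-ins for the paper's move-limit criterion, not the published algorithm.

```python
import numpy as np

def subproblem_newton(g, x, delta, mu, steps):
    """A few Newton steps on the separable sub-problem
    min g^T y - mu * sum(log(y - lo) + log(hi - y))."""
    lo, hi = x - delta, x + delta          # move limits around the current point
    y = x.copy()
    for _ in range(steps):
        d1 = g - mu / (y - lo) + mu / (hi - y)        # sub-problem gradient
        d2 = mu / (y - lo) ** 2 + mu / (hi - y) ** 2  # diagonal Hessian > 0
        step = d1 / d2
        # fraction-to-boundary rule keeps iterates strictly inside the limits
        cap = 0.99 * np.where(step > 0, y - lo, hi - y)
        y = y - np.sign(step) * np.minimum(np.abs(step), cap)
    return y

def minimize_seq_approx(f, grad, x, delta=0.5, mu=1e-2, steps=3, iters=60):
    x, fx = x.astype(float), f(x)
    for _ in range(iters):
        g = grad(x)                # one gradient evaluation per sub-problem
        d = delta
        while True:
            y = subproblem_newton(g, x, d, mu, steps)
            fy = f(y)
            if fy < fx or d < 1e-10:   # crude descent safeguard (illustrative)
                break
            d *= 0.5                   # tighten the move limits and retry
        x, fx = y, fy
    return x

# toy usage: no Hessian of f is ever formed
f = lambda v: 0.5 * np.sum(v ** 2)
grad = lambda v: v
print(f(minimize_seq_approx(f, grad, np.full(5, 3.0))))  # small value near 0
```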


Author(s):  
Ion Necoara ◽  
Martin Takáč

Abstract In this paper we consider large-scale smooth optimization problems with multiple linear coupled constraints. Due to the non-separability of the constraints, arbitrary random sketching is not guaranteed to work. Thus, we first investigate necessary and sufficient conditions on the sketch sampling under which the algorithms are well defined. Based on these sampling conditions we develop new sketch descent methods for solving general smooth linearly constrained problems, in particular, the random sketch descent (RSD) and accelerated random sketch descent (A-RSD) methods. To our knowledge, this is the first convergence analysis of RSD algorithms for optimization problems with multiple non-separable linear constraints. For the general case, when the objective function is smooth and non-convex, we prove a sublinear rate in expectation for the non-accelerated variant with respect to an appropriate optimality measure. In the smooth convex case, we derive sublinear convergence rates in the expected objective value for both algorithms, RSD and A-RSD. Additionally, if the objective function satisfies a strong-convexity-type condition, both algorithms converge linearly in expectation. In special cases where complexity bounds are known for particular sketching algorithms, such as coordinate descent methods for optimization problems with a single linear coupled constraint, our theory recovers the best known bounds. Finally, we present several numerical examples to illustrate the performance of our new algorithms.
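
As a rough illustration of the sketch-and-project idea for min f(x) subject to Ax = b, the fragment below samples a small random block of coordinates, steps along the negative gradient projected onto the null space of the sampled columns of A, and thereby keeps the coupled constraints satisfied exactly. The block size, step size, and test problem are assumptions for the sketch; the paper's RSD/A-RSD methods, sampling conditions, and acceleration are more general.

```python
import numpy as np
from scipy.linalg import null_space

def rsd(f_grad, A, x0, block=4, alpha=0.1, iters=2000, seed=0):
    """Random sketch descent sketch for min f(x) s.t. Ax = b (b fixed by x0)."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for _ in range(iters):
        S = rng.choice(x.size, size=block, replace=False)  # random coordinate sketch
        Z = null_space(A[:, S])         # feasible directions within the block
        if Z.size == 0:                 # sampling condition violated: resample
            continue
        g = f_grad(x)[S]
        x[S] -= alpha * (Z @ (Z.T @ g))  # projected gradient step, A x unchanged
    return x

# usage: min ||x - c||^2 subject to two coupled linear constraints
n = 10
c = np.arange(n, dtype=float)
A = np.vstack([np.ones(n), np.linspace(-1.0, 1.0, n)])
x0 = np.linalg.lstsq(A, np.array([1.0, 0.0]), rcond=None)[0]  # feasible start
x = rsd(lambda v: 2.0 * (v - c), A, x0)
print(A @ x)   # the coupled constraints remain satisfied: ~[1, 0]
```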


Filomat ◽  
2018 ◽  
Vol 32 (19) ◽  
pp. 6799-6807
Author(s):  
Natasa Krejic ◽  
Sanja Loncar

A nonmonotone line search method is proposed and analyzed for solving unconstrained optimization problems whose objective function takes the form of a mathematical expectation. The method works with approximate values of the objective function obtained with increasing sample sizes, improving accuracy gradually. The nonmonotone rule significantly enlarges the set of admissible search directions and prevents unnecessarily small steps at the beginning of the iterative procedure. Convergence is shown for any search direction that approaches the negative gradient in the limit, with the convergence results obtained in the sense of zero upper density. Initial numerical results confirm the theoretical results and show the efficiency of the proposed approach.
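
A schematic Python rendering of the variable-sample-size idea follows: the objective E[F(x, ξ)] is replaced by sample averages whose size grows over the iterations, and a max-type nonmonotone Armijo rule relaxes the acceptance test. The sample-size schedule, memory length, and initial trial step are illustrative assumptions; the paper's admissibility conditions and zero-upper-density convergence framework are not reproduced here.

```python
import numpy as np

def sample_obj_grad(x, m, rng):
    """Sample-average approximation of f(x) = E[(x - xi)^2], xi ~ N(1, 1)."""
    xi = rng.normal(1.0, 1.0, size=m)
    return np.mean((x - xi) ** 2), 2.0 * np.mean(x - xi)

def nonmonotone_vss(x=5.0, m=10, iters=40, memory=5, beta=0.5, eta=1e-4):
    rng = np.random.default_rng(0)
    recent = []
    for _ in range(iters):
        fx, gx = sample_obj_grad(x, m, rng)
        recent = (recent + [fx])[-memory:]
        ref = max(recent)                  # max-type nonmonotone reference
        t, d = 0.4, -gx                    # illustrative initial trial step
        while t > 1e-8 and sample_obj_grad(x + t * d, m, rng)[0] > ref + eta * t * gx * d:
            t *= beta                      # backtrack against the relaxed bound
        x += t * d
        m = int(np.ceil(1.2 * m))  # larger sample -> gradually better accuracy
    return x

print(nonmonotone_vss())   # approaches the true minimizer E[xi] = 1
```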


2014 ◽  
Vol 8 (1) ◽  
pp. 218-221 ◽  
Author(s):  
Ping Hu ◽  
Zong-yao Wang

We propose a non-monotone line search combination rule for unconstrained optimization problems, establish the corresponding non-monotone line search algorithm, and prove its global convergence. Finally, numerical experiments illustrate the effectiveness of the new non-monotone search algorithm.
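
The abstract does not spell out the combination rule, so the sketch below shows one classical combination-style nonmonotone reference for comparison: the Zhang–Hager weighted average of past function values, used in place of the latest value in the Armijo test. It should be read as a generic illustration of nonmonotone line search, not as the rule proposed in this paper.

```python
import numpy as np

def nonmonotone_combination(f, grad, x, eta=0.85, sigma=1e-4, beta=0.5, iters=1000):
    C, Q = f(x), 1.0                       # averaged reference value and its weight
    for _ in range(iters):
        g = grad(x)
        d = -g                             # steepest-descent direction
        t = 1.0
        while t > 1e-12 and f(x + t * d) > C + sigma * t * (g @ d):
            t *= beta                      # backtrack against the averaged reference
        x = x + t * d
        Qn = eta * Q + 1.0                 # combine past reference with the new value
        C = (eta * Q * C + f(x)) / Qn
        Q = Qn
    return x

# usage on an ill-conditioned quadratic
H = np.diag([1.0, 100.0])
f = lambda v: 0.5 * v @ H @ v
grad = lambda v: H @ v
print(nonmonotone_combination(f, grad, np.array([10.0, 1.0])))  # near the origin
```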


Author(s):  
Pengfei (Taylor) Li ◽  
Peirong (Slade) Wang ◽  
Farzana Chowdhury ◽  
Li Zhang

Traditional formulations for transportation optimization problems mostly build complicating attributes into the constraints while keeping the objective functions succinct. A popular solution is Lagrangian decomposition: relaxing the complicating constraints and then solving iteratively. Although this approach is effective for many problems, it leads to intractability in others. To address this issue, this paper presents an alternative formulation for transportation optimization problems in which the complicating attributes of the target problems are partially or entirely built into the objective function instead of into the constraints. Many mathematically complicating constraints in transportation problems, such as various road or vehicle capacity constraints or “IF–THEN” type constraints, can be efficiently modeled in dynamic network loading (DNL) models based on the demand–supply equilibrium. After “pre-building” complicating constraints into the objective function, the objective function can be approximated well with customized high-fidelity DNL models. Three types of computing benefits can be achieved in the alternative formulation: (a) the original problem is kept the same; (b) the computing complexity of the new formulation may be significantly reduced because the hard constraints disappear; (c) the efficiency loss on the objective-function side can be mitigated via multiple high-performance computing techniques. Under this new framework, high-fidelity and problem-specific DNL models are critical to maintaining the attributes of the original problems. Therefore, the authors’ recent efforts in enhancing the DNL’s fidelity and computing efficiency are also described in the second part of this paper. Finally, a demonstration case study is conducted to validate the new approach.
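
The reformulation idea can be illustrated with a deliberately tiny toy: instead of writing a road-capacity constraint explicitly, a crude two-route “loading model” enforces capacity internally (excess flow queues and pays a delay), so the optimizer faces an unconstrained objective. The capacity value, travel-time functions, and demand are invented for the sketch; a real DNL model is far richer.

```python
import numpy as np

CAPACITY = 30.0   # route-A capacity in veh/min (invented for the sketch)

def load_network(x_a, demand=50.0):
    """Minimal 'network loading': flow beyond capacity on route A queues;
    the remainder of the demand uses route B. Returns total travel time."""
    served = min(x_a, CAPACITY)            # capacity enforced inside the model
    queue = max(x_a - CAPACITY, 0.0)       # the IF-THEN rule becomes model logic
    x_b = demand - x_a
    time_a = served * (10.0 + 0.1 * served) + 60.0 * queue   # queued delay
    time_b = x_b * (15.0 + 0.2 * x_b)
    return time_a + time_b

# the route split is now optimized with no explicit constraint left over
grid = np.linspace(0.0, 50.0, 501)
best = grid[np.argmin([load_network(v) for v in grid])]
print(best)   # 30.0: the optimum respects capacity without a hard constraint
```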


2020 ◽  
Author(s):  
Alberto Bemporad ◽  
Dario Piga

Abstract This paper proposes a method for solving optimization problems in which the decision-maker cannot evaluate the objective function, but rather can only express a preference such as “this is better than that” between two candidate decision vectors. The algorithm described in this paper aims at reaching the global optimizer by iteratively proposing to the decision maker a new comparison to make, based on actively learning a surrogate of the latent (unknown and perhaps unquantifiable) objective function from past sampled decision vectors and pairwise preferences. A radial basis function surrogate is fit via linear or quadratic programming, satisfying, if possible, the preferences expressed by the decision maker on existing samples. The surrogate is used to propose a new sample of the decision vector for comparison with the current best candidate based on two possible criteria: minimizing a combination of the surrogate and an inverse distance weighting function to balance exploitation of the surrogate against exploration of the decision space, or maximizing a function related to the probability that the new candidate will be preferred. Compared to active preference learning based on Bayesian optimization, we show that our approach is competitive in that, within the same number of comparisons, it usually approaches the global optimum more closely and is computationally lighter. Applications of the proposed algorithm to solving a set of benchmark global optimization problems, to multi-objective optimization, and to the optimal tuning of a cost-sensitive neural network classifier for object recognition from images are described in the paper. MATLAB and Python implementations of the algorithms described in the paper are available at http://cse.lab.imtlucca.it/~bemporad/glis.
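
The surrogate-fitting step can be condensed into a few lines: fit RBF weights by linear programming so that the surrogate reproduces the observed pairwise preferences up to slack, with an L1 penalty on the weights. The kernel width, preference margin, and regularization weight below are illustrative assumptions; the authors' own implementation is available at the URL above.

```python
import numpy as np
from scipy.optimize import linprog

def fit_preference_rbf(X, prefs, eps=0.2, gamma=1.0, reg=1e-3):
    """X: (n, d) sampled decisions; prefs: pairs (i, j) with x_i preferred to x_j.
    Returns a surrogate s(x) with s(x_i) <= s(x_j) - eps up to slack."""
    n, m = len(X), len(prefs)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    Phi = np.exp(-(gamma * D) ** 2)                  # Gaussian RBF matrix
    # variables: w = w_plus - w_minus (L1-regularized) and one slack per preference
    c = np.concatenate([reg * np.ones(2 * n), np.ones(m)])
    A_ub, b_ub = [], []
    for k, (i, j) in enumerate(prefs):       # s(x_i) - s(x_j) - slack <= -eps
        diff = Phi[i] - Phi[j]
        row = np.concatenate([diff, -diff, np.zeros(m)])
        row[2 * n + k] = -1.0
        A_ub.append(row)
        b_ub.append(-eps)
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=(0, None))
    w = res.x[:n] - res.x[n:2 * n]
    return lambda x: np.exp(-(gamma * np.linalg.norm(X - x, axis=1)) ** 2) @ w

# toy usage: latent objective |x|; sample 1 (x=0.5) is the preferred decision
X = np.array([[-2.0], [0.5], [1.5]])
prefs = [(1, 0), (1, 2), (2, 0)]
s = fit_preference_rbf(X, prefs)
print(s(np.array([0.5])) < s(np.array([-2.0])))   # True: ranking is reproduced
```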


2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Yaoxin Li ◽  
Jing Liu ◽  
Guozheng Lin ◽  
Yueyuan Hou ◽  
Muyun Mou ◽  
...  

Abstract In computer science there exist a large number of optimization problems defined on graphs, where the task is to find the best node-state configuration or network structure such that a designed objective function is optimized under some constraints. These problems are notoriously hard to solve, because most of them are NP-hard or NP-complete. Although traditional general methods such as simulated annealing (SA) and genetic algorithms (GA) have been applied to these hard problems, their accuracy and time consumption are not satisfactory in practice. In this work, we propose a simple, fast, and general algorithm framework based on advanced automatic differentiation techniques empowered by deep learning frameworks. By introducing the Gumbel-softmax technique, we can optimize the objective function directly by gradient descent regardless of the discrete nature of the variables. We also introduce an evolution strategy to parallelize our algorithm. We test our algorithm on four representative optimization problems on graphs: modularity optimization from network science, the Sherrington–Kirkpatrick (SK) model from statistical physics, the maximum independent set (MIS) and minimum vertex cover (MVC) problems from combinatorial optimization on graphs, and the influence maximization problem from computational social science. High-quality solutions can be obtained in much less time than with the traditional approaches.
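
For one of the listed problems, maximum independent set, the core trick can be sketched in a few lines of PyTorch: per-node selection probabilities are sampled with Gumbel-softmax, so the relaxed objective stays differentiable and plain gradient descent applies. The graph, penalty weight, temperature, and optimizer settings are illustrative, not the paper's configuration.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]   # a 5-cycle
n, penalty, tau = 5, 2.0, 1.0
logits = torch.zeros(n, 2, requires_grad=True)     # per-node {out, in} logits
opt = torch.optim.Adam([logits], lr=0.1)

for _ in range(500):
    # differentiable near-one-hot samples despite the discrete variables
    sample = F.gumbel_softmax(logits, tau=tau, hard=False)
    p_in = sample[:, 1]                            # soft indicator of selection
    conflicts = sum(p_in[u] * p_in[v] for u, v in edges)
    loss = -p_in.sum() + penalty * conflicts       # big set, no adjacent picks
    opt.zero_grad()
    loss.backward()
    opt.step()

print(logits.argmax(dim=1).tolist())   # typically two non-adjacent nodes selected
```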

