A Sequential Optimization Algorithm Using Logarithmic Barriers: Applications to Structural Optimization

1999 ◽  
Vol 122 (3) ◽  
pp. 271-277 ◽  
Author(s):  
Ashok V. Kumar

A sequential approximation algorithm is presented here that is particularly suited for problems in engineering design and structural optimization, where the number of variables is very large and function and sensitivity evaluations are computationally expensive. A sequence of sub-problems is generated using a linear approximation of the objective function and setting move limits on the variables using a barrier method. These sub-problems are strictly convex, and computation per iteration is significantly reduced by not solving them exactly; instead, a few Newton steps are taken for each sub-problem generated. A criterion for setting the move limit is described that reduces or eliminates step-size reduction during line search. The method was found to perform well for unconstrained and linearly constrained optimization problems. It is particularly suitable for designing the optimal shape and topology of structures by minimizing their compliance, since it requires very few function evaluations, does not require the Hessian of the objective function, and evaluates its gradient only once for every sub-problem generated. [S1050-0472(00)01603-2]

Author(s):  
Ashok V. Kumar ◽  
David C. Gossard

Abstract A sequential approximation technique for non-linear programming is presented here that is particularly suited for problems in engineering design and structural optimization, where the number of variables is very large and function and sensitivity evaluations are computationally expensive. A sequence of sub-problems is iteratively generated using a linear approximation of the objective function and setting move limits on the variables using a barrier method. These sub-problems are strictly convex. Computation per iteration is significantly reduced by not solving the sub-problems exactly; instead, at each iteration a few Newton steps are taken for the sub-problem. A criterion for moving the move limit is described that reduces or eliminates step-size reduction during line search. The method was found to perform well for unconstrained and linearly constrained optimization problems. It requires very few function evaluations, does not require the Hessian of the objective function, and evaluates its gradient only once per iteration.
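To make the sub-problem iteration concrete, here is a minimal sketch of one sub-problem solve, assuming box-shaped move limits of half-width `delta` and a scalar barrier parameter `mu` (both hypothetical choices; the paper's own move-limit criterion is not reproduced here). Because the barrier Hessian is diagonal, each Newton step costs O(n) and no Hessian of the objective is needed.

```python
import numpy as np

def barrier_subproblem_steps(x, grad, delta=0.2, mu=1e-2, newton_steps=3):
    """A few Newton steps on the strictly convex sub-problem
        minimize  grad^T y - mu * sum(log(u - y) + log(y - l)),
    where [l, u] = [x - delta, x + delta] are box move limits around the
    current iterate (a sketch; parameter choices are illustrative only)."""
    l, u = x - delta, x + delta
    y = x.copy()
    for _ in range(newton_steps):
        # Gradient and diagonal Hessian of the barrier sub-problem at y.
        g = grad + mu * (1.0 / (u - y) - 1.0 / (y - l))
        h = mu * (1.0 / (u - y) ** 2 + 1.0 / (y - l) ** 2)
        step = -g / h
        # Damp the step so the iterate stays strictly inside the move limits.
        t = 1.0
        while np.any(y + t * step <= l) or np.any(y + t * step >= u):
            t *= 0.5
        y = y + t * step
    return y
```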


2020 ◽  
Vol 6 (8) ◽  
pp. 1411-1427 ◽  
Author(s):  
Yan-Cang Li ◽  
Pei-Dong Xu

In order to find a more effective method for structural optimization, an improved wolf pack optimization algorithm is proposed. The traditional wolf pack algorithm often falls into local optima and suffers from low precision. Therefore, an adaptive step-size search and the Lévy flight strategy were employed to overcome the premature convergence of the basic wolf pack algorithm. First, the adaptive variation of the step size improves the fineness of the search and effectively accelerates convergence. Second, the Lévy flight search strategy expands the search scope and improves the global search ability of the algorithm. Finally, to verify its performance, the improved wolf pack algorithm was tested in simulation experiments and on actual cases and compared with other algorithms. The experiments show that the improved wolf pack algorithm has better global optimization ability. This study provides a more effective approach to structural optimization problems.
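As an illustration of the Lévy flight ingredient, the sketch below draws heavy-tailed steps with Mantegna's algorithm, a standard construction in Lévy-flight metaheuristics; the paper's exact variant and parameter settings are not given in the abstract, so everything here is an assumption.

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(dim, beta=1.5, rng=None):
    """Draw a Levy-flight step via Mantegna's algorithm (a standard
    construction, not necessarily the paper's exact variant).
    beta is the Levy stability index, typically 1 < beta <= 2."""
    rng = rng or np.random.default_rng()
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2)
               / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)  # heavy-tailed step lengths

# Hypothetical usage: perturb a wolf's position toward the leader, with an
# adaptive step size that would shrink over the iterations.
# new_pos = wolf_pos + step_size * levy_step(len(wolf_pos)) * (leader_pos - wolf_pos)
```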


2021 ◽  
Vol 2 (1) ◽  
pp. 33
Author(s):  
Nasiru Salihu ◽  
Mathew Remilekun Odekunle ◽  
Also Mohammed Saleh ◽  
Suraj Salihu

Some problems have no analytical solution or are too difficult for scientists, engineers, and mathematicians to solve directly, so the development of numerical methods to obtain approximate solutions became necessary. Gradient methods are more efficient when the function to be minimized is continuously differentiable. This article therefore presents a new hybrid Conjugate Gradient (CG) method for solving unconstrained optimization problems. The method requires only first-order derivatives; it overcomes the steepest descent method's shortcoming of slow convergence, and it need not store or compute the second-order derivatives required by Newton's method. The CG update parameter is derived from the Dai-Liao conjugacy condition as a convex combination of the Hestenes-Stiefel and Fletcher-Reeves parameters, employing an optimal modulating choice parameter to avoid matrix storage. Numerical computation adopts an inexact line search to obtain a step size that generates the descent property, showing that the algorithm is robust and efficient. The scheme converges globally under the Wolfe line search, and methods of this kind are suitable for compressive sensing problems and M-tensor systems.
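A sketch of the hybrid direction described above, with the modulating parameter `theta` left free; the paper derives an optimal `theta` from the Dai-Liao condition, which is not reproduced here.

```python
import numpy as np

def hybrid_cg_direction(g_new, g_old, d_old, theta):
    """Search direction d = -g_new + beta * d_old, where beta is a convex
    combination of the Hestenes-Stiefel and Fletcher-Reeves parameters.
    theta in [0, 1] is the modulating parameter (the paper's optimal
    choice from the Dai-Liao conjugacy condition is not reproduced)."""
    y = g_new - g_old
    beta_hs = (g_new @ y) / (d_old @ y)
    beta_fr = (g_new @ g_new) / (g_old @ g_old)
    beta = (1.0 - theta) * beta_hs + theta * beta_fr
    return -g_new + beta * d_old
```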


Author(s):  
Ion Necoara ◽  
Martin Takáč

Abstract In this paper we consider large-scale smooth optimization problems with multiple linear coupled constraints. Due to the non-separability of the constraints, arbitrary random sketching is not guaranteed to work. Thus, we first investigate necessary and sufficient conditions on the sketch sampling for the algorithms to be well defined. Based on these sampling conditions we develop new sketch descent methods for solving general smooth linearly constrained problems, in particular, random sketch descent (RSD) and accelerated random sketch descent (A-RSD) methods. To our knowledge, this is the first convergence analysis of RSD algorithms for optimization problems with multiple non-separable linear constraints. For the general case, when the objective function is smooth and non-convex, we prove a sublinear rate in expectation for the non-accelerated variant under an appropriate optimality measure. In the smooth convex case, we derive for both algorithms, RSD and A-RSD, sublinear convergence rates in the expected values of the objective function. Additionally, if the objective function satisfies a strong convexity type condition, both algorithms converge linearly in expectation. In special cases where complexity bounds are known for particular sketching algorithms, such as coordinate descent methods for optimization problems with a single linear coupled constraint, our theory recovers the best known bounds. Finally, we present several numerical examples to illustrate the performance of our new algorithms.
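For intuition, the sketch below shows the single-coupled-constraint special case mentioned above: sampling a pair of coordinates and moving along e_i - e_j keeps the constraint sum(x) = b satisfied exactly. The fixed step size `alpha` is a hypothetical simplification, not the paper's rule.

```python
import numpy as np

def rsd_pair_step(x, grad_f, alpha=0.1, rng=None):
    """One random-sketch descent step for  min f(x)  s.t.  sum(x) = b
    (a sketch of the special case; alpha is an illustrative step size).
    The update is steepest descent restricted to span{e_i - e_j}, which
    leaves sum(x) unchanged, so feasibility is preserved exactly."""
    rng = rng or np.random.default_rng()
    i, j = rng.choice(len(x), size=2, replace=False)
    g = grad_f(x)
    delta = alpha * (g[i] - g[j]) / 2.0  # projection of -g onto e_i - e_j
    x = x.copy()
    x[i] -= delta
    x[j] += delta
    return x
```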


2021 ◽  
Vol 66 (4) ◽  
pp. 783-792
Author(s):  
Selma Lamri ◽  
Bachir Merikhi ◽  
Mohamed Achache ◽  
...  

In this paper, a weighted logarithmic barrier interior-point method for solving linearly constrained convex optimization problems is presented. Unlike the classical central path, the barrier parameter associated with the perturbed barrier problems is not a scalar but a weighted positive vector. This modification gives theoretical flexibility in its convergence and its numerical performance. In addition, the method uses a Newton descent direction, and the computation of the step size along this direction is based on a new efficient technique called the tangent method. The practical efficiency of our approach is shown through numerical results.
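The core idea of a weighted barrier can be sketched as follows: replace the scalar barrier parameter by a positive weight vector `w` in the log-barrier term and take Newton directions on the resulting perturbed problem. This is a generic illustration under assumed inequality constraints Ax <= b; the paper's tangent-method step-size rule is not reproduced.

```python
import numpy as np

def weighted_barrier_newton_dir(x, grad_f, hess_f, A, b, w):
    """Newton direction for the weighted log-barrier function
        phi(x) = f(x) - sum_i w_i * log(b_i - a_i^T x),
    where the positive vector w replaces the usual scalar barrier
    parameter (a sketch of the idea, not the paper's exact method)."""
    s = b - A @ x                                    # slacks, assumed > 0
    g = grad_f(x) + A.T @ (w / s)                    # gradient of phi
    H = hess_f(x) + A.T @ np.diag(w / s ** 2) @ A    # Hessian of phi
    return np.linalg.solve(H, -g)
```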


2021 ◽  
Vol 11 (10) ◽  
pp. 4708
Author(s):  
Junho Chun

Structural optimization aims to achieve a structural design that provides the best performance while satisfying the given design constraints. When uncertainties in design and conditions are taken into account, reliability-based design optimization (RBDO) is adopted to identify solutions with acceptable failure probabilities. This paper outlines a method for sensitivity analysis, reliability assessment, and RBDO for structures. Complex-step (CS) approximation and the first-order reliability method (FORM) are unified in the sensitivity analysis of a probabilistic constraint, which streamlines the setup of optimization problems and enhances their implementation in RBDO. Complex-step approximation utilizes an imaginary number as a step size to compute the first derivative without subtractive cancellations in the formula, which have been observed to significantly affect the accuracy of calculations in finite difference methods. Thus, the proposed method can select a very small step size for the first derivative to minimize truncation errors, while achieving accuracy within the machine precision. This approach integrates complex-step approximation into the FORM to compute sensitivity and assess reliability. The proposed method of RBDO is tested on structural optimization problems across a range of statistical variations, demonstrating that performance benefits can be achieved while satisfying precise probabilistic constraints.
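The complex-step formula itself fits in a few lines; the sketch below is a generic illustration of the technique (the test function is the classic Squire-Trapp example, not taken from this paper).

```python
import numpy as np

def complex_step_derivative(f, x, h=1e-30):
    """First derivative of a real-analytic f via the complex-step formula
        f'(x) ~= Im(f(x + i*h)) / h.
    The formula involves no subtraction, hence no subtractive cancellation,
    so h can be taken near machine precision without loss of accuracy."""
    return np.imag(f(x + 1j * h)) / h

# Example: the classic Squire-Trapp test function, evaluated at x = 1.5.
f = lambda x: np.exp(x) / np.sqrt(np.sin(x) ** 3 + np.cos(x) ** 3)
print(complex_step_derivative(f, 1.5))
```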


Filomat ◽  
2018 ◽  
Vol 32 (19) ◽  
pp. 6799-6807
Author(s):  
Natasa Krejic ◽  
Sanja Loncar

A nonmonotone line search method is proposed and analyzed for solving unconstrained optimization problems whose objective function is given in the form of a mathematical expectation. The method works with approximate values of the objective function obtained with increasing sample sizes and improves the accuracy gradually. The nonmonotone rule significantly enlarges the set of admissible search directions and prevents unnecessarily small steps at the beginning of the iterative procedure. Convergence is shown for any search direction that approaches the negative gradient in the limit, and the convergence results are obtained in the sense of zero upper density. Initial numerical results confirm the theoretical results and show the efficiency of the proposed approach.
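A generic nonmonotone Armijo backtracking rule of the kind discussed above (in the Grippo-Lampariello-Lucidi style) is sketched below; the sample-size-increasing aspect of the paper's method is omitted, and all parameter values are illustrative.

```python
def nonmonotone_armijo(f, x, d, g, f_hist, c=1e-4, tau=0.5, t=1.0, max_iter=30):
    """Nonmonotone Armijo backtracking: accept the step as soon as
        f(x + t*d) <= max(f_hist) + c * t * g^T d,
    where f_hist holds the last few objective values. Comparing against
    the max of recent values admits larger steps than the monotone rule.
    x, d, g are numpy arrays; d is assumed a descent direction (g @ d < 0)."""
    f_ref = max(f_hist)
    slope = g @ d
    for _ in range(max_iter):
        if f(x + t * d) <= f_ref + c * t * slope:
            return t
        t *= tau
    return t
```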


2014 ◽  
Vol 8 (1) ◽  
pp. 218-221 ◽  
Author(s):  
Ping Hu ◽  
Zong-yao Wang

We propose a non-monotone line search combination rule for unconstrained optimization problems, establish the corresponding non-monotone search algorithm, and prove its global convergence. Finally, numerical experiments illustrate the effectiveness of the new non-monotone combination search algorithm.

