An Efficient Strategy for the Robust Optimization of Large Scale Nonlinear Design Problems

Author(s): Balaji Ramakrishnan, S. S. Rao

The application of the concept of robust design, based on Taguchi's design philosophy, to the formulation and solution of large, computationally intensive, nonlinear optimization problems whose analysis rests on a linear system of equations is investigated. The design problem is formulated using a robust optimization procedure that takes the expected value of Taguchi's loss function as the objective. An efficient solution scheme is introduced that uses approximate expressions for the gradients and employs a fast reanalysis technique for their evaluation. The approach is validated on a simple minimum-cost welded beam design problem, in which the dimensions of the weldment and the beam are found without exceeding limits on the shear stress in the weld, the normal stress in the beam, the buckling load on the beam, and the tip deflection of the beam. The method is then used to obtain the optimal shape of an engine connecting rod that minimizes its weight subject to geometric constraints on the shape variables and behavioral constraints such as stress and buckling loads. The results obtained by solving the conventional and robust formulations of this problem, and the considerable savings in time achieved by the fast reanalysis technique, are presented. The methodology is expected to be useful in reducing the computational effort of obtaining insensitive designs of large structures and machines.
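As a rough illustration of the robust objective described in this abstract, the sketch below minimizes the expected value of a quadratic (Taguchi-type) loss, with the response variance obtained by first-order propagation through finite-difference gradients. The response function, loss coefficient, target and noise levels are all invented for illustration; in the paper, the gradient evaluations are where the fast reanalysis technique would cheaply re-solve the underlying linear system.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical response of interest, e.g. the tip deflection of a beam as
# a function of the design variables x. Stands in for the linear-system
# analysis the abstract describes.
def response(x):
    return x[0] ** 2 + 3.0 * x[0] * x[1] + x[1] ** 2

def grad_response(x, h=1e-6):
    # Forward-difference gradient; in the paper this is where the fast
    # reanalysis technique would make repeated solves cheap.
    g = np.zeros_like(x)
    f0 = response(x)
    for i in range(len(x)):
        xp = x.copy()
        xp[i] += h
        g[i] = (response(xp) - f0) / h
    return g

TARGET = 10.0                      # desired response value (assumed)
K_LOSS = 1.0                       # Taguchi loss coefficient (assumed)
SIGMA_X = np.array([0.05, 0.05])   # std. dev. of design-variable noise

def expected_taguchi_loss(x):
    # E[L] = k * ((mu_y - target)^2 + sigma_y^2), with sigma_y^2 from a
    # first-order Taylor expansion: sigma_y^2 ~ sum_i (df/dx_i * sigma_i)^2.
    mu_y = response(x)
    g = grad_response(x)
    var_y = np.sum((g * SIGMA_X) ** 2)
    return K_LOSS * ((mu_y - TARGET) ** 2 + var_y)

res = minimize(expected_taguchi_loss, x0=np.array([1.0, 1.0]), method="BFGS")
print(res.x, res.fun)
```

Minimizing the expected loss simultaneously pulls the mean response toward the target and shrinks the gradient magnitudes, which is what makes the resulting design insensitive to noise in the design variables.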

2010, Vol. 2010, pp. 1-16
Author(s): Paulraj S., Sumathi P.

In most real-world optimization problems, the objective function and the constraints can be formulated as linear functions of the independent variables. Linear programming (LP) is the process of optimizing a linear function subject to a finite number of linear equality and inequality constraints. Solving linear programming problems efficiently has long been a fascinating pursuit for computer scientists and mathematicians. The computational complexity of an LP problem depends on its numbers of constraints and variables. Large-scale LP problems quite often contain many constraints that are redundant, or that cause infeasibility, on account of inefficient formulation or errors in data input. The presence of redundant constraints does not alter the optimal solution(s); nevertheless, they consume extra computational effort. Many researchers have proposed approaches for identifying the redundant constraints in linear programming problems. This paper compares five such methods and discusses the efficiency of each by solving LP problems of various sizes as well as Netlib problems. The algorithms of each method are coded in the C programming language. The computational results are presented and analyzed.
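One classical test in this family (a reasonable stand-in for the kinds of methods the paper compares, not a reproduction of any one of them) declares constraint i redundant if maximizing its left-hand side subject to the remaining constraints cannot exceed its right-hand side. A minimal sketch with scipy.optimize.linprog, on an invented three-constraint example:

```python
import numpy as np
from scipy.optimize import linprog

# Feasible region: A x <= b, x >= 0. The third row is redundant here
# (it is implied by the first two), which the test below detects.
A = np.array([[1.0, 1.0],
              [2.0, 1.0],
              [3.0, 2.0]])
b = np.array([4.0, 6.0, 10.0])

def is_redundant(A, b, i):
    """Classical LP test: maximize the i-th constraint's left-hand side
    subject to all remaining constraints. If the maximum cannot exceed
    b[i], constraint i never binds and is redundant."""
    mask = np.arange(len(b)) != i
    res = linprog(c=-A[i],                    # maximize A[i] @ x
                  A_ub=A[mask], b_ub=b[mask],
                  bounds=[(0, None)] * A.shape[1],
                  method="highs")
    return res.status == 0 and -res.fun <= b[i] + 1e-9

for i in range(len(b)):
    print(f"constraint {i}: redundant = {is_redundant(A, b, i)}")
```

The trade-off the paper quantifies is visible even here: each redundancy test is itself an LP solve, so an identification method must save more effort downstream than it spends.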


Author(s): Nicolò Mazzi, Andreas Grothey, Ken McKinnon, Nagisa Sugishita

This paper proposes an algorithm to efficiently solve large optimization problems which exhibit a column bounded block-diagonal structure, where subproblems differ in right-hand side and cost coefficients. Similar problems are often tackled using cutting-plane algorithms, which allow for an iterative and decomposed solution of the problem. When solving subproblems is computationally expensive and the set of subproblems is large, cutting-plane algorithms may slow down severely. In this context we propose two novel adaptive oracles that yield inexact information, potentially much faster than solving the subproblem. The first adaptive oracle is used to generate inexact but valid cutting planes, and the second gives a valid upper bound on the true optimal objective. These two oracles progressively "adapt" towards the true exact oracle as they are provided with an increasing number of exact solutions, stored throughout the iterations. The adaptive oracles are embedded within a Benders-type algorithm able to handle inexact information. We compare the Benders algorithm with adaptive oracles against a standard Benders algorithm on a stochastic investment planning problem. The proposed algorithm substantially reduces the computational effort needed to obtain an $\epsilon$-optimal solution: in an illustrative case it is 31.9 times faster for a 1.00% convergence tolerance and 15.4 times faster for a 0.01% tolerance.
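The toy sketch below, with invented data, conveys the flavor of the idea rather than the paper's actual oracle construction: when subproblems share the same matrix and cost but differ in right-hand side, every stored dual solution is feasible for all of them, so weak duality turns the pool of stored duals into a cheap source of valid cuts and bounds. Here only one subproblem per iteration is solved exactly; the rest query the pool.

```python
import numpy as np
from scipy.optimize import linprog

# Subproblems share W and q but differ in right-hand side h_s, echoing
# the structure described in the abstract. All data are invented.
q = np.array([2.0, 3.0])
W = np.eye(2)
T = np.eye(2)
H = [np.array([4.0, 5.0]), np.array([6.0, 2.0]), np.array([3.0, 7.0])]
f = np.array([1.0, 1.0])

dual_pool = []  # dual points are feasible for every subproblem (same W, q)

def exact_oracle(s, x):
    # Exact dual solve of subproblem s: max pi.(h_s - T x) s.t. W^T pi <= q.
    rhs = H[s] - T @ x
    res = linprog(c=-rhs, A_ub=W.T, b_ub=q,
                  bounds=[(0, None)] * 2, method="highs")
    dual_pool.append(res.x)
    return -res.fun, res.x

def adaptive_oracle(s, x):
    # Inexact but valid: weak duality makes every stored dual point a
    # lower bound for every subproblem; pick the best without an LP solve.
    rhs = H[s] - T @ x
    vals = [pi @ rhs for pi in dual_pool]
    k = int(np.argmax(vals))
    return vals[k], dual_pool[k]

def solve_master(cuts):
    # Variables (x, theta_1..theta_S); one optimality cut per (s, pi).
    nx, ns = 2, len(H)
    c = np.concatenate([f, np.ones(ns)])
    A_ub, b_ub = [], []
    for s, pi in cuts:
        row = np.zeros(nx + ns)
        row[:nx] = -(pi @ T)    # theta_s >= pi.(h_s - T x), rearranged
        row[nx + s] = -1.0
        A_ub.append(row)
        b_ub.append(-(pi @ H[s]))
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0, 10)] * nx + [(-100, None)] * ns,
                  method="highs")
    return res.x[:nx], res.x[nx:]

cuts = []
x = np.zeros(2)
for s in range(len(H)):                  # seed one exact cut per subproblem
    _, pi = exact_oracle(s, x)
    cuts.append((s, pi))

for it in range(30):
    x, theta = solve_master(cuts)
    violation = 0.0
    for s in range(len(H)):
        if s == it % len(H):             # refresh one subproblem exactly
            val, pi = exact_oracle(s, x)
        else:                            # the rest use the cheap adaptive oracle
            val, pi = adaptive_oracle(s, x)
        if val > theta[s] + 1e-8:
            cuts.append((s, pi))
            violation += val - theta[s]
    if violation < 1e-8:                 # a real implementation would certify
        break                            # convergence with exact solves only

print("x =", x, "total cost =", f @ x + theta.sum())
```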


Symmetry, 2020, Vol. 12 (9), pp. 1529
Author(s): Jung-Fa Tsai, Ming-Hua Lin, Duan-Yi Wen

Several structural design problems involving continuous and discrete variables are very challenging because of their combinatorial and non-convex characteristics. Although a deterministic optimization approach theoretically guarantees finding the global optimum, it usually imposes a significant burden in computational time. This article studies the deterministic approach for globally solving mixed-discrete structural optimization problems. An improved method that symmetrically reduces the number of constraints needed to linearly express signomial terms with purely discrete variables is applied, significantly enhancing the computational efficiency of obtaining the exact global optimum of the mixed-discrete structural design problem. Numerical experiments on the stepped cantilever beam design problem and the pressure vessel design problem show the efficiency and effectiveness of the presented approach. Compared with existing methods, this study introduces fewer convex terms and constraints to transform the mixed-discrete structural problem, and uses much less computational time to solve the reformulated problem to global optimality.
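A minimal sketch of the underlying linearization idea (the naive textbook version; the paper's contribution is a symmetric scheme that needs fewer constraints than this one): a signomial term in purely discrete variables becomes linear once each variable is replaced by binary selection variables, and products of binaries are linearizable by standard inequalities. All numbers below are invented, and correctness is checked by enumeration.

```python
import numpy as np
from itertools import product

# Signomial term s(x) = x1**1.5 * x2**-0.5 with discrete x1, x2.
D1 = np.array([1.0, 2.0, 3.0])       # admissible values of x1
D2 = np.array([0.5, 1.0, 4.0])       # admissible values of x2

def direct(x1, x2):
    return x1 ** 1.5 * x2 ** -0.5

# Linearized form: s = sum_{i,j} (d1_i**1.5 * d2_j**-0.5) * w_ij, where
# w_ij = z1_i * z2_j is itself linearizable by the standard constraints
#   w_ij <= z1_i,  w_ij <= z2_j,  w_ij >= z1_i + z2_j - 1,  w_ij >= 0.
C = np.outer(D1 ** 1.5, D2 ** -0.5)  # precomputed constant coefficients

for i, j in product(range(3), range(3)):
    z1 = np.eye(3)[i]                # z1 selects x1 = D1[i]
    z2 = np.eye(3)[j]                # z2 selects x2 = D2[j]
    w = np.outer(z1, z2)             # w_ij = z1_i * z2_j for binary z
    assert np.isclose(np.sum(C * w), direct(D1[i], D2[j]))
print("linearized term matches the signomial on every discrete choice")
```

The nonlinearity is absorbed into the precomputed coefficients C, so the optimization model sees only linear expressions in the binaries; reducing how many such auxiliary constraints are needed is exactly where the paper's efficiency gain comes from.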


2021, Vol. 11 (24), pp. 12005
Author(s): Nikos Ath. Kallioras, Alexandros N. Nordas, Nikos D. Lagaros

Topology optimization problems place substantial demands on computing resources, which become prohibitive when large-scale design domains are discretized with fine finite element meshes. A Deep Learning-assisted Topology OPtimization (DLTOP) methodology was previously developed by the authors; it employs deep learning techniques to predict the optimized system configuration, substantially reducing the computational effort of the optimization algorithm and overcoming potential bottlenecks. Building upon DLTOP, this study presents a novel Deep Learning-based Model Upgrading (DLMU) scheme. The scheme utilizes reduced order (surrogate) modeling techniques, which downscale complex models while preserving their original behavioral characteristics, thereby reducing the computational demand with limited impact on accuracy. The novelty of DLMU lies in employing deep learning to extrapolate the results of optimized reduced order models to an optimized fully refined model of the design domain, achieving a remarkable reduction of the computational demand in comparison with DLTOP and other existing techniques. The effectiveness, accuracy and versatility of the novel DLMU scheme are demonstrated via its application to a series of benchmark topology optimization problems from the literature.
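The sketch below is a hypothetical stand-in for the DLMU mapping, not the authors' network: a small PyTorch model that takes an optimized density field on a coarse (reduced order) mesh and predicts the corresponding field on a 4x-refined mesh. The architecture, layer sizes and scale factor are illustrative assumptions; in the scheme described above, such a prediction would spare most fine-mesh optimization iterations.

```python
import torch
import torch.nn as nn

class CoarseToFine(nn.Module):
    """Illustrative coarse-to-fine density predictor (assumed architecture)."""
    def __init__(self, scale=4):
        super().__init__()
        self.scale = scale
        self.refine = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=3, padding=1), nn.Sigmoid(),  # densities in [0, 1]
        )

    def forward(self, rho_coarse):
        # Upsample the coarse densities, then let the conv layers sharpen
        # the members and boundaries that interpolation alone blurs.
        up = nn.functional.interpolate(rho_coarse, scale_factor=self.scale,
                                       mode="bilinear", align_corners=False)
        return self.refine(up)

model = CoarseToFine()
rho_coarse = torch.rand(1, 1, 20, 40)   # optimized reduced-order density field
rho_fine = model(rho_coarse)            # predicted refined field
print(rho_fine.shape)                   # torch.Size([1, 1, 80, 160])
```

In practice such a model would be trained on pairs of coarse-mesh and fine-mesh optimization results, and its prediction used as a warm start for a few final iterations of the standard optimizer.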


2011, Vol. 19 (4), pp. 525-560
Author(s): Rajan Filomeno Coelho, Philippe Bouillard

This paper addresses continuous optimization problems with multiple objectives and parameter uncertainty defined by probability distributions. First, a reliability-based formulation is proposed, defining the nondeterministic Pareto set as the minimal solutions such that user-defined probabilities of nondominance and constraint satisfaction are guaranteed. The formulation can be incorporated with minor modifications into a multiobjective evolutionary algorithm (here: the nondominated sorting genetic algorithm II). Then, with a view to applying the method to large-scale structural engineering problems, for which the computational effort devoted to the optimization algorithm itself is negligible in comparison with the simulation, the second part of the study is concerned with reducing the number of function evaluations while avoiding modification of the simulation code. To this end, nonintrusive stochastic metamodels are developed in two steps. First, for a given sampling of the deterministic variables, a preliminary decomposition of the random responses (objectives and constraints) is performed through polynomial chaos expansion (PCE), allowing the responses to be represented by a limited set of coefficients. Then, a metamodel is built by kriging interpolation of the PCE coefficients with respect to the deterministic variables. The method has been tested successfully on seven analytical test cases and on the 10-bar truss benchmark, demonstrating the potential of the proposed approach to provide reliability-based Pareto solutions at a reasonable computational cost.
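A compressed sketch of the two-step metamodel, with one deterministic variable and one standard normal parameter (the response g, sample sizes and PCE order are invented; the paper's test cases are richer): PCE coefficients are fitted by least squares on probabilists' Hermite polynomials at each design sample, then each coefficient is interpolated over the design variable by kriging, here via scikit-learn's GaussianProcessRegressor.

```python
from math import factorial
import numpy as np
from numpy.polynomial.hermite_e import hermevander
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)
DEG = 3  # PCE truncation order (assumed)

# Hypothetical stochastic response: deterministic design variable x,
# standard normal random parameter xi (stand-in for the simulation code).
def g(x, xi):
    return (x - 2.0) ** 2 + 0.5 * x * xi + 0.1 * xi ** 2

# Step 1: at each design sample, fit PCE coefficients by least squares
# on probabilists' Hermite polynomials of xi.
X_design = np.linspace(0.0, 4.0, 9)
coeffs = []
for x in X_design:
    xi = rng.standard_normal(200)
    Psi = hermevander(xi, DEG)          # basis matrix, shape (200, DEG + 1)
    c, *_ = np.linalg.lstsq(Psi, g(x, xi), rcond=None)
    coeffs.append(c)
coeffs = np.array(coeffs)               # one coefficient vector per design

# Step 2: kriging (GP) interpolation of each PCE coefficient over the
# deterministic variable, giving a nonintrusive stochastic metamodel.
gps = [GaussianProcessRegressor().fit(X_design[:, None], coeffs[:, j])
       for j in range(DEG + 1)]

def predict_mean_std(x_new):
    c = np.array([gp.predict(np.array([[x_new]]))[0] for gp in gps])
    # HermiteE polynomials are orthogonal under the N(0,1) weight with
    # squared norm k!, so: mean = c0, var = sum_{k>=1} k! * c_k^2.
    var = sum(factorial(k) * c[k] ** 2 for k in range(1, DEG + 1))
    return c[0], np.sqrt(var)

print(predict_mean_std(1.0))
```

Because the statistics of the response at an unsampled design follow directly from the predicted coefficients, the metamodel can be queried inside the reliability-based optimization loop without further calls to the simulation.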


Author(s): W. Hu, M. Li, S. Azarm, S. Al Hashimi, A. Almansoori, ...

Many real-world engineering design optimization problems are multi-objective and have uncertain parameters. For such problems it is useful to obtain design solutions that are both multi-objectively optimum and robust. A robust design is one whose objective and constraint function variations under uncertainty remain within an acceptable range. While the literature reports many techniques for robust optimization of single-objective problems, very few papers address robust optimization of multi-objective problems. The Multi-Objective Robust Optimization (MORO) technique with interval uncertainty proposed in this paper is a significant improvement, with respect to computational effort, over a previously reported MORO technique. In the proposed technique, a master problem solves a relaxed optimization problem whose feasible domain is iteratively confined by constraint cuts determined by the solutions of a sub-problem. The proposed approach and the synergy between the master problem and sub-problem are demonstrated on three examples. The results show general agreement between the solutions from the proposed and the previous MORO techniques. Moreover, the number of function calls needed by the proposed technique is an order of magnitude smaller than that of the previous technique.
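A single-objective toy of the master/sub-problem interplay (the paper's setting is multi-objective; one objective and invented data keep the sketch compact): the master optimizes subject only to the constraint cuts accumulated so far, and the sub-problem searches the uncertainty interval for the worst-case parameter, adding a cut whenever the current design is not robustly feasible.

```python
import numpy as np
from scipy.optimize import minimize

# Design x, interval-uncertain parameter p in [P_LO, P_HI].
P_LO, P_HI = 0.8, 1.2

def objective(x):
    return (x[0] - 3.0) ** 2 + (x[1] - 2.0) ** 2

def g(x, p):
    # Constraint g(x, p) <= 0 must hold for every p in the interval.
    return p * x[0] + x[1] - 5.0

worst_cases = [1.0]  # parameter scenarios defining the relaxed master problem

for it in range(20):
    # Master: optimize subject only to the scenarios (constraint cuts)
    # accumulated so far; this is a relaxation of the robust problem.
    cons = [{"type": "ineq", "fun": (lambda x, p=p: -g(x, p))}
            for p in worst_cases]
    x = minimize(objective, x0=np.array([0.0, 0.0]), constraints=cons).x

    # Sub-problem: worst-case parameter for the current design. Here an
    # interval endpoint maximizes g; in general an inner optimization runs.
    p_worst = max((P_LO, P_HI), key=lambda p: g(x, p))
    if g(x, p_worst) <= 1e-6:
        break                       # feasible for all p: design is robust
    worst_cases.append(p_worst)     # confine the master's feasible domain

print("robust design:", x)
```

The loop terminates once no parameter in the interval violates the constraints, so only a handful of cuts, rather than a dense sampling of the uncertainty set, are ever evaluated; this is the source of the reduction in function calls reported in the abstract.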


Author(s): Paul Cronin, Harry Woerde, Rob Vasbinder

1999, Vol. 9 (3), pp. 755-778
Author(s): Paul T. Boggs, Anthony J. Kearsley, Jon W. Tolle
