A Mixed Interval Arithmetic/Affine Arithmetic Approach for Robust Design Optimization With Interval Uncertainty

2016 ◽  
Vol 138 (4) ◽  
Author(s):  
Shaobo Wang ◽  
Xiangyun Qing

Uncertainty is ubiquitous throughout engineering design processes. Robust optimization (RO) aims to find optimal solutions that are relatively insensitive to input uncertainty. In this paper, a new approach is presented for single-objective RO problems with an objective function and constraints that are continuous and differentiable. Both the design variables and the parameters with interval uncertainties are represented as affine forms. A mixed interval arithmetic (IA)/affine arithmetic (AA) model is then utilized to obtain affine approximations of the objective and feasibility robustness constraint functions. Consequently, the RO problem is converted into a deterministic problem by bounding all constraints. Finally, nonlinear optimization solvers are applied to obtain a robust optimal solution of the deterministic optimization problem. Numerical and engineering examples are presented to demonstrate the advantages and disadvantages of the proposed approach. Its main advantage lies in the simplicity of the conversion from a nonlinear RO problem with interval uncertainty to a deterministic single-loop optimization problem. Although the approach cannot be applied to problems with black-box models, it requires minimal IA/AA computation and allows widely used, advanced solvers to be applied to the resulting single-loop problem, making it well suited to engineering applications.
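A minimal sketch of the affine-form idea behind such an IA/AA model: each uncertain quantity is written as a center value plus coefficients on noise symbols in [-1, 1], affine operations propagate the coefficients, and bounding the result yields the interval used in the deterministic worst-case constraint. The class and function names (AffineForm, to_interval) and the numbers are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of an affine form x = x0 + sum_i x_i * eps_i with eps_i in [-1, 1].
# Names and values are illustrative, not from the paper.

class AffineForm:
    def __init__(self, center, partials=None):
        self.center = center                 # x0
        self.partials = dict(partials or {}) # noise symbol -> coefficient x_i

    def __add__(self, other):
        coeffs = dict(self.partials)
        for k, v in other.partials.items():
            coeffs[k] = coeffs.get(k, 0.0) + v
        return AffineForm(self.center + other.center, coeffs)

    def scale(self, a):
        return AffineForm(a * self.center,
                          {k: a * v for k, v in self.partials.items()})

    def to_interval(self):
        # Bounding the affine form gives the interval used to build
        # the deterministic (worst-case) constraint.
        radius = sum(abs(v) for v in self.partials.values())
        return self.center - radius, self.center + radius


# A design variable x in [1.8, 2.2] and a parameter p in [0.9, 1.1]:
x = AffineForm(2.0, {"eps_x": 0.2})
p = AffineForm(1.0, {"eps_p": 0.1})
g = x + p.scale(3.0)            # affine approximation of g(x, p) = x + 3p
g_lo, g_hi = g.to_interval()    # feasibility robustness: require g_hi <= g_max
```

The upper bound g_hi then stands in for g in the deterministic problem handed to a standard nonlinear solver.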

2012 ◽  
Vol 134 (10) ◽  
Author(s):  
Jianhua Zhou ◽  
Shuo Cheng ◽  
Mian Li

Uncertainty plays a critical role in engineering design, as even a small amount of uncertainty could make an optimal design solution infeasible. The goal of robust optimization is to find a solution that is both optimal and insensitive to uncertainty that may exist in parameters and design variables. In this paper, a novel approach, sequential quadratic programming for robust optimization (SQP-RO), is proposed to solve single-objective continuous nonlinear optimization problems with interval uncertainty in parameters and design variables. This new SQP-RO is developed based on a classic SQP procedure with additional calculations for constraints on objective robustness, feasibility robustness, or both. The obtained solution is locally optimal and robust. Eight numerical and engineering examples with different levels of complexity are used to demonstrate the applicability and efficiency of the proposed SQP-RO in comparison with its deterministic SQP counterpart and RO approaches using genetic algorithms. The objective and/or feasibility robustness is verified via Monte Carlo simulations.
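The sketch below illustrates the kind of first-order objective/feasibility robustness constraints that SQP-based RO methods build on: worst-case constraint values and objective variation are estimated from sensitivities over the interval half-widths and handed to an off-the-shelf SQP solver. The functions, interval widths, and the allowable objective variation are assumed for illustration; this is not the paper's exact SQP-RO procedure.

```python
# Hedged sketch of first-order feasibility/objective robustness constraints
# of the kind SQP-based RO methods use; not the paper's exact formulation.
import numpy as np
from scipy.optimize import minimize

dp = np.array([0.05, 0.05])      # interval half-widths of uncertain parameters (assumed)
dx = np.array([0.02, 0.02])      # interval half-widths of design variables (assumed)
f0_var_limit = 0.5               # acceptable objective variation (assumed)

def f(x, p):
    return (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2 + p[0] * x[0] * x[1]

def g(x, p):
    return x[0] + p[1] * x[1] - 3.0          # g <= 0 required

def grad(fun, x, p, wrt="x", eps=1e-6):
    # forward-difference sensitivities with respect to x or p
    base = fun(x, p)
    v = x if wrt == "x" else p
    out = np.zeros_like(v)
    for i in range(v.size):
        vp = v.copy()
        vp[i] += eps
        out[i] = ((fun(vp, p) if wrt == "x" else fun(x, vp)) - base) / eps
    return out

p0 = np.array([1.0, 1.0])        # nominal parameter values

def worst_case_g(x):
    # linearized worst case over both parameter and design-variable intervals
    return (g(x, p0)
            + np.abs(grad(g, x, p0, "p")) @ dp
            + np.abs(grad(g, x, p0, "x")) @ dx)

def objective_variation(x):
    # linearized estimate of how much f can vary over the intervals
    return (np.abs(grad(f, x, p0, "x")) @ dx
            + np.abs(grad(f, x, p0, "p")) @ dp)

res = minimize(lambda x: f(x, p0), x0=np.array([1.0, 1.0]), method="SLSQP",
               constraints=[{"type": "ineq", "fun": lambda x: -worst_case_g(x)},
                            {"type": "ineq",
                             "fun": lambda x: f0_var_limit - objective_variation(x)}])
print(res.x, res.fun)
```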



2011 ◽  
Vol 133 (6) ◽  
Author(s):  
W. Hu ◽  
M. Li ◽  
S. Azarm ◽  
A. Almansoori

Many engineering optimization problems are multi-objective, constrained, and subject to uncertainty in their inputs. For such problems it is desirable to obtain solutions that are multi-objectively optimum and robust. A robust solution is one whose objective and constraint functions, under input uncertainty, vary only within an acceptable range. This paper presents a new approximation-assisted multi-objective robust optimization (AA-MORO) technique for problems with interval uncertainty. The technique is a significant improvement, in terms of computational effort, over previously reported MORO techniques. AA-MORO includes an upper-level problem that solves a multi-objective optimization problem whose feasible domain is iteratively restricted by constraint cuts determined by a lower-level optimization problem. AA-MORO also includes an online approximation wherein optimal solutions from the upper- and lower-level optimization problems are used to iteratively improve an approximation of the objective and constraint functions. Several examples are used to test the proposed technique. The test results show that AA-MORO reasonably approximates solutions obtained from previous MORO approaches while its computational effort, in terms of the number of function calls, is significantly reduced.
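A much-simplified sketch of the upper/lower-level nesting described above, with a single objective, a direct solver at both levels, and the paper's constraint cuts and online approximation omitted; all functions and bounds are assumed for illustration.

```python
# Hedged sketch of the upper/lower nesting used by MORO-type methods:
# the lower level computes the worst-case constraint value over the
# uncertainty box; the upper level optimizes subject to that worst case.
import numpy as np
from scipy.optimize import minimize

U_BOUNDS = [(-0.1, 0.1)]                  # interval uncertainty (assumed)

def f(x):
    return (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2

def g(x, u):
    return x[0] + x[1] + u[0] - 3.0       # require g <= 0 for all u

def worst_case_g(x):
    # lower-level problem: maximize g over the uncertainty box
    res = minimize(lambda u: -g(x, u), x0=[0.0], bounds=U_BOUNDS)
    return -res.fun

# upper-level problem: optimize the design subject to the worst case
res = minimize(f, x0=[0.0, 0.0], method="SLSQP",
               constraints=[{"type": "ineq", "fun": lambda x: -worst_case_g(x)}])
print(res.x, worst_case_g(res.x))
```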


Mathematics ◽  
2018 ◽  
Vol 7 (1) ◽  
pp. 12 ◽  
Author(s):  
Xiangkai Sun ◽  
Hongyong Fu ◽  
Jing Zeng

This paper deals with robust quasi-approximate optimal solutions for nonsmooth semi-infinite optimization problems with uncertain data. By virtue of the epigraphs of the conjugates of the constraint functions, we first introduce a robust-type closed convex constraint qualification. Then, using this constraint qualification together with robust optimization techniques, we obtain necessary and sufficient optimality conditions for robust quasi-approximate optimal solutions and exact optimal solutions of the nonsmooth uncertain semi-infinite optimization problem. Moreover, the results obtained in this paper are applied to a nonsmooth uncertain optimization problem with cone constraints.
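For orientation, a hedged sketch of the standard objects involved, with notation assumed rather than taken from the paper: the uncertain semi-infinite program, its robust counterpart, and a robust quasi-approximate (ε-quasi) solution condition of the usual form.

```latex
% Notation assumed for illustration, not taken verbatim from the paper.
\begin{align*}
\text{(USIP)}\quad & \min_{x \in \mathbb{R}^n} \; f(x)
  \quad \text{s.t.}\quad g_t(x, v_t) \le 0,\ \forall t \in T, \\
\text{(robust counterpart)}\quad & \min_{x \in \mathbb{R}^n} \; f(x)
  \quad \text{s.t.}\quad g_t(x, v_t) \le 0,\ \forall v_t \in \mathcal{V}_t,\ \forall t \in T, \\
\text{(robust $\varepsilon$-quasi solution $\bar{x}$)}\quad &
  f(\bar{x}) \le f(x) + \sqrt{\varepsilon}\,\lVert x - \bar{x}\rVert
  \quad \text{for all robust feasible } x .
\end{align*}
```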


2012 ◽  
Vol 433-440 ◽  
pp. 2808-2816
Author(s):  
Jian Jin Zheng ◽  
You Shen Xia

This paper presents a new interactive neural network for solving constrained multi-objective optimization problems. The constrained multi-objective optimization problem is reformulated into two constrained single-objective optimization problems, and two neural networks are designed to obtain the optimal weight and the optimal solution of the two problems, respectively. The proposed algorithm has low computational complexity and is easy to implement. Moreover, the proposed algorithm is successfully applied to the design of digital filters. Computational results illustrate the good performance of the proposed algorithm.
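As a point of reference, the sketch below shows weighted-sum scalarization, a common way of reducing a constrained multi-objective problem to a family of constrained single-objective problems; it illustrates the reformulation idea only and is not the paper's neural-network construction. The objectives and constraint are assumed.

```python
# Minimal sketch of weighted-sum scalarization of a constrained bi-objective
# problem; illustrative only, not the paper's neural-network formulation.
import numpy as np
from scipy.optimize import minimize

def f1(x):
    return (x[0] - 1.0) ** 2 + x[1] ** 2

def f2(x):
    return x[0] ** 2 + (x[1] - 1.0) ** 2

cons = [{"type": "ineq", "fun": lambda x: 1.5 - (x[0] + x[1])}]  # x0 + x1 <= 1.5

pareto = []
for w in np.linspace(0.0, 1.0, 11):
    # each weight gives one constrained single-objective subproblem
    res = minimize(lambda x: w * f1(x) + (1.0 - w) * f2(x),
                   x0=[0.0, 0.0], method="SLSQP", constraints=cons)
    pareto.append((f1(res.x), f2(res.x)))
print(pareto)
```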


2013 ◽  
Vol 816-817 ◽  
pp. 1154-1157
Author(s):  
Xu Yin ◽  
Ai Min Ji

To address problems in optimal design such as easily falling into local optima and low efficiency in collaborative optimization, a new mixed-strategy optimization method combining design of experiments (DOE) with gradient optimization (GO) is proposed. To reduce the influence of the designer's decisions on the optimization result, DOE is used for a preliminary analysis of the function model, and the optimal values obtained in the DOE stage are taken as the initial values of the design variables in the GO stage. The reducer MDO problem is taken as an example to confirm the global search capability, efficiency, and accuracy of the method. The results show that the method not only avoids falling into local solutions but also has a clear advantage in handling complex collaborative optimization problems.
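A hedged sketch of the two-stage idea: a DOE (Latin hypercube sampling here) screens the design space, and the best sample seeds a gradient-based refinement. The test function and sample sizes are assumptions for illustration, not the reducer model from the paper.

```python
# Hedged sketch: DOE screening followed by gradient-based refinement,
# reducing the chance of stopping at a poor local optimum.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import qmc

def objective(x):
    # multimodal test function standing in for the real design model
    return np.sin(3.0 * x[0]) * np.cos(3.0 * x[1]) + 0.1 * (x[0] ** 2 + x[1] ** 2)

lower, upper = np.array([-2.0, -2.0]), np.array([2.0, 2.0])

# Stage 1: DOE screening with Latin hypercube samples
sampler = qmc.LatinHypercube(d=2, seed=0)
samples = qmc.scale(sampler.random(n=50), lower, upper)
x_start = samples[np.argmin([objective(s) for s in samples])]

# Stage 2: gradient-based refinement from the best DOE point
res = minimize(objective, x_start, method="L-BFGS-B",
               bounds=list(zip(lower, upper)))
print(x_start, res.x, res.fun)
```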


Author(s):  
Krupakaran Ravichandran ◽  
Nafiseh Masoudi ◽  
Georges M. Fadel ◽  
Margaret M. Wiecek

Parametric optimization is used to solve problems in which certain design variables are implicit functions of some independent input parameters. The optimal solutions and optimal objective function values are provided as functions of the input parameters over the entire parameter space of interest. Since exact solutions are available only for parametric optimization problems that are linear or convex-quadratic, general non-convex nonlinear problems require approximations. In the present work, we apply three parametric optimization algorithms to a case study of a benchmark structural design problem. The algorithms first approximate the nonlinear constraint(s) and then solve the optimization problem. The accuracy of their results and their computational performance are then compared to identify a suitable algorithm for structural design applications. Using the identified method, sizing optimization of a truss structure under varying load conditions, such as a varying load direction, is considered and solved as a parametric optimization problem to evaluate the performance of the identified algorithm. The results are also compared with non-parametric optimization to assess the accuracy of the solutions and the computational performance of the two methods.
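The sketch below shows only the non-parametric baseline that parametric methods are compared against: re-solving a small sizing problem for each sampled value of the load-direction parameter, which tabulates the optimal areas as a function of that parameter. The two-bar force model and constants are assumptions, not the benchmark problem or the three algorithms from the paper.

```python
# Hedged sketch of the non-parametric baseline: solve the sizing problem anew
# for each sampled load direction theta.
import numpy as np
from scipy.optimize import minimize

P, SIGMA_MAX = 10.0, 100.0            # load magnitude and allowable stress (assumed)

def solve_sizing(theta):
    forces = np.array([P * np.cos(theta), P * np.sin(theta)])   # illustrative member forces
    # minimize total area subject to stress limits |F_i| / a_i <= SIGMA_MAX
    cons = [{"type": "ineq",
             "fun": lambda a, i=i: SIGMA_MAX - abs(forces[i]) / a[i]}
            for i in range(2)]
    res = minimize(lambda a: a.sum(), x0=[1.0, 1.0], method="SLSQP",
                   bounds=[(1e-3, None)] * 2, constraints=cons)
    return res.x

thetas = np.linspace(0.1, np.pi / 2 - 0.1, 9)
areas = np.array([solve_sizing(t) for t in thetas])   # optimal areas vs. theta
print(np.column_stack([thetas, areas]))
```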


Author(s):  
Georg Thierauf ◽  
Jianbo Cai

A method for the solution of mixed-discrete structural optimization problems based on a two-level parallel evolution strategy is presented. On the first level, the optimization problem is divided into two subproblems with discrete and continuous design variables, respectively. The two subproblems are solved simultaneously on a parallel computing architecture. On the second level, each subproblem is further parallelized by means of a parallel sub-evolution strategy. Periodically, the design variables in the two groups are exchanged. Examples are included to demonstrate the implementation of this method on an 8-node parallel computer.
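A sequential, hedged sketch of the decomposition idea: the design vector is split into discrete and continuous groups, each group is improved by its own simple evolutionary step while the other is held fixed, and improvements are exchanged between the groups. The toy objective is assumed, and the parallel execution and full evolution strategies of the paper are omitted.

```python
# Hedged, sequential sketch of the discrete/continuous decomposition with
# simple (1+1)-ES style steps; the paper's parallelization is omitted.
import random

DISCRETE_SET = [1, 2, 3, 4, 5]                 # admissible discrete sizes (assumed)

def objective(d, c):
    # toy mixed-discrete objective standing in for the structural model
    return (d[0] - 3) ** 2 + (d[1] - 2) ** 2 + (c[0] - 1.5) ** 2 + (c[1] + 0.5) ** 2

def mutate_discrete(d):
    d = list(d)
    i = random.randrange(len(d))
    d[i] = random.choice(DISCRETE_SET)
    return d

def mutate_continuous(c, sigma=0.2):
    return [ci + random.gauss(0.0, sigma) for ci in c]

random.seed(0)
d, c = [1, 1], [0.0, 0.0]
best = objective(d, c)
for gen in range(200):
    # discrete subproblem: mutate with the continuous group held fixed
    d_new = mutate_discrete(d)
    if objective(d_new, c) <= best:
        d, best = d_new, objective(d_new, c)
    # continuous subproblem: mutate with the discrete group held fixed
    c_new = mutate_continuous(c)
    if objective(d, c_new) <= best:
        c, best = c_new, objective(d, c_new)
print(d, [round(x, 3) for x in c], round(best, 4))
```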


Author(s):  
Jiantao Liu ◽  
Hae Chang Gea ◽  
Ping An Du

Robust structural design optimization with non-probabilistic uncertainties is often formulated as a two-level optimization problem. The top-level problem is simply to minimize a specified objective function while ensuring that the worst-case solution returned by the second level remains within bounds. The second-level problem is to find the worst-case design under non-probabilistic uncertainty. Although the second-level problem is non-convex, its global optimal solution must be assured in order to guarantee the robustness of the top-level solution. In this paper, a new approach is proposed to solve robust structural optimization problems with non-probabilistic uncertainties. The worst-case design optimization (WCDO) problems at the second level are solved directly by monotonicity analysis, so that global optimality is assured. The robust structural optimization problem is thereby reduced to a single-level problem and can be easily solved by any gradient-based method. To illustrate the proposed approach, truss examples with non-probabilistic uncertainties in stiffness and loading are presented.
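A hedged sketch of the key simplification: when a constraint is monotonic in an interval-uncertain parameter over the whole interval, its worst case lies at an interval endpoint, so the second-level problem reduces to picking endpoints from the sign of the sensitivity rather than solving a non-convex optimization. The constraint and intervals are assumptions for illustration.

```python
# Hedged sketch: endpoint selection via sensitivity signs, assuming the
# constraint is monotonic in each uncertain parameter over its interval.
import numpy as np

def g(x, p):
    # illustrative constraint (stress-like quantity); require g <= 0
    return p[0] * x[0] + p[1] / x[1] - 10.0

def worst_case_g(x, p_lo, p_hi, eps=1e-6):
    p_mid = 0.5 * (p_lo + p_hi)
    p_worst = np.empty_like(p_mid)
    for i in range(p_mid.size):
        pp = p_mid.copy()
        pp[i] += eps
        dgdp = (g(x, pp) - g(x, p_mid)) / eps      # sensitivity sign check
        p_worst[i] = p_hi[i] if dgdp >= 0 else p_lo[i]
    return g(x, p_worst)

x = np.array([2.0, 1.5])
p_lo, p_hi = np.array([1.8, 0.9]), np.array([2.2, 1.1])
print(worst_case_g(x, p_lo, p_hi))   # worst-case value used at the top level
```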


Symmetry ◽  
2020 ◽  
Vol 12 (3) ◽  
pp. 377
Author(s):  
Nimit Nimana

In this work, we consider a bilevel optimization problem consisting of minimizing the sum of two convex functions, one of which is the composition of a convex function with a nonzero linear transformation, over the set of feasible points represented as the common fixed-point set of nonlinear operators. To find an optimal solution to the problem, we present a fixed-point subgradient splitting method and analyze the convergence properties of the proposed method under some additional assumptions. We investigate the application of the proposed method to some well-known problems. Finally, we present some numerical experiments showing the effectiveness of the obtained theoretical results.
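For orientation, the problem class described above can be written as follows, with notation assumed for illustration: f and g convex, A a nonzero linear transformation, and T_i the nonlinear operators whose common fixed points form the feasible set.

```latex
% Notation assumed for illustration, not taken verbatim from the paper.
\[
  \min_{x} \; f(x) + g(Ax)
  \quad \text{subject to} \quad
  x \in \bigcap_{i=1}^{m} \operatorname{Fix}(T_i).
\]
```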

