A New Deterministic Approach Using Sensitivity Region Measures for Multi-Objective Robust and Feasibility Robust Design Optimization

2005 ◽  
Vol 128 (4) ◽  
pp. 874-883 ◽  
Author(s):  
Mian Li ◽  
Shapour Azarm ◽  
Art Boyars

We present a deterministic non-gradient based approach that uses robustness measures in multi-objective optimization problems where uncontrollable parameter variations cause variation in the objective and constraint values. The approach is applicable for cases that have discontinuous objective and constraint functions with respect to uncontrollable parameters, and can be used for objective or feasibility robust optimization, or both together. In our approach, the known parameter tolerance region maps into sensitivity regions in the objective and constraint spaces. The robustness measures are indices calculated, using an optimizer, from the sizes of the acceptable objective and constraint variation regions and from worst-case estimates of the sensitivity regions’ sizes, resulting in an outer-inner structure. Two examples provide comparisons of the new approach with a similar published approach that is applicable only with continuous functions. Both approaches work well with continuous functions. For discontinuous functions the new approach gives solutions near the nominal Pareto front; the earlier approach does not.
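
A minimal numerical sketch of the outer-inner structure described above, under stated assumptions: the quadratic objective f(x, p), the box tolerance on p, and the acceptable variation delta_f are all hypothetical, and SciPy's differential evolution and SLSQP stand in for the authors' optimizers. The inner problem estimates the worst-case objective deviation over the tolerance region; the outer problem accepts only designs whose robustness index stays at or below one.

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize

# Hypothetical objective with an uncontrollable parameter p (not the paper's test problems).
def f(x, p):
    return (x[0] - 2.0) ** 2 + x[1] ** 2 + p[0] * x[0] + 0.5 * p[1] * x[1]

p_nominal = np.array([0.0, 0.0])
p_tol = 0.3       # box tolerance: |p_i - p_nominal_i| <= 0.3 (assumed)
delta_f = 0.5     # assumed acceptable objective variation

def worst_case_deviation(x):
    """Inner problem: maximize |f(x, p) - f(x, p_nominal)| over the tolerance box."""
    neg = lambda p: -abs(f(x, p) - f(x, p_nominal))
    bounds = [(p_nominal[i] - p_tol, p_nominal[i] + p_tol) for i in range(2)]
    res = differential_evolution(neg, bounds, seed=0, tol=1e-8)
    return -res.fun

def robustness_index(x):
    """Index <= 1 means the sensitivity region fits inside the acceptable variation region."""
    return worst_case_deviation(x) / delta_f

# Outer problem: minimize the nominal objective subject to the robustness index <= 1.
cons = {"type": "ineq", "fun": lambda x: 1.0 - robustness_index(x)}
res = minimize(lambda x: f(x, p_nominal), x0=np.array([0.0, 0.0]),
               bounds=[(-5, 5), (-5, 5)], constraints=cons, method="SLSQP")
print(res.x, f(res.x, p_nominal), robustness_index(res.x))
```

Nesting a global search inside a constraint is expensive; it is shown here only to make the outer-inner structure explicit.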


Author(s):  
Tingting Xia ◽  
Mian Li

Abstract Multi-objective optimization problems (MOOPs) with uncertainties are common in engineering design. To find robust Pareto fronts, multi-objective robust optimization (MORO) methods with inner–outer optimization structures usually have high computational complexity, which is a critical issue. Generally, in design problems, robust Pareto solutions lie closer to the nominal Pareto points than randomly initialized points do, so the search for robust solutions can be more efficient if it starts from the nominal Pareto points. We propose a new method that sequentially approaches the robust Pareto front (SARPF) from the nominal Pareto points, solving MOOPs with uncertainties in two stages. The deterministic optimization problem and the robustness metric optimization problem are solved in the first stage, where the nominal Pareto solutions and the robust-most solutions are identified, respectively. In the second stage, a new single-objective robust optimization problem is formulated to find the robust Pareto solutions, starting from the nominal Pareto points, in the region between the nominal Pareto front and the robust-most points. The proposed SARPF method can save a significant amount of computational time since the optimization process can be performed in parallel at each stage. Vertex estimation is also applied to approximate the worst-case uncertain parameter values, which reduces the computational effort further. The global solvers NSGA-II, for the multi-objective cases, and a genetic algorithm (GA), for the single-objective cases, are used in the corresponding optimization processes. Three examples, with comparisons to results from a previous method, are presented to demonstrate the applicability and efficiency of the proposed method.
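
A minimal sketch of the vertex-estimation idea mentioned above, under stated assumptions: the objective f(x, p) and the tolerance values are illustrative, not the paper's examples. Instead of running an inner optimization, the worst-case deviation over a box tolerance region is approximated by evaluating only the 2^m vertices of the box.

```python
import itertools
import numpy as np

# Hypothetical objective with uncontrollable parameters p.
def f(x, p):
    return (x[0] - 1.0) ** 2 + x[1] ** 2 + np.sin(p[0]) * x[0] + p[1] * x[1]

def worst_case_at_vertices(x, p_nominal, p_tol):
    """Approximate worst-case deviation by checking only the vertices of the tolerance box."""
    vertices = itertools.product(*[(pi - ti, pi + ti) for pi, ti in zip(p_nominal, p_tol)])
    nominal = f(x, p_nominal)
    return max(abs(f(x, np.array(v)) - nominal) for v in vertices)

x = np.array([0.8, 0.2])
print(worst_case_at_vertices(x, np.array([0.0, 0.0]), np.array([0.3, 0.3])))
```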


Author(s):  
Todd Letcher ◽  
M.-H. Herman Shen

A multi-objective robust optimization framework that incorporates a robustness index for each objective has been developed in a bi-level approach. The top level of the framework consists of the standard optimization problem formulation with the addition of a robustness constraint. The bottom level uses the Worst Case Sensitivity Region (WCSR) concept previously developed to solve single-objective robust optimization problems. In this framework, a separate robustness index for each objective allows the designer to choose the importance of each objective. The method is demonstrated on a commonly studied two-bar truss structural optimization problem, and the results demonstrate the effectiveness and usefulness of the multiple robustness indices added to this framework. A multi-objective genetic algorithm, NSGA-II, is used in both levels of the framework.
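
A minimal sketch of the bi-level idea with one robustness index per objective, under stated assumptions: the two objectives, the tolerance p_tol, and the acceptable variations delta are hypothetical, and a weighted-sum scalarization with SciPy stands in for the NSGA-II used in the paper. The bottom level estimates each objective's worst-case deviation (a proxy for its sensitivity-region size); the top level treats "index_i <= 1" as constraints.

```python
import numpy as np
from scipy.optimize import minimize, differential_evolution

def objectives(x, p):
    f1 = (x[0] - 1.0) ** 2 + x[1] ** 2 + p * x[0]
    f2 = x[0] ** 2 + (x[1] - 1.0) ** 2 + p * x[1]
    return np.array([f1, f2])

p_tol, delta = 0.3, np.array([0.4, 0.4])   # assumed tolerance and acceptable variations

def wcsr_indices(x):
    """Bottom level: one worst-case deviation (normalized) per objective."""
    devs = []
    for i in range(2):
        neg = lambda p, i=i: -abs(objectives(x, p[0])[i] - objectives(x, 0.0)[i])
        res = differential_evolution(neg, [(-p_tol, p_tol)], seed=0, tol=1e-8)
        devs.append(-res.fun)
    return np.array(devs) / delta

# Top level: weighted-sum surrogate of the multi-objective problem plus robustness constraints.
w = 0.5
cons = {"type": "ineq", "fun": lambda x: 1.0 - wcsr_indices(x)}  # elementwise >= 0
res = minimize(lambda x: w * objectives(x, 0.0)[0] + (1 - w) * objectives(x, 0.0)[1],
               x0=np.array([0.5, 0.5]), bounds=[(-2, 2), (-2, 2)],
               constraints=cons, method="SLSQP")
print(res.x, wcsr_indices(res.x))
```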


Author(s):  
Tingting Xia ◽  
Mian Li

Abstract Multi-objective optimization problems (MOOPs) with uncertainties are common in engineering design problems. To find the robust Pareto fronts, multi-objective robust optimization methods with inner-outer optimization structures generally have high computational complexity, which is an important issue to address. Experience suggests that robust Pareto solutions usually lie near the nominal Pareto points, so the search for robust solutions can be more efficient if it starts from the obtained nominal Pareto points. In this paper, we propose a method that sequentially approaches the robust Pareto front (SARPF) from the nominal Pareto points. MOOPs are solved by the SARPF in two optimization stages. The deterministic optimization problem and the robustness metric optimization problem are solved in the first stage, where the nominal Pareto solutions and the robust-most solutions are found, respectively. In the second stage, a new single-objective robust optimization problem is formulated to find the robust Pareto solutions, starting from the nominal Pareto points, in the region between the nominal Pareto front and the robust-most points. The proposed SARPF method can save a significant amount of computation time since the optimization process can be performed in parallel at each stage. Vertex estimation is also applied to approximate the worst-case uncertain parameter values, which saves computational effort further. The global solvers NSGA-II, for the multi-objective case, and a genetic algorithm (GA), for the single-objective case, are used in the corresponding optimization processes. Two examples, with comparisons to a previous method, are presented to demonstrate the applicability and efficiency of the proposed method.
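
A minimal sketch of the second-stage idea of moving from a nominal Pareto design toward a robust-most design, under stated assumptions: the toy objectives, the two starting designs, the sampled tolerance box, and the acceptance threshold are all illustrative and this is not the authors' exact single-objective formulation. The sketch accepts the first point on the segment whose sampled worst-case objective deviation stays below the threshold.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x, p):
    return np.array([(x[0] - 1.0) ** 2 + p[0] * x[1], (x[1] - 1.0) ** 2 + p[1] * x[0]])

def worst_case_deviation(x, p_tol=0.2, n_samples=256):
    """Sampled estimate of the worst-case objective deviation over the tolerance box."""
    p = rng.uniform(-p_tol, p_tol, size=(n_samples, 2))
    nominal = f(x, np.zeros(2))
    return max(np.linalg.norm(f(x, pi) - nominal) for pi in p)

x_nominal = np.array([1.0, 1.0])      # a nominal Pareto design (assumed)
x_robust_most = np.array([0.5, 0.5])  # the robust-most design (assumed)
threshold = 0.25                      # assumed acceptable deviation

for t in np.linspace(0.0, 1.0, 21):
    x = (1 - t) * x_nominal + t * x_robust_most
    if worst_case_deviation(x) <= threshold:
        print("accepted design:", x, "at step", t)
        break
```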


Author(s):  
Zhenkun Wang ◽  
Qingyan Li ◽  
Qite Yang ◽  
Hisao Ishibuchi

Abstract It has been acknowledged that dominance-resistant solutions (DRSs) extensively exist in the feasible region of multi-objective optimization problems. Recent studies show that DRSs can cause serious performance degradation of many multi-objective evolutionary algorithms (MOEAs). Thereafter, various strategies (e.g., the ε-dominance and the modified objective calculation) to eliminate DRSs have been proposed. However, these strategies may in turn cause algorithm inefficiency in other aspects. We argue that these coping strategies prevent the algorithm from obtaining some boundary solutions of an extremely convex Pareto front (ECPF). That is, there is a dilemma between eliminating DRSs and preserving boundary solutions of the ECPF. To illustrate such a dilemma, we propose a new multi-objective optimization test problem with the ECPF as well as DRSs. Using this test problem, we investigate the performance of six representative MOEAs in terms of boundary solutions preservation and DRS elimination. The results reveal that it is quite challenging to distinguish between DRSs and boundary solutions of the ECPF.
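
A minimal sketch of additive ε-dominance, one of the coping strategies mentioned above, for a minimization problem; the numeric vectors below are illustrative, not taken from the paper's test problem. It shows how a dominance-resistant solution is eliminated, but also how a boundary solution of a convex front can be removed by a nearby interior solution, which is the dilemma the paper discusses.

```python
import numpy as np

def eps_dominates(a, b, eps=0.1):
    """a epsilon-dominates b (minimization): a_i - eps <= b_i for all i, strict for some i."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return np.all(a - eps <= b) and np.any(a - eps < b)

boundary = np.array([0.0, 1.0])   # a boundary solution of a convex front
interior = np.array([0.05, 0.9])  # a nearby interior solution
drs = np.array([0.0, 50.0])       # a dominance-resistant solution: optimal in f1, very poor in f2

print(eps_dominates(interior, drs))       # True: the DRS is eliminated
print(eps_dominates(interior, boundary))  # True as well: the boundary solution is also removed
```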


Author(s):  
Weijun Wang ◽  
Stéphane Caro ◽  
Fouad Bennis ◽  
Oscar Brito Augusto

For a Multi-Objective Robust Optimization Problem (MOROP), it is important to obtain design solutions that are both optimal and robust. To find these solutions, the designer usually needs to set a threshold on the variation of the Performance Functions (PFs) before optimization, or to add the effects of uncertainties to the original PFs to generate a new robust Pareto front. In this paper, we divide a MOROP into two Multi-Objective Optimization Problems (MOOPs). One is the original MOOP; the other takes the Robustness Functions (RFs), the robust counterparts of the original PFs, as optimization objectives. Solving these two MOOPs separately yields two sets of solutions, namely the Pareto Performance Solutions (PP) and the Pareto Robustness Solutions (PR). Processing these two sets further, we obtain two types of solutions: the Pareto Robustness Solutions among the Pareto Performance Solutions (PR(PP)), and the Pareto Performance Solutions among the Pareto Robustness Solutions (PP(PR)). Furthermore, the intersection of PR(PP) and PP(PR) represents the intersection of PR and PP well. The designer can then choose good solutions by comparing the results of PR(PP) and PP(PR). With this method, we can find optimal and robust solutions without setting a threshold on the variation of the PFs or losing the initial Pareto front. Finally, an illustrative example highlights the contributions of the paper.
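
A minimal sketch of the two-set filtering idea, under stated assumptions: the candidate designs and the toy performance values PF and robustness values RF (both minimized) are fabricated for illustration. PP is the Pareto set under PF, PR the Pareto set under RF; PR(PP) is the robustness-nondominated subset of PP, PP(PR) the performance-nondominated subset of PR, and their intersection approximates the intersection of PP and PR.

```python
import numpy as np

def nondominated(values):
    """Indices of nondominated rows (all objectives minimized)."""
    values = np.asarray(values, float)
    keep = []
    for i, v in enumerate(values):
        dominated = any(np.all(values[j] <= v) and np.any(values[j] < v)
                        for j in range(len(values)) if j != i)
        if not dominated:
            keep.append(i)
    return keep

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(50, 2))                                 # hypothetical designs
PF = np.column_stack([X[:, 0], 1 - X[:, 0] + 0.1 * X[:, 1]])        # toy performance functions
RF = np.column_stack([np.abs(X[:, 1] - 0.5), X[:, 0] * X[:, 1]])    # toy robustness functions

PP = nondominated(PF)
PR = nondominated(RF)
PR_of_PP = [PP[i] for i in nondominated(RF[PP])]
PP_of_PR = [PR[i] for i in nondominated(PF[PR])]
print(sorted(set(PR_of_PP) & set(PP_of_PR)))  # designs good in both senses (may be empty here)
```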


Author(s):  
Eliot Rudnick-Cohen

Abstract Multi-objective decision-making problems can sometimes involve an infinite number of objectives. In this paper, an approach is presented for solving multi-objective optimization problems containing an infinite number of parameterized objectives, termed “infinite objective optimization”. A formulation is given for infinite objective optimization problems, and an approach for checking whether a Pareto frontier is a solution to this formulation is detailed. Using this approach, a new sampling-based approach is developed for solving infinite objective optimization problems. The new approach is tested on several example problems and is shown to be faster and to perform better than a brute-force approach.
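
A minimal sketch of the sampling idea for a family of objectives parameterized by theta in [0, 1], under stated assumptions: the toy objective (distance to a target moving along a quarter circle), the design set, and the number of theta samples are all illustrative and not taken from the paper. A finite sample of theta values turns the infinite-objective problem into an ordinary finite MOOP, whose nondominated designs are kept.

```python
import numpy as np

def f(x, theta):
    # one objective per theta: distance to a target point that moves with theta
    target = np.array([np.cos(np.pi * theta / 2), np.sin(np.pi * theta / 2)])
    return np.linalg.norm(x - target)

rng = np.random.default_rng(2)
designs = rng.uniform(-1, 1, size=(200, 2))
thetas = rng.uniform(0, 1, size=20)                     # finite sample of the infinite objectives
F = np.array([[f(x, t) for t in thetas] for x in designs])

nondom = [i for i, fi in enumerate(F)
          if not any(np.all(F[j] <= fi) and np.any(F[j] < fi) for j in range(len(F)) if j != i)]
print(len(nondom), "designs survive the sampled-objective dominance check")
```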


2014 ◽  
Vol 945-949 ◽  
pp. 2241-2247
Author(s):  
De Gao Zhao ◽  
Qiang Li

This paper deals with the application of the Non-dominated Sorting Genetic Algorithm with elitism (NSGA-II) to multi-objective optimization problems in the design of a vehicle-borne radar antenna pedestal. Five technical improvements are proposed to address the disadvantages of NSGA-II: (1) a new method for calculating the fitness of individuals in the population; (2) a renewed definition of the crowding distance; (3) a threshold for choosing elitists; (4) removal of some redundant sorting processes; (5) a self-adaptive arithmetic crossover and mutation probability. The modified algorithm can lead to better population diversity than the original NSGA-II. Simulation results demonstrate the rationality and validity of the modified NSGA-II, and a uniformly distributed Pareto front can be obtained with it. Finally, a multi-objective problem of designing a vehicle-borne radar antenna pedestal is solved with the modified algorithm.
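
For reference, a minimal sketch of the standard NSGA-II crowding distance, the baseline quantity that improvement (2) above redefines; the small front used at the end is illustrative, not from the radar antenna pedestal problem.

```python
import numpy as np

def crowding_distance(F):
    """Standard NSGA-II crowding distance for one nondominated front (rows = solutions)."""
    F = np.asarray(F, float)
    n, m = F.shape
    dist = np.zeros(n)
    for k in range(m):
        order = np.argsort(F[:, k])
        dist[order[0]] = dist[order[-1]] = np.inf   # boundary points are always kept
        span = F[order[-1], k] - F[order[0], k]
        if span == 0:
            continue
        for i in range(1, n - 1):
            dist[order[i]] += (F[order[i + 1], k] - F[order[i - 1], k]) / span
    return dist

front = np.array([[0.0, 1.0], [0.2, 0.7], [0.5, 0.5], [0.7, 0.2], [1.0, 0.0]])
print(crowding_distance(front))
```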


Author(s):  
Alexandre Medi ◽  
Tenda Okimoto ◽  
Katsumi Inoue ◽  
...  

A Distributed Constraint Optimization Problem (DCOP) is a fundamental problem that can formalize various applications related to multi-agent cooperation, and many application problems in multi-agent systems can be formalized as DCOPs. However, many real-world optimization problems involve multiple criteria that should be considered separately and optimized simultaneously. A Multi-Objective Distributed Constraint Optimization Problem (MO-DCOP) is an extension of a mono-objective DCOP. Compared to DCOPs, there exist few works on MO-DCOPs. In this paper, we develop a novel complete algorithm for solving an MO-DCOP. This algorithm utilizes a widely used method called Pareto Local Search (PLS) to generate an approximation of the Pareto front. The obtained information is then used to guide the search thresholds in a Branch and Bound algorithm. In the evaluations, we measure the runtime of our algorithm and show empirically that using a Pareto front approximation obtained by a PLS algorithm significantly speeds up the search in the Branch and Bound algorithm.
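
A minimal sketch of the Pareto Local Search loop mentioned above, on a toy centralized binary problem with two objectives rather than a DCOP; the bit-string representation, the one-bit-flip neighborhood, and the objectives are all assumptions made for illustration. It only shows the archive-and-neighborhood mechanism that produces the Pareto front approximation used to seed the Branch and Bound search.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10
w = rng.uniform(0, 1, n)

def objectives(x):
    # two conflicting objectives, both minimized: favor many ones vs. favor weighted zeros
    return np.array([-x.sum(), -(w * (1 - x)).sum()])

def dominates(a, b):
    return np.all(a <= b) and np.any(a < b)

archive = [rng.integers(0, 2, n)]   # start from one random solution
explored = set()
while True:
    todo = [tuple(x) for x in archive if tuple(x) not in explored]
    if not todo:
        break
    x = np.array(todo[0])
    explored.add(tuple(x))
    for i in range(n):               # one-bit-flip neighborhood
        y = x.copy()
        y[i] ^= 1
        if any(np.array_equal(y, a) for a in archive):
            continue
        fy = objectives(y)
        if not any(dominates(objectives(a), fy) for a in archive):
            archive = [a for a in archive if not dominates(fy, objectives(a))] + [y]
print(len(archive), "archive members approximate the Pareto front")
```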


Author(s):  
Jesper Kristensen ◽  
You Ling ◽  
Isaac Asher ◽  
Liping Wang

Adaptive sampling methods have been used to build accurate meta-models across large design spaces from which engineers can explore data trends, investigate optimal designs, study the sensitivity of objectives to the modeling design features, etc. For global design optimization applications, adaptive sampling methods need to be extended to sample more efficiently near the optimal domains of the design space (i.e., the Pareto front/frontier in multi-objective optimization). Expected Improvement (EI) methods have been shown to be efficient for solving design optimization problems with meta-models by incorporating prediction uncertainty. In this paper, a set of state-of-the-art methods (the hypervolume EI method and the centroid EI method) is presented and implemented for selecting sampling points for multi-objective optimizations. The classical hypervolume EI method uses hyperrectangles to represent the Pareto front, which shows undesirable behavior at the tails of the Pareto front. This issue is addressed by utilizing concepts from physical programming to shape the Pareto front. The modified hypervolume EI method can be extended to increase local Pareto front accuracy in any area identified by an engineer, and this method can be applied to Pareto frontiers of any shape. A novel hypervolume EI method is also developed that does not rely on the assumption of hyperrectangles, but instead assumes the Pareto frontier can be represented by a convex hull. The method exploits fast methods for convex hull construction and numerical integration, and results in a Pareto front shape that is desired in many practical applications. Various performance metrics are defined in order to quantitatively compare and discuss all methods applied to a particular 2D optimization problem from the literature. The modified hypervolume EI methods lead to dramatic resource savings while improving the predictive capabilities near the optimal objective values.
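
A minimal sketch of the 2D hyperrectangle-based hypervolume-improvement quantity that hypervolume EI methods reward (both objectives minimized); the front, reference point, and candidate below are illustrative, not from the paper's test case, and the prediction-uncertainty (expectation) part of EI is omitted.

```python
import numpy as np

def hypervolume_2d(points, ref):
    """Area dominated by `points` with respect to `ref` (both objectives minimized)."""
    pts = sorted(points, key=lambda p: (p[0], p[1]))
    front, best_f2 = [], float("inf")
    for f1, f2 in pts:                       # keep only nondominated points
        if f2 < best_f2:
            front.append((f1, f2))
            best_f2 = f2
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in front:                     # sum the slab areas left of each point
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv

front = [(0.1, 0.9), (0.4, 0.5), (0.8, 0.2)]
ref = (1.0, 1.0)
candidate = (0.3, 0.45)
improvement = hypervolume_2d(front + [candidate], ref) - hypervolume_2d(front, ref)
print(improvement)   # hypervolume gained by adding the candidate to the current front
```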

