Design strategies for multi-objective optimization of aerodynamic surfaces

2017 ◽  
Vol 34 (5) ◽  
pp. 1724-1753 ◽  
Author(s):  
Anand Amrit ◽  
Leifur Leifsson ◽  
Slawomir Koziel

Purpose This paper aims to investigate several design strategies for solving multi-objective aerodynamic optimization problems using high-fidelity simulations. The purpose is to find strategies which reduce the overall optimization time while still maintaining accuracy at the high-fidelity level. Design/methodology/approach Design strategies are proposed that use an algorithmic framework composed of search space reduction, fast surrogate models constructed using a combination of physics-based surrogates and kriging, and global refinement of the Pareto front with co-kriging. The strategies either search the full or a reduced design space with a low-fidelity model or a physics-based surrogate. Findings Numerical investigations of airfoil shapes in two-dimensional transonic flow are used to characterize and compare the strategies. The results show that searching a reduced design space produces the same Pareto front as searching the full space. Moreover, as the reduced space is two orders of magnitude smaller (volume-wise), the number of samples required to set up the surrogates can be reduced by an order of magnitude. Consequently, the computational time is reduced from over three days to less than half a day. Originality/value The proposed design strategies are novel and holistic. The strategies render multi-objective design of aerodynamic surfaces using high-fidelity simulation data in moderately sized search spaces computationally tractable.
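
The surrogate construction described above (a cheap low-fidelity or physics-based model corrected by kriging) can be illustrated with a minimal sketch. The toy functions f_high and f_low and the sampling plan are placeholders, not the transonic airfoil models used by the authors, and scikit-learn's Gaussian process regressor stands in for a dedicated kriging/co-kriging code.

```python
# Minimal sketch of a kriging-corrected surrogate (illustrative only; the
# airfoil models and sampling plan in the paper are replaced by toy functions).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def f_high(x):   # placeholder for an expensive high-fidelity simulation
    return np.sin(3.0 * x) + 0.3 * x**2

def f_low(x):    # placeholder for a cheap low-fidelity / physics-based model
    return np.sin(3.0 * x)

# A few expensive samples in the (reduced) design space
X_train = np.linspace(0.0, 2.0, 8).reshape(-1, 1)
discrepancy = f_high(X_train).ravel() - f_low(X_train).ravel()

# Kriging model of the low-/high-fidelity discrepancy
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True)
gp.fit(X_train, discrepancy)

def f_surrogate(x):
    """Fast surrogate: low-fidelity prediction plus kriging correction."""
    x = np.atleast_2d(x)
    return f_low(x).ravel() + gp.predict(x)

x_test = np.linspace(0.0, 2.0, 5).reshape(-1, 1)
print(np.c_[f_high(x_test).ravel(), f_surrogate(x_test)])
```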

Author(s):  
Tingting Xia ◽  
Mian Li

Multi-objective optimization problems (MOOPs) with uncertainties are common in engineering design. To find robust Pareto fronts, multi-objective robust optimization (MORO) methods with inner-outer optimization structures usually have high computational complexity, which is a critical issue. Generally, in design problems, robust Pareto solutions lie closer to the nominal Pareto points than randomly initialized points do. The search for robust solutions can therefore be more efficient if it starts from the nominal Pareto points. We propose a new method, sequentially approaching the robust Pareto front (SARPF) from the nominal Pareto points, in which MOOPs with uncertainties are solved in two stages. The deterministic optimization problem and the robustness-metric optimization are solved in the first stage, where the nominal Pareto solutions and the robust-most solutions are identified, respectively. In the second stage, a new single-objective robust optimization problem is formulated to find the robust Pareto solutions, starting from the nominal Pareto points, in the region between the nominal Pareto front and the robust-most points. The proposed SARPF method can save a significant amount of computational time since the optimization process can be performed in parallel at each stage. Vertex estimation is also applied to approximate the worst-case uncertain parameter values, which reduces the computational effort further. Global solvers, NSGA-II for the multi-objective cases and a genetic algorithm (GA) for the single-objective cases, are used in the corresponding optimization processes. Three examples, with comparisons against results from a previous method, are presented to demonstrate the applicability and efficiency of the proposed method.
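
A minimal sketch of the vertex estimation mentioned above, assuming a box-shaped uncertainty set: the worst case of an objective is approximated by evaluating it at the corners of the box. The quadratic objective is only a stand-in for the paper's test problems.

```python
# Sketch of vertex estimation for the worst case over a box of uncertain
# parameters (toy objective; not the test problems used in the paper).
from itertools import product
import numpy as np

def f(x, p):
    # placeholder objective: design variables x, uncertain parameters p
    return (x[0] - p[0])**2 + (x[1] - p[1])**2

def worst_case(x, p_nominal, delta):
    """Approximate max_p f(x, p) by checking the 2^k vertices of the box
    [p_nominal - delta, p_nominal + delta]."""
    vertices = product(*[(pn - d, pn + d) for pn, d in zip(p_nominal, delta)])
    return max(f(x, np.array(v)) for v in vertices)

x = np.array([1.0, 2.0])
print(worst_case(x, p_nominal=[0.5, 0.5], delta=[0.1, 0.2]))
```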


Author(s):  
Brandon Brown ◽  
Tarunraj Singh ◽  
Rahul Rai

This paper presents a method to identify the exact Pareto front of a multi-objective optimization problem. The developed technique addresses the identification of the Pareto frontier in the cost space and the Pareto set in the design space for both constrained and unconstrained optimization problems. The proposed approach identifies an (n − 1)-dimensional hypersurface for a multi-objective problem with n cost functions, a subset of which constitutes the Pareto front. The (n − 1)-dimensional hypersurface is identified by enforcing a singularity constraint on the Jacobian of the cost vector with respect to the optimization parameters. Since the boundary is identified in the design space, the relation of design points to the exact Pareto front in the cost space is known. The proposed method proves effective in identifying the Pareto front for a set of previously published challenge problems. Six of these examples are included in this paper: three unconstrained and three constrained.
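
A minimal sketch of the singularity condition, assuming two toy cost functions of two design variables (not the challenge problems referenced above): points on the candidate hypersurface make the Jacobian of the cost vector rank-deficient, i.e. its determinant vanishes.

```python
# Sketch of the Jacobian-singularity condition for two toy cost functions of
# two design variables: candidate boundary points satisfy det(J) = 0, where
# J = d[f1, f2]/d[x1, x2].
import numpy as np

def costs(x):
    f1 = x[0]**2 + x[1]**2          # placeholder cost functions, not the
    f2 = (x[0] - 1.0)**2 + x[1]**2  # challenge problems from the paper
    return np.array([f1, f2])

def jacobian(x, h=1e-6):
    J = np.zeros((2, 2))
    for j in range(2):
        dx = np.zeros(2); dx[j] = h
        J[:, j] = (costs(x + dx) - costs(x - dx)) / (2.0 * h)
    return J

def singularity_residual(x):
    """Zero (to numerical precision) on the candidate hypersurface."""
    return np.linalg.det(jacobian(x))

print(singularity_residual(np.array([0.3, 0.0])))   # on the x2 = 0 line: ~0
print(singularity_residual(np.array([0.3, 0.5])))   # off the line: nonzero
```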


Mathematics ◽  
2021 ◽  
Vol 10 (1) ◽  
pp. 19
Author(s):  
Saúl Zapotecas-Martínez ◽  
Abel García-Nájera ◽  
Adriana Menchaca-Méndez

One of the major limitations of evolutionary algorithms based on the Lebesgue measure for multi-objective optimization is the computational cost required to approximate the Pareto front of a problem. Nonetheless, the Pareto-compliance property of the Lebesgue measure makes it one of the most investigated indicators in the design of indicator-based evolutionary algorithms (IBEAs). The main deficiency of IBEAs that use the Lebesgue measure is their computational cost, which increases with the number of objectives of the problem. On this matter, the investigation presented in this paper introduces an evolutionary algorithm based on the Lebesgue measure for box-constrained continuous multi-objective optimization problems. The proposed algorithm implicitly exploits the regularity property of continuous multi-objective optimization problems, which has been found effective when solving continuous problems with rough Pareto sets. On the other hand, the survival selection mechanism considers the local property of the Lebesgue measure, thus reducing the computational time of our algorithmic approach. The resulting indicator-based evolutionary algorithm is examined and compared against three state-of-the-art multi-objective evolutionary algorithms based on the Lebesgue measure. In addition, we validate its performance on a set of artificial test problems with various characteristics, including multimodality, separability, and various Pareto front forms, incorporating concavity, convexity, and discontinuity. For a more exhaustive study, the proposed algorithm is evaluated on three real-world applications having four, five, and seven objective functions whose properties are unknown. We show the high competitiveness of our proposed approach, which, in many cases, improves upon the state-of-the-art indicator-based evolutionary algorithms on the multi-objective problems adopted in our investigation.
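
For reference, the Lebesgue measure (hypervolume) of a two-objective non-dominated set can be computed with a simple sweep; the points and reference point below are illustrative, not taken from the paper's test problems.

```python
# Sketch of the Lebesgue measure (hypervolume) of a 2-objective non-dominated
# set for minimization, relative to a reference point (illustrative values).
def hypervolume_2d(front, ref):
    """front: list of (f1, f2) mutually non-dominated points (minimization);
    ref: reference point dominated by every front member."""
    pts = sorted(front)                       # ascending f1, hence descending f2
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        hv += (ref[0] - f1) * (prev_f2 - f2)  # rectangle added by this point
        prev_f2 = f2
    return hv

front = [(0.1, 0.9), (0.4, 0.5), (0.8, 0.2)]
print(hypervolume_2d(front, ref=(1.0, 1.0)))  # 0.09 + 0.24 + 0.06 = 0.39
```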


Author(s):  
Shahrokh Shahpar ◽  
David Giacche ◽  
Leigh Lapworth

This paper describes the development of an automated design optimization system that makes use of a high-fidelity Reynolds-averaged CFD analysis procedure to minimize the fan forcing and fan BOGV (bypass outlet guide vane) losses simultaneously, taking into account the downstream pylon and RDF (radial drive fairing) distortions. The design space consists of the OGV's stagger angle, trailing-edge recambering, and axial and circumferential positions, leading to a variable-pitch optimum design. An advanced optimization system called SOFT (Smart Optimisation for Turbomachinery) was used to integrate a number of pre-processor, simulation and in-house grid generation codes and post-processor programs. A number of multi-objective, multi-point optimizations were carried out by SOFT on a cluster of workstations and are reported herein.


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Ramazan Özkan ◽  
Mustafa Serdar Genç

Purpose Wind turbines are one of the best candidates to solve the problem of increasing energy demand in the world. The aim of this paper is to apply a multi-objective structural optimization study to a Phase II wind turbine blade produced by the National Renewable Energy Laboratory to obtain a more efficient small-scale wind turbine. Design/methodology/approach To solve this structural optimization problem, a new Non-Dominated Sorting Genetic Algorithm (NSGA-II) was applied. In the optimization study, the objective function was the minimization of the mass and cost of the blade, and the design parameters were the composite material type and the spar cap layer number. The design constraints were deformation, strain, stress, natural frequency and failure criteria. The ANSYS Composite PrepPost (ACP) module was used to model the composite materials of the blade. Moreover, a fluid-structure interaction (FSI) model in ANSYS was used to carry out flow and structural analysis on the blade. Findings As a result, a new original blade was designed using the multi-objective structural optimization study, the NSGA-II algorithm adapted for aerodynamic optimization, and FSI. The mass of three selected optimized blades using carbon composite decreased by as much as 6.6%, 11.9% and 14.3%, respectively, while their costs increased by 23.1%, 29.9% and 38.3%. This multi-objective structural optimization-based study indicates that the composite configuration of the blade could be altered to reach the desired weight and cost for production. Originality/value The ACP module is a novel and advanced composite modeling technique. This is a novel study in that it presents the NSGA-II algorithm, adapted for aerodynamic optimization, together with FSI. Unlike other studies, the complex composite layup, fiber directions and layer orientations were defined using the ACP module, and the composite blade was analyzed for both aerodynamic pressure and structural design using the ACP and FSI modules together.
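
The selection step at the core of NSGA-II can be sketched as follows; the (mass, cost) values are hypothetical placeholders and the naive sorting below is illustrative only, not the ANSYS-coupled workflow used in the paper.

```python
# Minimal sketch of the non-dominated sorting used in NSGA-II
# (generic two-objective minimization; not the ANSYS blade model itself).
def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(objs):
    """Return fronts as lists of indices; front 0 is the current Pareto set."""
    remaining = set(range(len(objs)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(objs[j], objs[i]) for j in remaining if j != i)]
        fronts.append(front)
        remaining -= set(front)
    return fronts

# Hypothetical (mass [kg], cost [$]) pairs of candidate blade designs
population = [(5.0, 300.0), (5.5, 250.0), (6.0, 220.0), (5.2, 320.0)]
print(non_dominated_sort(population))  # [[0, 1, 2], [3]]
```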


Author(s):  
Zhenkun Wang ◽  
Qingyan Li ◽  
Qite Yang ◽  
Hisao Ishibuchi

It has been acknowledged that dominance-resistant solutions (DRSs) extensively exist in the feasible region of multi-objective optimization problems. Recent studies show that DRSs can cause serious performance degradation of many multi-objective evolutionary algorithms (MOEAs). Thereafter, various strategies (e.g., ε-dominance and the modified objective calculation) to eliminate DRSs have been proposed. However, these strategies may in turn cause algorithm inefficiency in other aspects. We argue that these coping strategies prevent the algorithm from obtaining some boundary solutions of an extremely convex Pareto front (ECPF). That is, there is a dilemma between eliminating DRSs and preserving boundary solutions of the ECPF. To illustrate such a dilemma, we propose a new multi-objective optimization test problem with the ECPF as well as DRSs. Using this test problem, we investigate the performance of six representative MOEAs in terms of boundary solutions preservation and DRS elimination. The results reveal that it is quite challenging to distinguish between DRSs and boundary solutions of the ECPF.
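
A minimal sketch of one common additive ε-dominance variant, one of the DRS-coping strategies mentioned above (the ε value and objective vectors are illustrative): under plain Pareto dominance the balanced point cannot eliminate the dominance-resistant solution, but under ε-dominance it can.

```python
# Sketch of additive epsilon-dominance for minimization (illustrative values).
def eps_dominates(a, b, eps):
    """a epsilon-dominates b if a is no worse than b by more than eps in
    every objective and strictly better in at least one (a common variant)."""
    return (all(ai <= bi + eps for ai, bi in zip(a, b))
            and any(ai < bi for ai, bi in zip(a, b)))

# A dominance-resistant solution is near-optimal in one objective but
# extremely poor in another; plain dominance cannot remove it because the
# balanced point is slightly worse (0.05 > 0.0) in the first objective.
balanced = (0.05, 0.5)
drs      = (0.0, 50.0)
print(eps_dominates(balanced, drs, eps=0.1))   # True: the DRS is eliminated
print(eps_dominates(drs, balanced, eps=0.1))   # False
```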


Author(s):  
Weijun Wang ◽  
Stéphane Caro ◽  
Fouad Bennis ◽  
Oscar Brito Augusto

For a Multi-Objective Robust Optimization Problem (MOROP), it is important to obtain design solutions that are both optimal and robust. To find these solutions, the designer usually needs to set a threshold on the variation of the Performance Functions (PFs) before optimization, or to add the effects of uncertainties to the original PFs to generate a new robust Pareto front. In this paper, we divide a MOROP into two Multi-Objective Optimization Problems (MOOPs). One is the original MOOP; the other takes the Robustness Functions (RFs), the robust counterparts of the original PFs, as optimization objectives. After solving these two MOOPs separately, two sets of solutions are obtained, namely the Pareto Performance Solutions (PP) and the Pareto Robustness Solutions (PR). Processing these two sets further, we obtain two types of solutions, namely the Pareto Robustness Solutions among the Pareto Performance Solutions (PR(PP)) and the Pareto Performance Solutions among the Pareto Robustness Solutions (PP(PR)). Furthermore, the intersection of PR(PP) and PP(PR) represents the intersection of PR and PP well. The designer can then choose good solutions by comparing the results of PR(PP) and PP(PR). Thanks to this method, we can find the optimal and robust solutions without setting a threshold on the variation of the PFs or losing the initial Pareto front. Finally, an illustrative example highlights the contributions of the paper.
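
The filtering step that produces PR(PP) can be sketched as a non-dominated filter applied to the robustness vectors of the Pareto Performance Solutions; the numerical values below are placeholders, not the paper's example.

```python
# Sketch of extracting PR(PP): among the Pareto Performance Solutions (PP),
# keep those that are non-dominated with respect to the Robustness Functions.
def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_filter(items, key):
    """Return the members of items whose key(item) vector is non-dominated."""
    return [it for it in items
            if not any(dominates(key(other), key(it)) for other in items if other is not it)]

# Each PP member carries its performance vector pf and robustness vector rf.
PP = [{"pf": (1.0, 4.0), "rf": (0.2, 0.9)},
      {"pf": (2.0, 3.0), "rf": (0.5, 0.4)},
      {"pf": (3.0, 2.0), "rf": (0.3, 0.3)}]

PR_of_PP = pareto_filter(PP, key=lambda s: s["rf"])
print([s["pf"] for s in PR_of_PP])
```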


Author(s):  
Marco Baldan ◽  
Alexander Nikanorov ◽  
Bernard Nacke

Purpose Reliable modeling of induction hardening requires a multi-physical approach, which makes it time-consuming. In designing an induction hardening system, combining such a model with an optimization technique allows managing a high number of design variables. However, this can lead to a tremendous overall computational cost. This paper aims to reduce the computational time of an optimal design problem by making use of multi-fidelity modeling and parallel computing. Design/methodology/approach In the multi-fidelity framework, the "high-fidelity" model couples the electromagnetic, thermal and metallurgical fields. It predicts the phase transformations during both the heating and cooling stages. The "low-fidelity" model is instead limited to the heating step. Its inaccuracy is counterbalanced by its cheapness, which makes it suitable for exploring the design space in optimization. The use of co-kriging then allows merging information from the different fidelity models and predicting good design candidates. Field evaluations of both models occur in parallel. Findings In the design of an induction heating system, the synergy between the "high-fidelity" and "low-fidelity" models, together with the use of surrogates and parallel computing, can reduce the overall computational cost by up to one order of magnitude. Practical implications On the one hand, multi-physical modeling of induction hardening implies a better understanding of the process, resulting in further potential process improvements. On the other hand, the optimization technique can be applied to many other computationally intensive real-life problems. Originality/value This paper highlights how parallel multi-fidelity optimization can be used in designing an induction hardening system.
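
The parallel field evaluations mentioned above can be sketched with a process pool; the sleep-based stand-ins replace the coupled electromagnetic-thermal-metallurgical model and the heating-only model.

```python
# Sketch of evaluating two fidelity levels in parallel (illustrative stand-ins
# only; not the induction-hardening field solvers used in the paper).
import time
from concurrent.futures import ProcessPoolExecutor

def high_fidelity(design):
    time.sleep(0.5)    # stand-in for the coupled EM-thermal-metallurgical run
    return sum(design) ** 2

def low_fidelity(design):
    time.sleep(0.05)   # stand-in for the heating-only model
    return sum(design) ** 2 * 1.1

if __name__ == "__main__":
    designs = [(0.1 * i, 0.2 * i) for i in range(4)]
    with ProcessPoolExecutor() as pool:
        lo = pool.map(low_fidelity, designs)       # cheap exploration samples
        hi = pool.map(high_fidelity, designs[:2])  # few expensive anchor points
        print(list(lo), list(hi))
```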


2019 ◽  
Vol 13 (4) ◽  
pp. 804-827 ◽  
Author(s):  
Achala Jain ◽  
Anupama P. Huddar

Purpose The purpose of this paper is to solve the economic emission dispatch problem for wind in combination with hydro-thermal units. Design/methodology/approach The proposed hybrid methodology is the joint execution of the modified salp swarm optimization algorithm (MSSA) and an artificial intelligence technique aided by the particle swarm optimization (PSO) technique. Findings The proposed approach is introduced to determine the optimal power generated by the thermal units, wind farms and hydro units by simultaneously minimizing the emission level and the cost of generation. The best compromise solution for the generation power outputs and the related gas emissions is subject to the equality and inequality constraints of the system. Here, MSSA is used to generate the optimal combination of thermal generators with the objective of minimizing the fuel and emission objective functions. The proposed method also considers the wind speed probability factor, via a PSO-artificial neural network (ANN) technique, and hydro power generation at peak load demand to ensure economic utilization. Originality/value To validate the advantage of the proposed approach, six- and ten-unit thermal systems are studied with fuel and emission costs. The proposed approach is used to minimize the fuel and emission costs of the thermal system with the predicted wind speed factor. The proposed approach is implemented in MATLAB/Simulink, and the results are examined considering the generation units and compared with various solution techniques. The comparison reveals the competitiveness of the proposed approach and demonstrates its capability for handling multi-objective optimization problems of power systems.
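
The economic emission dispatch objectives named above are commonly written as quadratic fuel-cost and emission functions with a power-balance constraint; the sketch below uses hypothetical coefficients (not the six- or ten-unit data from the paper) and a simple penalized scalarization of the kind a GA or PSO search could minimize.

```python
# Sketch of quadratic fuel-cost and emission objectives for economic emission
# dispatch; all coefficients and the demand are hypothetical placeholders.
import numpy as np

a = np.array([100.0, 120.0, 90.0])    # fuel-cost coefficients:  a + b*P + c*P^2
b = np.array([2.00, 1.80, 2.10])
c = np.array([0.010, 0.012, 0.008])
alpha = np.array([4.0, 3.5, 4.2])     # emission coefficients: alpha + beta*P + gamma*P^2
beta  = np.array([-0.05, -0.04, -0.06])
gamma = np.array([0.0010, 0.0012, 0.0009])
P_demand = 300.0                      # MW, power-balance (equality) constraint

def fuel_cost(P):
    return np.sum(a + b * P + c * P**2)

def emission(P):
    return np.sum(alpha + beta * P + gamma * P**2)

def penalized(P, w=0.5, rho=1e3):
    """Weighted fuel/emission compromise with a power-balance penalty."""
    return (w * fuel_cost(P) + (1.0 - w) * emission(P)
            + rho * (np.sum(P) - P_demand) ** 2)

print(penalized(np.array([100.0, 100.0, 100.0])))
```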


2005 ◽  
Vol 128 (4) ◽  
pp. 874-883 ◽  
Author(s):  
Mian Li ◽  
Shapour Azarm ◽  
Art Boyars

We present a deterministic, non-gradient-based approach that uses robustness measures in multi-objective optimization problems where uncontrollable parameter variations cause variation in the objective and constraint values. The approach is applicable for cases that have discontinuous objective and constraint functions with respect to uncontrollable parameters, and can be used for objective or feasibility robust optimization, or both together. In our approach, the known parameter tolerance region maps into sensitivity regions in the objective and constraint spaces. The robustness measures are indices calculated, using an optimizer, from the sizes of the acceptable objective and constraint variation regions and from worst-case estimates of the sensitivity regions' sizes, resulting in an outer-inner structure. Two examples provide comparisons of the new approach with a similar published approach that is applicable only with continuous functions. Both approaches work well with continuous functions. For discontinuous functions the new approach gives solutions near the nominal Pareto front; the earlier approach does not.
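
A minimal sketch of an objective-robustness index in the spirit described above, with the inner worst-case search replaced by random sampling of the parameter tolerance region and toy functions standing in for the paper's examples: the index compares the worst-case objective variation with the acceptable variation.

```python
# Sketch of an objective-robustness index: worst-case objective variation over
# the parameter tolerance region (estimated here by sampling rather than an
# inner optimizer) divided by the acceptable variation.  Toy functions only.
import numpy as np

rng = np.random.default_rng(0)

def f(x, p):
    # placeholder objective with an uncontrollable parameter vector p
    return x[0]**2 + x[1]**2 + p[0] * x[0] + p[1]

def robustness_index(x, p_nominal, tol, delta_f_acceptable, n_samples=2000):
    """< 1 means the worst-case objective variation stays within the
    acceptable range (objective robustness)."""
    f0 = f(x, p_nominal)
    P = p_nominal + rng.uniform(-tol, tol, size=(n_samples, len(p_nominal)))
    worst_variation = np.max(np.abs([f(x, p) - f0 for p in P]))
    return worst_variation / delta_f_acceptable

x = np.array([1.0, 0.5])
print(robustness_index(x, p_nominal=np.array([0.0, 0.0]),
                       tol=np.array([0.2, 0.2]), delta_f_acceptable=0.5))
```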

