Pareto Front
Recently Published Documents

TOTAL DOCUMENTS: 657 (FIVE YEARS: 217)
H-INDEX: 33 (FIVE YEARS: 4)

2022 ◽ pp. 3144-3167
Author(s): Ivor van der Hoog ◽ Irina Kostitsyna ◽ Maarten Löffler ◽ Bettina Speckmann

2021
Author(s): Saykat Dutta ◽ Rammohan Mallipeddi ◽ Kedar Nath Das

Abstract In the last decade, numerous Multi/Many-Objective Evolutionary Algorithms (MOEAs) have been proposed to handle Multi/Many-Objective Problems (MOPs) with challenges such as discontinuous or degenerate Pareto Fronts (PFs). MOEAs in the literature can be broadly divided into three categories based on the selection strategy employed: dominance-, decomposition-, and indicator-based MOEAs. Each category of MOEAs has its advantages and disadvantages when solving MOPs with diverse characteristics. In this work, we propose a Hybrid Selection-based MOEA, referred to as HS-MOEA, which is a simple yet effective hybridization of dominance-, decomposition-, and indicator-based concepts. In other words, we propose a new environmental selection strategy in which Pareto dominance, reference vectors, and an indicator are combined to effectively balance the diversity and convergence properties of the MOEA during evolution. The superior performance of HS-MOEA compared to state-of-the-art MOEAs is demonstrated through experimental simulations on the DTLZ and WFG test suites with up to 10 objectives.
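To make the hybrid environmental selection idea concrete, the following minimal sketch (Python/NumPy, with illustrative helper names that are not taken from the paper) combines the three ingredients mentioned above: Pareto dominance to pre-filter candidates, reference-vector association for diversity, and a simple convergence indicator to rank solutions within each niche. The indicator used here (distance to the ideal point) is a placeholder; HS-MOEA's actual indicator and selection details may differ.

```python
"""Minimal sketch of a hybrid environmental selection (illustrative names, not the
paper's code): dominance pre-filtering, reference-vector association, and an
indicator-based ranking inside each niche. Assumes minimization."""
import numpy as np

def nondominated_mask(F):
    # True for rows of F that no other row dominates.
    n = len(F)
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        for j in range(n):
            if i != j and np.all(F[j] <= F[i]) and np.any(F[j] < F[i]):
                mask[i] = False
                break
    return mask

def hybrid_selection(F, ref_vectors, n_survive):
    """F: (pop, m) objectives; ref_vectors: (r, m) directions; returns survivor indices."""
    # 1) Dominance: restrict to the non-dominated set when it is large enough.
    nd = nondominated_mask(F)
    cand = np.where(nd)[0] if nd.sum() >= n_survive else np.arange(len(F))
    Fn = F[cand]
    Fn = (Fn - Fn.min(0)) / (Fn.max(0) - Fn.min(0) + 1e-12)        # normalize objectives
    # 2) Decomposition: associate each candidate with its closest reference vector.
    W = ref_vectors / (np.linalg.norm(ref_vectors, axis=1, keepdims=True) + 1e-12)
    cosine = (Fn @ W.T) / (np.linalg.norm(Fn, axis=1, keepdims=True) + 1e-12)
    assoc = cosine.argmax(axis=1)
    # 3) Indicator: a simple convergence proxy (distance to the ideal point).
    ind = np.linalg.norm(Fn, axis=1)
    # Keep the best candidate of each niche first, then fill globally by indicator.
    chosen = [m[ind[m].argmin()] for m in
              (np.where(assoc == r)[0] for r in range(len(ref_vectors))) if len(m)]
    for i in np.argsort(ind):
        if len(chosen) >= n_survive:
            break
        if i not in chosen:
            chosen.append(i)
    return cand[np.asarray(chosen[:n_survive])]
```

Selecting one solution per reference vector before filling the remaining slots by indicator value is what lets this kind of scheme trade diversity against convergence during evolution.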


Mathematics ◽ 2021 ◽ Vol 10 (1) ◽ pp. 19
Author(s): Saúl Zapotecas-Martínez ◽ Abel García-Nájera ◽ Adriana Menchaca-Méndez

One of the major limitations of evolutionary algorithms based on the Lebesgue measure for multi-objective optimization is the computational cost required to approximate the Pareto front of a problem. Nonetheless, the Pareto-compliance property of the Lebesgue measure makes it one of the most investigated indicators in the design of indicator-based evolutionary algorithms (IBEAs). The main deficiency of IBEAs that use the Lebesgue measure is their computational cost, which increases with the number of objectives of the problem. On this matter, the investigation presented in this paper introduces an evolutionary algorithm based on the Lebesgue measure to deal with box-constrained continuous multi-objective optimization problems. The proposed algorithm implicitly uses the regularity property of continuous multi-objective optimization problems, which has shown effectiveness when solving continuous problems with rough Pareto sets. On the other hand, the survival selection mechanism exploits the local property of the Lebesgue measure, thus reducing the computational time of our algorithmic approach. The resulting indicator-based evolutionary algorithm is examined and compared against three state-of-the-art multi-objective evolutionary algorithms based on the Lebesgue measure. In addition, we validate its performance on a set of artificial test problems with various characteristics, including multimodality, separability, and various Pareto-front shapes incorporating concavity, convexity, and discontinuity. For a more exhaustive study, the proposed algorithm is evaluated on three real-world applications with four, five, and seven objective functions, respectively, whose properties are unknown. We show the high competitiveness of our proposed approach, which, in many cases, improves on the state-of-the-art indicator-based evolutionary algorithms on the multi-objective problems adopted in our investigation.
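For intuition on why the Lebesgue measure (hypervolume) admits a cheap local treatment in low dimensions, the sketch below computes exact per-point hypervolume contributions for a bi-objective, non-dominated front: each contribution depends only on a point's two sorted neighbours, which is the local property that survival selection can exploit. This is a generic illustration under a minimization assumption, not the proposed algorithm itself.

```python
"""Bi-objective hypervolume (Lebesgue measure) contributions for a non-dominated
front under minimization; names and data are illustrative only."""
import numpy as np

def hv_contributions_2d(front, ref_point):
    """Exact exclusive hypervolume contribution of each point of a 2-D front."""
    idx = np.argsort(front[:, 0])                 # sort by the first objective
    F = front[idx]
    contrib = np.empty(len(F))
    for i in range(len(F)):
        # Box bounded by the sorted neighbours (local property: a point's
        # contribution depends only on its two neighbours and the reference point).
        right = ref_point[0] if i == len(F) - 1 else F[i + 1, 0]
        upper = ref_point[1] if i == 0 else F[i - 1, 1]
        contrib[idx[i]] = (right - F[i, 0]) * (upper - F[i, 1])
    return contrib

# Example: drop the least-contributing point during survival selection.
front = np.array([[0.1, 0.9], [0.3, 0.5], [0.6, 0.3], [0.9, 0.1]])
c = hv_contributions_2d(front, ref_point=np.array([1.1, 1.1]))
survivors = np.delete(front, c.argmin(), axis=0)
```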


Computation ◽ 2021 ◽ Vol 9 (12) ◽ pp. 137
Author(s): Walter Gil-González ◽ Oscar Danilo Montoya ◽ Luis Fernando Grisales-Noreña ◽ Andrés Escobar-Mejía

This paper deals with the multi-objective operation of battery energy storage systems (BESS) in AC distribution systems using a convex reformulation. The objective functions considered are CO2 emissions and the cost of the daily energy losses. The conventional non-linear, non-convex branch multi-period optimal power flow model is reformulated as a second-order cone programming (SOCP) model, which ensures that the global optimum is found for each point on the Pareto front. The weighting-factors methodology is used to convert the multi-objective model into a convex single-objective model, which allows the optimal Pareto front to be found through an iterative search. Two operational scenarios regarding the BESS are considered: (i) unity power factor operation and (ii) variable power factor operation. The numerical results demonstrate that including the reactive power capabilities of the BESS reduces CO2 emissions by 200 kg and costs by USD 80 per day of operation. All numerical validations were developed in MATLAB 2020b with the CVX tool and the SEDUMI and SDPT3 solvers.
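The weighting-factors procedure can be illustrated with a small convex toy problem (below, in Python with cvxpy rather than the MATLAB/CVX stack used by the authors): sweeping the weight of a scalarized convex program traces the Pareto front, and convexity guarantees each solve returns a global optimum, mirroring the SOCP argument above. The quadratic objectives and constraints are stand-ins, not the branch power-flow model.

```python
"""Illustrative weighted-sum sweep over a toy convex bi-objective problem (cvxpy);
f1 and f2 are stand-ins for the loss-cost and CO2-emission objectives."""
import numpy as np
import cvxpy as cp

x = cp.Variable(2)
f1 = cp.sum_squares(x - np.array([1.0, 0.0]))   # stand-in for energy-loss cost
f2 = cp.sum_squares(x - np.array([0.0, 1.0]))   # stand-in for CO2 emissions
constraints = [x >= 0, cp.sum(x) <= 1.5]        # toy box/capacity constraints

pareto = []
for w in np.linspace(0.0, 1.0, 21):             # weighting-factor sweep
    prob = cp.Problem(cp.Minimize(w * f1 + (1 - w) * f2), constraints)
    prob.solve()                                # convexity gives a global optimum
    pareto.append((f1.value, f2.value))         # one Pareto point per weight
```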


2021
Author(s): Carlo Cristiano Stabile ◽ Marco Barbiero ◽ Giorgio Fighera ◽ Laura Dovera

Abstract Optimizing well locations for a green field is critical to mitigate development risks. Performing such workflows with reservoir simulations is very challenging due to the huge computational cost. Proxy models can instead provide accurate estimates at a fraction of the computing time. This study presents an application of new-generation functional proxies to optimize the well locations in a real oil field with respect to the actualized oil production over all the different geological realizations. Proxies are built with Universal Trace Kriging and are functional in time, allowing oil flows to be actualized over the asset lifetime. Proxies are trained on reservoir simulations using randomly sampled well locations. Two proxies are created, for a pessimistic model (P10) and a mid-case model (P50), to capture the geological uncertainties. The optimization step uses the Non-dominated Sorting Genetic Algorithm, with the discounted oil productions of the two proxies as objective functions. An adaptive approach was employed: optimized points found in a first optimization were used to re-train the proxy models, and a second optimization run was performed. The methodology was applied to a real oil reservoir to optimize the location of four vertical production wells and compared against reference locations. 111 geological realizations were available, in which one relevant uncertainty is the presence of possible compartments. The decision space, represented by the horizontal translation vectors of each well, was sampled using Plackett-Burman and Latin-Hypercube designs. A first application produced a proxy with poor predictive quality. Redrawing the areas to avoid overlaps and to confine the decision space of each well to one compartment improved the quality. This suggests that the proxy's predictive ability deteriorates in the presence of highly non-linear responses caused by sealing faults or by wells interchanging positions. We then followed a 2-step adaptive approach: a first optimization was performed and the resulting Pareto front was validated with reservoir simulations; to further improve the proxy quality in this region of the decision space, the validated Pareto-front points were added to the initial dataset to retrain the proxy and rerun the optimization. The final well locations were validated on all 111 realizations with reservoir simulations and resulted in an overall increase in discounted production of about 5% compared to the reference development strategy. The adaptive approach, combined with functional proxies, proved successful in improving the workflow by purposefully enriching the training set with data points able to enhance the effectiveness of the optimization step. Each optimization run relies on about 1 million proxy evaluations, which required negligible computational time. The same workflow carried out with standard reservoir simulations would have been practically unfeasible.
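The adaptive proxy loop can be sketched generically as follows (Python), with stand-ins clearly noted: scikit-learn's GaussianProcessRegressor replaces Universal Trace Kriging, a simple random-candidate screen replaces the Non-dominated Sorting Genetic Algorithm, and simulate() is a hypothetical placeholder for the reservoir simulator returning (negated) discounted production. The point is the retraining pattern, in which validated optima are fed back into the training set, not the specific models.

```python
"""Schematic adaptive proxy workflow: train a surrogate, optimize on it cheaply,
validate the candidate optima with the expensive simulator, retrain, repeat."""
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)

def simulate(X):
    # Hypothetical expensive simulator; returns negated discounted production
    # (so that smaller is better for the minimizing screen below).
    return -np.sum(np.sin(3 * X) / (1 + X**2), axis=1)

# Initial design (a Latin-Hypercube in practice) and proxy training.
X_train = rng.uniform(0, 1, size=(30, 4))                  # 4 well-location parameters
y_train = simulate(X_train)
proxy = GaussianProcessRegressor().fit(X_train, y_train)

for cycle in range(2):                                     # 2-step adaptive approach
    # Cheap optimization on the proxy (stand-in for the genetic-algorithm run).
    cand = rng.uniform(0, 1, size=(20_000, 4))
    best = cand[np.argsort(proxy.predict(cand))[:10]]      # top proxy candidates
    # Validate the candidate optima with the true simulator, ...
    y_best = simulate(best)
    # ... then enrich the training set and retrain the proxy.
    X_train = np.vstack([X_train, best])
    y_train = np.concatenate([y_train, y_best])
    proxy = GaussianProcessRegressor().fit(X_train, y_train)
```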


2021
Author(s): Atiya Masood

The Job Shop Scheduling (JSS) problem is considered challenging due to practical requirements such as multiple objectives and the complexity of production flows. JSS has received great attention because of its broad applicability in real-world situations. One of the prominent solution approaches to handling JSS problems is to design effective dispatching rules. Dispatching rules are investigated broadly in both academic and industrial environments because they are easy to implement (by computers and shop-floor operators) with a low computational cost. However, the manual development of dispatching rules is time-consuming and requires expert knowledge of the scheduling environment. The hyper-heuristic approach that uses genetic programming (GP) to solve JSS problems is known as GP-based hyper-heuristic (GP-HH), and it is a very useful approach for discovering dispatching rules automatically.

Although it is technically simple to consider only single-objective optimization for JSS, it is now widely evidenced in the literature that JSS by nature presents several potentially conflicting objectives, including maximal flowtime, mean flowtime, and mean tardiness. A few studies in the literature attempt to solve many-objective JSS with more than three objectives, but existing studies have some major limitations. First, many-objective JSS problems have been solved by multi-objective evolutionary algorithms (MOEAs). However, recent studies have suggested that the performance of conventional MOEAs is prone to the scalability challenge and degrades dramatically on many-objective optimization problems (MaOPs). Many-objective JSS using MOEAs inherits the same challenge as MaOPs; thus, using MOEAs for many-objective JSS problems often fails to select quality dispatching rules. Second, although the reference-point method is one of the most prominent and efficient methods for diversity maintenance in many-objective problems, it uses a uniform distribution of reference points, which is only appropriate for a regular Pareto front. JSS problems, however, often have irregular Pareto fronts, and uniformly distributed reference points do not match them well, resulting in many useless points during evolution. These useless points can significantly affect the performance of reference-point-based algorithms and cannot help to enhance the solution diversity of the evolved Pareto front in many-objective JSS problems. Third, Pareto Local Search (PLS) is a prominent and effective local search method for handling multi-objective JSS optimization problems, but no existing studies in the literature use PLS in GP-HH.

To address these limitations, this thesis's overall goal is to develop GP-HH approaches that evolve effective rules to handle many conflicting objectives simultaneously in JSS problems.

To achieve the first goal, this thesis proposes the first many-objective GP-HH method for JSS problems to find the Pareto fronts of non-dominated dispatching rules. Decision-makers can utilize this GP-HH method to select appropriate rules based on their preference over multiple conflicting objectives. This study combines GP with the fitness evaluation scheme of a many-objective reference-point-based approach. The experimental results show that the proposed algorithm significantly outperforms MOEAs such as NSGA-II and SPEA2.

To achieve the second goal, this thesis proposes two adaptive reference-point approaches (model-free and model-driven). In both approaches, the reference points are generated according to the distribution of the evolved dispatching rules. The model-free reference-point adaptation approach is inspired by Particle Swarm Optimization (PSO). The model-driven approach constructs a density model and estimates the density of solutions in each defined sub-location of the whole objective space. Furthermore, the model-driven approach smooths the model by applying a Gaussian Process model and calculating the area under the mean function; the mean-function area helps to find the required number of reference points for each mean function. The experimental results demonstrate that both adaptive approaches are significantly better than several state-of-the-art MOEAs.

To achieve the third goal, the thesis proposes the first algorithm that combines GP as a global search with PLS as a local search in many-objective JSS. The proposed algorithm introduces an effective fitness-based selection strategy for selecting initial individuals for neighborhood exploration, defines a proper neighborhood structure for GP, and introduces a new selection mechanism for selecting effective dispatching rules during the local search. The experimental results on the JSS benchmark problem show that the newly proposed algorithm significantly outperforms its baseline algorithm (GP-NSGA-III).
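As background for the reference-point machinery the thesis builds on, the sketch below shows a standard uniform (Das-Dennis style) reference-point generation and the association of solutions to reference lines by perpendicular distance; normalization and niching details are simplified. The adaptive approaches proposed in the thesis replace this uniform layout with points generated from the distribution of the evolved rules.

```python
"""Uniform reference-point generation and reference-line association (NSGA-III
style background, not the thesis's adaptive method)."""
from itertools import combinations
import numpy as np

def das_dennis(n_partitions, n_obj):
    """Uniform simplex-lattice reference points (the layout adaptive methods replace)."""
    pts = []
    for c in combinations(range(n_partitions + n_obj - 1), n_obj - 1):
        cuts = np.array((-1,) + c + (n_partitions + n_obj - 1,))
        pts.append((np.diff(cuts) - 1) / n_partitions)
    return np.array(pts)

def associate(F, ref_points):
    """Index of the reference line with minimum perpendicular distance per solution."""
    Fn = (F - F.min(0)) / (F.max(0) - F.min(0) + 1e-12)      # naive normalization
    W = ref_points / (np.linalg.norm(ref_points, axis=1, keepdims=True) + 1e-12)
    proj = Fn @ W.T                                          # scalar projections
    d2 = (Fn**2).sum(1, keepdims=True) - proj**2             # squared perpendicular distances
    return d2.argmin(axis=1)
```

With an irregular Pareto front, many of these uniformly placed lines attract no solutions at all, which is exactly the "useless reference points" issue the adaptive approaches address.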


2021
Author(s): Javier Eusebio Gomez ◽ Marcelo Robles ◽ Cristian Di Giuseppe ◽ Federico Galliano ◽ Jeronimo Centineo ◽ ...

Abstract This paper presents the process and results of the application of Data Physics to optimize production of a mature field in the Gulf of San Jorge Basin in Argentina. Data Physics is a novel technology that blends the reservoir physics (black oil) used in traditional numerical simulation with machine learning and advanced optimization techniques. Data Physics was described in detail in a prior paper (Sarma et al., SPE-185507-MS) as a physics-based modeling approach augmented by machine learning. In essence, historical production and injection data are assimilated using an Ensemble Kalman Filter (EnKF) to infer the petrophysical parameters and create a predictive model of the field. This model is then used with Evolutionary Algorithms (EA) to find the Pareto front for multiple optimization objectives such as production, injection, and NPV. Ultimately, the main objective of Data Physics is to enable closed-loop optimization. The technology was applied on a small section of a very large field in the Gulf of San Jorge comprising 134 wells, including 83 active producers and 27 active water injectors; up to 12 mandrels per well are used to provide selective injection, while production is carried out in a commingled manner. Production zonal allocation is calculated using an in-house process based on swabbing tests and recovery factors and is used as input to the Data Physics application, while injection allocation is based on tracer logs performed in each injection well twice a year. This paper describes the modeling and optimization phases as well as the implementation in the field and the results obtained after performing two closed-loop optimization cycles. The initial model was developed between October and December 2018, and the initial field implementation took place between January and March 2019. A second optimization cycle was then executed in January 2020, and results were observed for several months.
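The assimilation step referred to above can be illustrated with a textbook stochastic Ensemble Kalman Filter update (Python/NumPy); this is the generic formulation, not the vendor's implementation, and obs_op is a hypothetical forward model mapping petrophysical parameters to predicted production data.

```python
"""Generic stochastic EnKF analysis step: an ensemble of parameter vectors is
corrected with observed production data d_obs."""
import numpy as np

def enkf_update(X, d_obs, obs_op, obs_err_std, rng):
    """X: (n_param, n_ens) parameter ensemble; obs_op: callable mapping one
    parameter vector to predicted observations; obs_err_std: observation noise."""
    n_ens = X.shape[1]
    HX = np.column_stack([obs_op(X[:, j]) for j in range(n_ens)])   # predicted data
    # Ensemble anomalies (deviations from the ensemble mean).
    A = X - X.mean(axis=1, keepdims=True)
    HA = HX - HX.mean(axis=1, keepdims=True)
    R = (obs_err_std**2) * np.eye(len(d_obs))
    # Kalman gain from ensemble covariances: K = C_xh (C_hh + R)^-1.
    K = (A @ HA.T) @ np.linalg.inv(HA @ HA.T + (n_ens - 1) * R)
    # Perturbed-observation update of every ensemble member.
    D = d_obs[:, None] + obs_err_std * rng.standard_normal(HX.shape)
    return X + K @ (D - HX)
```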



Mathematics ◽ 2021 ◽ Vol 9 (24) ◽ pp. 3152
Author(s): Carine M. Rebello ◽ Márcio A. F. Martins ◽ Daniel D. Santana ◽ Alírio E. Rodrigues ◽ José M. Loureiro ◽ ...

This work presents a novel approach for multiobjective optimization problems, extending the concept of the Pareto front to a new idea of the Pareto region. This new concept comprises all the points beyond the Pareto front that lead to the same optimal condition with statistical assurance. The region is built using a Fisher–Snedecor test over an augmented Lagrangian function, for which deductions are proposed here. The test is meant to provide an approximate depiction of the feasible operating region, using meta-heuristic optimization results to extract this information. To do so, a Constrained Sliding Particle Swarm Optimizer (CSPSO) was applied to solve a series of four benchmarks and a case study. The proposed test analyzed the CSPSO results, and the novel Pareto regions were estimated. Over this Pareto region, a clustering strategy was also developed and applied to define sub-regions that prioritize one of the objectives and an intermediary region that provides a balance between objectives. This is a valuable tool in the context of process optimization, supporting assertive decision-making. As this is a novel concept, the only way to compare it was to draw the entire regions of the benchmark functions and compare them with the result of the methodology. The benchmark results demonstrated that the proposed method can efficiently portray the Pareto regions. The optimization of a Pressure Swing Adsorption unit was then performed using the proposed approach to provide a practical application of the methodology developed here. It was possible to build the Pareto region and its respective sub-regions, in each of which one process performance parameter is prioritized. The results demonstrated that this methodology can be helpful in process optimization and operation, providing more flexibility and a more profound knowledge of the system under evaluation.
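A heavily hedged sketch of the acceptance idea behind the Pareto region is given below: candidate points are retained when their merit (e.g., augmented-Lagrangian) value is statistically indistinguishable from the best one according to a Fisher–Snedecor threshold. The exact test statistic, degrees of freedom, and deductions used by the authors are not reproduced here; this only conveys the mechanism.

```python
"""Illustrative F-test style acceptance into a 'Pareto region': keep points whose
merit ratio to the optimum stays under the critical Fisher-Snedecor value.
The specific statistic and degrees of freedom are assumptions, not the paper's."""
import numpy as np
from scipy.stats import f as fisher_snedecor

def pareto_region(points, merit_values, n_dof, alpha=0.05):
    """points: (n, d) candidates; merit_values: positive merit per point."""
    L = np.asarray(merit_values, dtype=float)
    L_best = L.min()                                   # best (assumed positive) merit
    # Critical F value at confidence level (1 - alpha).
    F_crit = fisher_snedecor.ppf(1 - alpha, n_dof, n_dof)
    accept = (L / L_best) <= F_crit                    # statistically equivalent points
    return points[accept]
```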

