pareto optimal
Recently Published Documents


TOTAL DOCUMENTS

1502
(FIVE YEARS 359)

H-INDEX

49
(FIVE YEARS 7)

2022 ◽  
Vol 48 ◽  
pp. 103803
Author(s):  
Markus Mühlbauer ◽  
Fabian Rang ◽  
Herbert Palm ◽  
Oliver Bohlen ◽  
Michael A. Danzer

2022 ◽  
Author(s):  
Ahlem Aboud ◽  
Nizar Rokbani ◽  
Raja Fdhila ◽  
Abdulrahman M. Qahtani ◽  
Omar Almutiry ◽  
...  

Particle swarm optimization systems based on a distributed architecture over multiple sub-swarms have shown their efficiency for static optimization but have not been studied for solving dynamic multi-objective problems (DMOPs). When solving a DMOP, tracking the best solutions over time while ensuring good exploitation and exploration are the main challenges. This study proposes a novel Dynamic Pareto bi-level Multi-Objective Particle Swarm Optimization (DPb-MOPSO) algorithm comprising two parallel optimization levels. At the first level, all solutions are managed in a single search space. When a dynamic change is detected in the objective values, the Pareto ranking operator is used to subdivide the population into multiple sub-swarms for processing, which drives the second level of enhanced exploitation. A dynamic handling strategy based on random detectors tracks changes in the objective functions due to time-varying parameters. A response strategy that re-evaluates all unimproved solutions and replaces them with newly generated ones is also implemented. Inverted generational distance (IGD), mean inverted generational distance (MIGD), and hypervolume difference (HVD) metrics are used to assess DPb-MOPSO's performance. All quantitative results are analyzed using Friedman's analysis of variance, while the Lyapunov theorem is used for stability analysis. Compared with several multi-objective evolutionary algorithms, DPb-MOPSO is robust in solving 21 complex problems over a range of changes in both the Pareto optimal set and the Pareto optimal front. For the 13 UDF and ZJZ functions, DPb-MOPSO solves 8/13 on IGD and 7/13 on HVD under moderate changes. For the 8 FDA and dMOP benchmarks, DPb-MOPSO resolves 4/8 under severe changes on MIGD, and 5/8 under moderate and slight changes. Across the 3 kinds of environmental change, DPb-MOPSO solves 4/8 of the functions on the IGD and HVD metrics.
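The Pareto ranking step that triggers the sub-swarm subdivision is not spelled out in the abstract; the following is a minimal Python sketch of one standard way to compute Pareto ranks (fronts) for a population, assuming minimization and assuming each front is then assigned to its own sub-swarm. The function and variable names are illustrative, not taken from the paper.

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_rank(objectives):
    """Assign each solution a Pareto rank: 0 for the non-dominated front,
    1 for the next front once front 0 is removed, and so on."""
    remaining = set(range(len(objectives)))
    ranks = [None] * len(objectives)
    rank = 0
    while remaining:
        front = {i for i in remaining
                 if not any(dominates(objectives[j], objectives[i])
                            for j in remaining if j != i)}
        for i in front:
            ranks[i] = rank
        remaining -= front
        rank += 1
    return ranks

# Group particles into sub-swarms by front, e.g. after a change is detected
objs = [(1.0, 4.0), (2.0, 2.0), (3.0, 3.0), (4.0, 1.0), (5.0, 5.0)]
sub_swarms = {}
for i, r in enumerate(pareto_rank(objs)):
    sub_swarms.setdefault(r, []).append(i)
```

In a DPb-MOPSO-style scheme, rank 0 would be the current non-dominated front, and each sub-swarm could then be exploited in parallel at the second level.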


Author(s):  
Bhupinder Singh Saini ◽  
Michael Emmerich ◽  
Atanu Mazumdar ◽  
Bekir Afsar ◽  
Babooshka Shavazipour ◽  
...  

We introduce novel concepts for solving multiobjective optimization problems involving (computationally) expensive function evaluations and propose a new interactive method called O-NAUTILUS. It combines ideas of trade-off-free search and navigation (where a decision maker sees changes in objective function values in real time) and extends the NAUTILUS Navigator method to surrogate-assisted optimization. Importantly, it utilizes uncertainty quantification from surrogate models like Kriging, or properties like Lipschitz continuity, to approximate a so-called optimistic Pareto optimal set. This enables the decision maker to search in unexplored parts of the Pareto optimal set while requiring only a small number of expensive function evaluations. We share the implementation of O-NAUTILUS as open source code. Thanks to its graphical user interface, a decision maker can see in real time how the preferences provided affect the direction of the search. We demonstrate the potential and benefits of O-NAUTILUS with a problem related to the design of vehicles.
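The "optimistic Pareto optimal set" idea can be illustrated with a small sketch: given a Kriging-style surrogate that returns a mean and a standard deviation per objective, form an optimistic (lower-confidence-bound) value for each objective and keep the points that are non-dominated under those optimistic values. This is an assumption-laden illustration, not O-NAUTILUS's actual implementation; `kappa` and the function names are invented for the example.

```python
def dominates(a, b):
    """True if a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def optimistic_front(means, stds, kappa=2.0):
    """Indices of candidates that are non-dominated under optimistic objective
    values mean - kappa*std, where mean/std per objective come from a surrogate
    such as Kriging (kappa controls how optimistic the bound is)."""
    opt = [tuple(m - kappa * s for m, s in zip(mu, sigma))
           for mu, sigma in zip(means, stds)]
    return [i for i in range(len(opt))
            if not any(dominates(opt[j], opt[i]) for j in range(len(opt)) if j != i)]

# Three candidates: the third is dominated even under an optimistic bound
means = [(1.0, 2.0), (2.0, 1.0), (2.0, 2.0)]
stds = [(0.1, 0.1), (0.1, 0.1), (0.1, 0.1)]
keep = optimistic_front(means, stds)  # -> [0, 1]
```

The optimistic bound deliberately over-credits uncertain points, which is what lets a decision maker steer the search toward unexplored regions before spending expensive evaluations there.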


2022 ◽  
Vol 10 (1) ◽  
pp. 236-245
Author(s):  
Ahmed Abed ◽  
Om El Khaiat Moustachi

2021 ◽  
Vol 2021 ◽  
pp. 1-16
Author(s):  
Hugo Monzón Maldonado ◽  
Hernán Aguirre ◽  
Sébastien Verel ◽  
Arnaud Liefooghe ◽  
Bilel Derbel ◽  
...  

Achieving a high-resolution approximation and hitting the Pareto optimal set with some, if not all, members of the population is the goal of multi- and many-objective optimization, all the more so in real-world applications where there is also the desire to extract knowledge about the problem from this set. The task requires not only reaching the Pareto optimal set but also continuing to discover new solutions even once the population is filled with them, particularly in many-objective problems, where the population may not be able to accommodate the full Pareto optimal set. In this work, our goal is to investigate tools for understanding the behavior of algorithms once they converge, and how their population size and the particularities of their selection mechanisms aid or hinder their ability to keep finding optimal solutions. Using features that examine the population composition during the search process, we study the algorithms' behavior and dynamics and extract insights. Features are defined in terms of dominance status, membership in the Pareto optimal set, recentness of discovery, and replacement of optimal solutions. Complementing the feature study, we also look at the approximation through the accumulated number of Pareto optimal solutions found and its relationship to a common metric, the hypervolume. To generate the data for analysis, the chosen problem is MNK-landscapes, with settings that make it easy to converge and enumerable for instances with 3 to 6 objectives. The studied algorithms were selected from representative multi- and many-objective optimization approaches: Pareto dominance, relaxation of Pareto dominance, indicator-based, and decomposition.
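Population-composition features of the kind described above reduce to simple set operations when the full Pareto optimal set is known, as it is for the enumerable MNK-landscape instances used here. A minimal sketch, assuming solutions use hashable encodings; the feature names are illustrative, not the paper's.

```python
def population_features(population, pareto_optimal_set, previously_found):
    """Per-generation composition features, assuming hashable solution
    encodings and a known (enumerated) Pareto optimal set."""
    pop = set(population)
    in_ps = pop & set(pareto_optimal_set)    # members currently on the Pareto optimal set
    newly_found = in_ps - previously_found   # optimal solutions first discovered this generation
    return {
        "frac_optimal": len(in_ps) / len(pop),
        "frac_new_optimal": len(newly_found) / len(pop),
        # accumulated count of distinct Pareto optimal solutions found so far
        "accumulated_optimal": len(previously_found | in_ps),
    }

feats = population_features(
    population=["s1", "s2", "s3", "s4"],
    pareto_optimal_set={"s1", "s2", "s9"},
    previously_found={"s1"},
)
```

Tracking `accumulated_optimal` over generations is what lets the study relate continued discovery of optimal solutions to the hypervolume metric.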


2021 ◽  
Vol 3 (4) ◽  
Author(s):  
Arnaud Z. Dragicevic ◽  
Serge Garcia

Public authorities frequently mandate public or private agencies to manage their renewable natural resources. Unlike the agency, which is an expert in renewable natural resource management, public authorities usually do not know the sustainable level of harvest. In this note, we first model the contractual relationship between a principal, who owns the renewable natural resource, and an agent, who holds private information on its sustainable level of harvest. We then look for the Pareto-optimal allocations. Under imperfect information, we find that the Pareto-optimal contract depends on the probability that the harvesting level falls outside the sustainability interval. The information rent held by the agent turns out to be unavoidable, such that stepping outside the sustainability interval implies the possibility of depletion of the renewable natural resource. This, in turn, compromises the maintenance of the ecological balance in natural ecosystems.


2021 ◽  
Author(s):  
Ahlem Aboud ◽  
Nizar Rokbani ◽  
Raja Fdhila ◽  
Abdulrahman M. Qahtani ◽  
Omar Almutiry ◽  
...  

Particle swarm optimization systems based on a distributed architecture have shown their efficiency for static optimization but have not been studied for solving dynamic multiobjective problems (DMOPs). When solving a DMOP, tracking the best solutions over time while ensuring good exploitation and exploration are the main challenges. This study proposes a novel Dynamic Pareto bi-level Multi-Objective Particle Swarm Optimization (DPb-MOPSO) algorithm comprising two parallel optimization levels. At the first level, all solutions are managed in a single search space. When a dynamic change is detected, the Pareto ranking operator is used to subdivide the population into multiple sub-swarms for processing, which drives the second level of enhanced exploitation. A dynamic handling strategy based on random detectors tracks changes in the objective functions due to time-varying parameters. A response strategy that re-evaluates all unimproved solutions and replaces them with newly generated ones is also implemented. Inverted generational distance (IGD), mean inverted generational distance (MIGD), and hypervolume difference (HVD) metrics are used to assess DPb-MOPSO's performance. All quantitative results are analyzed using Friedman's analysis, while the Lyapunov theorem is used for stability analysis. Compared with several multi-objective evolutionary algorithms, DPb-MOPSO is robust in solving 21 complex problems over a range of changes in both the Pareto optimal set and the Pareto optimal front. For the 13 UDF and ZJZ functions, DPb-MOPSO solves 8/13 on IGD and 7/13 on HVD under moderate changes. For the 8 FDA and dMOP benchmarks, DPb-MOPSO resolves 4/8 under severe changes on MIGD, and 5/8 under moderate and slight changes. Across the 3 kinds of environmental change, DPb-MOPSO solves 4/8 of the functions on the IGD and HVD metrics.


Materials ◽  
2021 ◽  
Vol 14 (24) ◽  
pp. 7746
Author(s):  
Kishan Fuse ◽  
Rakesh Chaudhari ◽  
Jay Vora ◽  
Vivek K. Patel ◽  
Luis Norberto Lopez de Lacalle

Machining of the titanium alloy Ti6Al4V has become increasingly important owing to its enhanced engineering properties and its essential role in biomedical, aerospace, and many other industries. In the current study, a Box–Behnken design of the response surface methodology (RSM) was used to investigate the performance of abrasive water jet machining (AWJM) of Ti6Al4V. For process parameter optimization, a systematic strategy combining RSM with a heat-transfer search (HTS) algorithm was investigated. The nozzle traverse speed (Tv), abrasive mass flow rate (Af), and stand-off distance (Sd) were selected as AWJM variables, whereas the material removal rate (MRR), surface roughness (SR), and kerf taper angle (θ) were considered as output responses. Statistical models were developed for the responses, and analysis of variance (ANOVA) was executed to determine their robustness. The single-objective optimization yielded a maximum MRR of 0.2304 g/min (at Tv of 250 mm/min, Af of 500 g/min, and Sd of 1.5 mm), a minimum SR of 2.99 µm, and a minimum θ of 1.72 (both of the latter responses at Tv of 150 mm/min, Af of 500 g/min, and Sd of 1.5 mm). A multi-objective HTS algorithm was then implemented, and Pareto optimal points were produced. 2D and 3D plots were generated from the Pareto optimal points, highlighting the non-dominated feasible solutions. The effectiveness of the suggested model in predicting and optimizing the AWJM variables was demonstrated. The surface morphology of the machined surfaces was investigated using a scanning electron microscope, and a confirmation test with the optimized cutting parameters validated the results.


Author(s):  
Bingqian Lu ◽  
Jianyi Yang ◽  
Weiwen Jiang ◽  
Yiyu Shi ◽  
Shaolei Ren

Convolutional neural networks (CNNs) are used in numerous real-world applications such as vision-based autonomous driving and video content analysis. To run CNN inference on various target devices, hardware-aware neural architecture search (NAS) is crucial. A key requirement of efficient hardware-aware NAS is the fast evaluation of inference latencies in order to rank different architectures. While building a latency predictor for each target device has been common in the state of the art, this is a very time-consuming process that lacks scalability in the presence of extremely diverse devices. In this work, we address the scalability challenge by exploiting latency monotonicity: the architecture latency rankings on different devices are often correlated. When strong latency monotonicity exists, we can re-use architectures searched for one proxy device on new target devices without losing optimality. In the absence of strong latency monotonicity, we propose an efficient proxy adaptation technique to significantly boost it. Finally, we validate our approach and conduct experiments with devices of different platforms on multiple mainstream search spaces, including MobileNet-V2, MobileNet-V3, NAS-Bench-201, ProxylessNAS, and FBNet. Our results highlight that, by using just one proxy device, we can find almost the same Pareto-optimal architectures as existing per-device NAS, while avoiding the prohibitive cost of building a latency predictor for each device.
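Latency monotonicity between two devices can be quantified with Spearman's rank correlation over the measured latencies of the same set of architectures. A self-contained sketch of that measurement (the paper's exact protocol may differ; the latency figures below are invented for illustration):

```python
def rankdata(xs):
    """Ranks (1-based), averaging ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1          # average of positions i..j, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rank correlation: +1 means the two devices rank the
    architectures' latencies identically (perfect monotonicity)."""
    rx, ry = rankdata(x), rankdata(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-architecture latencies (ms) on a proxy and a target device
proxy_latency = [12.0, 35.0, 20.0, 48.0]
target_latency = [30.0, 80.0, 55.0, 120.0]
rho = spearman(proxy_latency, target_latency)  # near 1.0 => strong monotonicity
```

A rank correlation near 1.0 is the condition under which architectures found Pareto-optimal on the proxy device remain so on the target, which is what makes the single-proxy search viable.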

