R2 Indicator-Based Multiobjective Search

2015 ◽  
Vol 23 (3) ◽  
pp. 369-395 ◽  
Author(s):  
Dimo Brockhoff ◽  
Tobias Wagner ◽  
Heike Trautmann

In multiobjective optimization, set-based performance indicators are commonly used to assess the quality of a Pareto front approximation. Based on the scalarization obtained by these indicators, a performance comparison of multiobjective optimization algorithms becomes possible. The R2 and the hypervolume (HV) indicators represent two recommended approaches which have shown correlated behavior in recent empirical studies. Whereas the HV indicator has been comprehensively analyzed in recent years, almost no studies on the R2 indicator exist. In this extended version of our previous conference paper, we thus perform a comprehensive investigation of the properties of the R2 indicator in a theoretical and empirical way. The influence of the number and distribution of the weight vectors on the optimal distribution of μ solutions is analyzed. Based on a comparative analysis, specific characteristics and differences of the R2 and HV indicators are presented. Furthermore, the R2 indicator is integrated into an indicator-based steady-state evolutionary multiobjective optimization algorithm (EMOA). It is shown that the so-called R2-EMOA can accurately approximate the optimal distribution of μ solutions regarding R2.
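As a companion to the abstract above, the following is a minimal sketch of how a unary R2 indicator of this kind is typically computed, assuming a weighted Tchebycheff scalarization, a fixed set of weight vectors, and an ideal reference point; the function name and example data are illustrative, not the authors' implementation.

```python
import numpy as np

def r2_indicator(front, weights, ideal):
    """Unary R2 indicator (to be minimized): mean, over all weight vectors,
    of the best weighted Tchebycheff utility achieved by the approximation
    set `front` with respect to the ideal point."""
    front = np.asarray(front, dtype=float)      # shape (mu, m)
    weights = np.asarray(weights, dtype=float)  # shape (k, m)
    ideal = np.asarray(ideal, dtype=float)      # shape (m,)
    # Tchebycheff utility of every solution under every weight vector:
    # u[w, a] = max_j weights[w, j] * |front[a, j] - ideal[j]|
    diffs = np.abs(front[None, :, :] - ideal[None, None, :])   # (1, mu, m)
    utilities = np.max(weights[:, None, :] * diffs, axis=2)    # (k, mu)
    # For each weight vector keep the best (smallest) utility, then average.
    return float(np.mean(np.min(utilities, axis=1)))

# Example: a biobjective front approximation evaluated with 5 weight vectors.
front = [[0.0, 1.0], [0.3, 0.6], [0.6, 0.3], [1.0, 0.0]]
weights = [[w, 1.0 - w] for w in np.linspace(0.0, 1.0, 5)]
print(r2_indicator(front, weights, ideal=[0.0, 0.0]))
```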

2018 ◽  
Vol 23 (3) ◽  
pp. 333-354 ◽  
Author(s):  
Andrés Vargas

The averaged Hausdorff distance ∆p is an inframetric, recently introduced in evolutionary multiobjective optimization (EMO) as a tool to measure the optimality of finite-size approximations to the Pareto front associated with a multiobjective optimization problem (MOP). Tools of this kind are called performance indicators, and their quality depends on the useful criteria they provide to evaluate the suitability of different candidate solutions to a given MOP. We present here a purely theoretical study of the compliance of the ∆p indicator with the notion of Pareto optimality. Since ∆p is defined in terms of modified versions of other well-known indicators, namely the generational distance GDp and the inverted generational distance IGDp, specific criteria for the Pareto compliance of each of them are discussed in detail. In doing so, we review some previously available knowledge on the behavior of these indicators, correcting inaccuracies found in the literature, and establish new and more general results, including detailed proofs and examples of illustrative situations.
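For reference, the following is a minimal sketch of the power-mean forms of GDp, IGDp, and ∆p for finite sets, assuming Euclidean distances; the helper names and example fronts are illustrative only.

```python
import numpy as np

def _power_mean_dist(from_set, to_set, p):
    """(1/N * sum of (distance to nearest point in to_set)^p)^(1/p)."""
    from_set = np.asarray(from_set, dtype=float)
    to_set = np.asarray(to_set, dtype=float)
    # Pairwise Euclidean distances, then nearest-neighbour distance per point.
    d = np.linalg.norm(from_set[:, None, :] - to_set[None, :, :], axis=2)
    nearest = d.min(axis=1)
    return float(np.mean(nearest ** p) ** (1.0 / p))

def gd_p(approx, reference, p=2):
    """Generational distance GDp: approximation set measured against the reference front."""
    return _power_mean_dist(approx, reference, p)

def igd_p(approx, reference, p=2):
    """Inverted generational distance IGDp: reference front measured against the approximation set."""
    return _power_mean_dist(reference, approx, p)

def delta_p(approx, reference, p=2):
    """Averaged Hausdorff distance: the worse of GDp and IGDp."""
    return max(gd_p(approx, reference, p), igd_p(approx, reference, p))

# Example: evaluate a rough approximation against a discretized reference front.
reference = [[x, 1.0 - x] for x in np.linspace(0.0, 1.0, 101)]
approx = [[0.0, 1.05], [0.5, 0.52], [1.02, 0.0]]
print(gd_p(approx, reference), igd_p(approx, reference), delta_p(approx, reference))
```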


2020 ◽  
Author(s):  
Maria C. Cunha ◽  
João Marques

The multiobjective optimization of water distribution networks (WDNs) is a very lively area of research (Marques et al., 2018). To evaluate the performance of the algorithms used, different metrics can quantify and compare the quality of the solutions during and at the end of the optimization process. The quality evaluation of the set of non-dominated solutions found by these algorithms is not a trivial process. The literature review by Audet et al. (2018) includes 57 distinct performance indicators that can be used to evaluate solutions provided by multiobjective algorithms, and groups these indicators into four categories: cardinality, convergence, distribution, and spread. These categories aim at characterizing, respectively, the number of solutions provided by each algorithm, the closeness of the solutions to the best-known front, the distribution of solutions along the front, and the range of the set of solutions found. To evaluate a multiobjective algorithm, performance indicators covering all four categories should be considered to prevent misleading conclusions. The authors have recently proposed a new multiobjective simulated annealing algorithm. It is an enhanced version of the algorithm presented in Marques et al. (2018) in that it uses special features to generate candidate solutions and adds a final step involving a local search. Different generation processes guide the search and allow the algorithm to reach parts of the Pareto front that could not be reached if a single generation process were used. The local search, a reannealing phase, is implemented as a supplemental phase of the algorithm to concentrate the search in specific areas of the front and identify the best possible solutions. The present work evaluates the performance of this algorithm by means of performance indicators from the four categories, computed for a set of benchmark WDNs presented in Wang et al. (2015). From the results it can be concluded that the proposed algorithm achieves higher-quality solutions than other algorithms, and does so without increasing the computational effort.

Acknowledgments

This work is partially supported by the Portuguese Foundation for Science and Technology under project grant UIDB/00308/2020.

References

Audet, C., Bigeon, J., Cartier, D., and Le, S. (2018). Performance indicators in multiobjective optimization. European Journal of Operational Research, 1–39.

Marques, J., Cunha, M., and Savić, D. (2018). Many-objective optimization model for the flexible design of water distribution networks. Journal of Environmental Management, 226, 308–319.

Wang, Q., Guidolin, M., Savić, D., and Kapelan, Z. (2015). Two-objective design of benchmark problems of a water distribution system via MOEAs: Towards the best-known approximation of the true Pareto front. Journal of Water Resources Planning and Management, 141(3), 04014060.


2018 ◽  
Vol 26 (3) ◽  
pp. 411-440 ◽  
Author(s):  
Hisao Ishibuchi ◽  
Ryo Imada ◽  
Yu Setoguchi ◽  
Yusuke Nojima

The hypervolume indicator has frequently been used for comparing evolutionary multi-objective optimization (EMO) algorithms. A reference point is needed for hypervolume calculation. However, its specification has not been discussed in detail from a viewpoint of fair performance comparison. A slightly worse point than the nadir point is usually used for hypervolume calculation in the EMO community. In this paper, we propose a reference point specification method for fair performance comparison of EMO algorithms. First, we discuss the relation between the reference point specification and the optimal distribution of solutions for hypervolume maximization. It is demonstrated that the optimal distribution of solutions strongly depends on the location of the reference point when a multi-objective problem has an inverted triangular Pareto front. Next, we propose a reference point specification method based on theoretical discussions on the optimal distribution of solutions. The basic idea is to specify the reference point so that a set of well-distributed solutions over the entire linear Pareto front has a large hypervolume and all solutions in such a solution set have similar hypervolume contributions. Then, we examine whether the proposed method can appropriately specify the reference point through computational experiments on various test problems. Finally, we examine the usefulness of the proposed method in a hypervolume-based EMO algorithm. Our discussions and experimental results clearly show that a slightly worse point than the nadir point is not always appropriate for performance comparison of EMO algorithms.
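To make the role of the reference point concrete, here is a minimal sketch of a 2-D hypervolume computation for minimization problems (not the authors' experimental code); only points that dominate the assumed reference point contribute, so moving the reference point away from the nadir point changes the measured quality of the same front.

```python
def hypervolume_2d(front, reference):
    """Hypervolume (to be maximized) of a 2-D minimization front with respect
    to `reference`; only points strictly better than the reference point in
    both objectives contribute."""
    pts = [p for p in front if p[0] < reference[0] and p[1] < reference[1]]
    pts.sort()  # ascending in f1, so f2 of non-dominated points descends
    hv, prev_f2 = 0.0, reference[1]
    for f1, f2 in pts:
        if f2 < prev_f2:  # skip dominated points
            hv += (reference[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

# The measured quality of the same front changes with the reference point.
front = [(0.0, 1.0), (0.25, 0.5), (0.5, 0.25), (1.0, 0.0)]
print(hypervolume_2d(front, reference=(1.1, 1.1)))  # slightly worse than the nadir point
print(hypervolume_2d(front, reference=(2.0, 2.0)))  # far from the nadir point
```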


Author(s):  
Andriy Lishchytovych ◽  
Volodymyr Pavlenko

The present article describes the setup, configuration and usage of key performance indicators (KPIs) for members of project teams involved in the software development life cycle. The key performance indicators cover the full software development life cycle and require deep integration with the task tracking system, the project code management system, and the software product quality testing system. To illustrate, we used two extremely popular products: Atlassian Jira (a development task and bug tracking system) and Git (a code management system). The calculation of the key performance indicators is given for a team of three developers, two testing engineers responsible for product quality, one designer, one system administrator, one product manager (responsible for setting business requirements) and one project manager. For the key members of the team, it is suggested to use one integral key performance indicator per role / team member, which reflects how well the tasks corresponding to that role are fulfilled. The model of performance indicators is inverse-positive: the initial value of each indicator is zero, and it increases in the case of certain deviations from the standard performance of the duties inherent in a particular role. The calculation of the proposed key performance indicators can be fully automated (in particular, using Atlassian Jira and Atlassian Bitbucket (Git), or other systems such as Redmine, GitLab or TestLink), which eliminates the human factor and, once automated, requires no additional effort to calculate. Using key performance indicators of this kind allows the project manager to eliminate bias, reduce the emotional component and rely on objective data. The described key performance indicators can be used to reduce the time required to resolve conflicts in the team, increase productivity and improve the quality of the software product.
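A minimal sketch of an inverse-positive (penalty-style) role indicator of the kind described above, assuming hypothetical issue records exported from a tracker such as Jira; the field names, deviation types and weights are illustrative assumptions, not the authors' model.

```python
from dataclasses import dataclass

@dataclass
class Issue:
    role: str            # e.g. "developer", "qa", "designer" (assumed fields)
    reopened: int        # times the issue was reopened after being resolved
    overdue_days: int    # days past the planned due date
    escaped_bug: bool    # defect found in production rather than in testing

# Illustrative penalty weights per deviation type (assumed, not prescribed).
WEIGHTS = {"reopened": 2.0, "overdue_day": 0.5, "escaped_bug": 5.0}

def role_kpi(issues, role):
    """Inverse-positive KPI: starts at zero and grows with deviations
    attributed to the given role; lower is better."""
    score = 0.0
    for issue in (i for i in issues if i.role == role):
        score += WEIGHTS["reopened"] * issue.reopened
        score += WEIGHTS["overdue_day"] * issue.overdue_days
        score += WEIGHTS["escaped_bug"] * (1 if issue.escaped_bug else 0)
    return score

issues = [
    Issue("developer", reopened=1, overdue_days=2, escaped_bug=False),
    Issue("developer", reopened=0, overdue_days=0, escaped_bug=True),
    Issue("qa", reopened=0, overdue_days=1, escaped_bug=False),
]
print(role_kpi(issues, "developer"))  # 2*1 + 0.5*2 + 5 = 8.0
print(role_kpi(issues, "qa"))         # 0.5
```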


Author(s):  
Jacob Stegenga

Medical scientists employ ‘quality assessment tools’ to assess evidence from medical research, especially from randomized trials. These tools are designed to take into account methodological details of studies, including randomization, subject allocation concealment, and other features of studies deemed relevant to minimizing bias. There are dozens of such tools available. They differ widely from each other, and empirical studies show that they have low inter-rater reliability and low inter-tool reliability. This is an instance of a more general problem called here the underdetermination of evidential significance. Disagreements about the quality of evidence can be due to different—but in principle equally good—weightings of the methodological features that constitute quality assessment tools. Thus, the malleability of empirical research in medicine is deep: in addition to the malleability of first-order empirical methods, such as randomized trials, there is malleability in the tools used to evaluate first-order methods.


1995 ◽  
Vol 5 (5) ◽  
pp. 448-481 ◽  
Author(s):  
R. J. S. Mac Macpherson ◽  
Margaret Taplin

In this paper, we examine the policy preferences of Tasmania's principals concerning accountability criteria and processes, compare their views to those of other stakeholder groups, and identify issues that warrant attention in principals' professional development programs. We show that there are many criteria and processes related to the quality of learning, teaching, and leadership that are valued by all stakeholder groups, including principals. We conclude that Tasmanian state schools probably need to review and develop their accountability policies, and that professional development will need to prepare leaders for specific forms of performance and generate key competencies if more educative forms of accountability are to be realised in practice.


2020 ◽  
Vol 9 (5) ◽  
pp. 94
Author(s):  
Khalid Ayad ◽  
Khaoula Dobli Bennani ◽  
Mostafa Elhachloufi

The concept of governance has become ubiquitous since it is recognized as an important tool for improving quality in all aspects of higher education. In Morocco, few scientific articles have dealt with the subject of university governance. Therefore, we present a general review of the evolution of governance through the laws and reforms established by Moroccan governments from 1975 to 2019. The purpose of the study is to detect the extent to which university governance principles are present in these reforms. This study enriches the theoretical literature on the crisis of the Moroccan university and opens the way to new empirical studies that can improve the understanding of the concept of university governance in the Moroccan context, the quality of higher education and, subsequently, the economic development of the country. The findings of this study show a growing presence of university governance principles in reforms and higher education laws.


Sensors ◽  
2021 ◽  
Vol 21 (5) ◽  
pp. 1695
Author(s):  
Constantin-Octavian Andrei ◽  
Sonja Lahtinen ◽  
Markku Poutanen ◽  
Hannu Koivula ◽  
Jan Johansson

The tenth launch (L10) of the European Global Navigation Satellite System Galileo filled in all orbital slots in the constellation. The launch carried four Galileo satellites and took place in July 2018. The satellites were declared operational in February 2019. In this study, we report on the performance of the Galileo L10 satellites in terms of orbital inclination and repeat period parameters, broadcast satellite clocks, and signal-in-space (SiS) performance indicators. We used all available broadcast navigation data from the IGS consolidated navigation files. These satellites have not been covered in previous studies. First, the orbital inclination (56.7±0.15°) and repeat period (50680.7±0.22 s) of all four satellites are within the nominal values. The data analysis also reveals periodic signals at 13.5, 27, 177 and 354 days. Second, the broadcast satellite clocks show corrections of different magnitude due to different trends in the bias component. One clock switch and several other minor correction jumps have occurred since the satellites were declared operational. Short-term discontinuities are within ±1 ps/s, whereas clock accuracy values are consistently below 0.20 m (root mean square, rms). Finally, the SiS performance has been very high in terms of availability and accuracy. Monthly SiS availability has been consistently above the target value of 87% and was much higher in 2020 than in 2019. Monthly SiS accuracy has been below 0.20 m (95th percentile) and below 0.40 m (99th percentile). The performance figures depend on the content and quality of the consolidated navigation files as well as on the precise reference products. Nevertheless, these levels of accuracy are well below the 7 m threshold (95th percentile) specified in the Galileo service definition document.
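As an illustration of how monthly availability and percentile accuracy figures of this kind can be derived, the following minimal sketch computes them from a series of SiS range-error samples; the 7 m usability threshold follows the service definition figure quoted above, while the synthetic samples, the outage handling and the availability definition used here are assumptions for the example, not the paper's processing chain.

```python
import numpy as np

def sis_monthly_stats(errors_m, threshold_m=7.0):
    """Monthly SiS statistics from signal-in-space range-error samples (metres):
    availability = share of epochs with a finite error below the threshold,
    plus 95th and 99th percentile accuracy over those usable epochs."""
    errors = np.asarray(errors_m, dtype=float)
    finite = errors[np.isfinite(errors)]         # drop epochs with no data
    usable = finite[finite < threshold_m]        # epochs counted as available
    return {
        "availability_percent": 100.0 * usable.size / errors.size,
        "accuracy_p95_m": float(np.percentile(usable, 95)),
        "accuracy_p99_m": float(np.percentile(usable, 99)),
    }

# Synthetic example: one range-error sample per minute over 30 days,
# with a short simulated outage marked by NaN.
rng = np.random.default_rng(0)
samples = np.abs(rng.normal(0.10, 0.05, size=30 * 24 * 60))
samples[:500] = np.nan
print(sis_monthly_stats(samples))
```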

