Automated multi-objective calibration of biological agent-based simulations

2016 ◽  
Vol 13 (122) ◽  
pp. 20160543 ◽  
Author(s):  
Mark N. Read ◽  
Kieran Alden ◽  
Louis M. Rose ◽  
Jon Timmis

Computational agent-based simulation (ABS) is increasingly used to complement laboratory techniques in advancing our understanding of biological systems. Calibration, the identification of parameter values that align simulation with biological behaviours, becomes challenging as increasingly complex biological domains are simulated. Complex domains cannot be characterized by single metrics alone, rendering simulation calibration a fundamentally multi-metric optimization problem that typical calibration techniques cannot handle. Yet calibration is an essential activity in simulation-based science; the baseline calibration forms a control for subsequent experimentation and hence is fundamental in the interpretation of results. Here, we develop and showcase a method, built around multi-objective optimization, for calibrating ABSs against complex target behaviours requiring several metrics (termed objectives) to characterize. Multi-objective calibration (MOC) delivers those sets of parameter values representing optimal trade-offs in simulation performance against each metric, in the form of a Pareto front. We use MOC to calibrate a well-understood immunological simulation against both established a priori and previously unestablished target behaviours. Furthermore, we show that simulation-borne conclusions are broadly, but not entirely, robust to adopting baseline parameter values from different extremes of the Pareto front, highlighting the importance of MOC's identification of numerous calibration solutions. We also devise a method for detecting overfitting in a multi-objective context, which was not previously possible; it saves computational effort by terminating MOC when no improved solutions will be found. MOC can significantly impact biological simulation, adding rigour to and speeding up an otherwise time-consuming calibration process and highlighting inappropriate biological capture by simulations that cannot be well calibrated. As such, it produces more accurate simulations that generate more informative biological predictions.
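As an illustration of the core mechanism behind MOC — scoring candidate parameter sets against several metrics at once and retaining only the non-dominated trade-offs as a Pareto front — the following Python sketch uses a toy two-parameter, two-metric "simulation". The simulate function, its parameters, and the error metrics are hypothetical stand-ins, not the immunological simulation or the optimizer used in the paper.

```python
import random

def simulate(params):
    """Hypothetical stand-in for an agent-based simulation run.
    Returns two calibration errors, one per target metric."""
    a, b = params
    # Toy errors: distance from two conflicting target behaviours.
    err_metric_1 = (a - 0.3) ** 2 + 0.1 * b
    err_metric_2 = (b - 0.7) ** 2 + 0.1 * a
    return err_metric_1, err_metric_2

def dominates(u, v):
    """u dominates v if it is no worse on every objective and strictly better on one."""
    return all(ui <= vi for ui, vi in zip(u, v)) and any(ui < vi for ui, vi in zip(u, v))

def pareto_front(candidates):
    """Keep only parameter sets whose objective vectors are non-dominated."""
    return [(p, obj) for p, obj in candidates
            if not any(dominates(other, obj) for _, other in candidates if other != obj)]

random.seed(0)
candidates = []
for _ in range(200):
    params = (random.random(), random.random())
    candidates.append((params, simulate(params)))

front = pareto_front(candidates)
print(f"{len(front)} non-dominated calibrations out of {len(candidates)}")
```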

2020 ◽  
Vol 143 (5) ◽  
Author(s):  
Carl Ehrett ◽  
D. Andrew Brown ◽  
Evan Chodora ◽  
Christopher Kitchens ◽  
Sez Atamturktur

Abstract. Computer model calibration typically operates by fine-tuning parameter values in a computer model so that the model output faithfully predicts reality. By using performance targets in place of observed data, we show that calibration techniques can be repurposed to solve multi-objective design problems. Our approach allows all relevant sources of uncertainty to be considered as an integral part of the design process. We demonstrate the proposed approach through both a simulation study and a material design setting in which material properties are fine-tuned to meet performance targets for a wind turbine blade.
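The central move described above is swapping observed field data for performance targets in a calibration loss. The sketch below shows that substitution in the simplest possible (least-squares) form; the computer_model function, its parameters, and the targets are illustrative, and the paper's full Bayesian uncertainty treatment is not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize

def computer_model(theta, x):
    """Hypothetical computer model: response at setting x for design parameters theta."""
    stiffness, density = theta
    return stiffness * np.sin(x) + density * x

# Ordinary calibration would use observed data here; for design,
# we substitute the performance we *want* the model to achieve.
x_settings = np.linspace(0.0, 1.0, 20)
performance_targets = 0.8 * np.sin(x_settings) + 0.3 * x_settings

def design_loss(theta):
    residual = computer_model(theta, x_settings) - performance_targets
    return np.sum(residual ** 2)

result = minimize(design_loss, x0=[1.0, 1.0])
print("design parameters meeting the targets (approximately):", result.x)
```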


Author(s):  
Wenqiang Yuan ◽  
Yusheng Liu

In this work, we present a new multi-objective particle swarm optimization (PSO) algorithm characterized by a geometric analysis of the particles. The proposed method, called geometry analysis PSO (GAPSO), first parameterizes the data points of the optimization model of the mechatronic system to obtain their parameter values; a curve or surface is then fitted to these points, and the tangent and normal at each point are computed. Finally, the particles are guided by these tangent and normal values so that they approximate the true Pareto front with a uniform distribution. The proposed method is compared with two multi-objective metaheuristics representative of the state of the art in this area. The experiments carried out indicate that GAPSO obtains remarkable results in terms of both accuracy and distribution.
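To make the geometric step concrete, the sketch below fits a curve to a set of non-dominated objective vectors and computes the unit tangent and normal at each point, which is the kind of information a geometry-guided swarm could use for steering. The archive values are made up, and the swarm update itself is omitted; this is not the GAPSO implementation.

```python
import numpy as np

# Hypothetical archive of non-dominated objective vectors (f1, f2) from a bi-objective run.
archive = np.array([[0.05, 0.95], [0.20, 0.70], [0.45, 0.45],
                    [0.70, 0.22], [0.95, 0.05]])

# Fit a quadratic curve f2 = c0 + c1*f1 + c2*f1^2 to the archive points.
coeffs = np.polyfit(archive[:, 0], archive[:, 1], deg=2)
slope = np.polyval(np.polyder(coeffs), archive[:, 0])  # df2/df1 at each point

# Unit tangent and normal directions at each archive point.
tangents = np.stack([np.ones_like(slope), slope], axis=1)
tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)
normals = np.stack([-tangents[:, 1], tangents[:, 0]], axis=1)

for point, t, n in zip(archive, tangents, normals):
    print(f"point {point}: tangent {np.round(t, 2)}, normal {np.round(n, 2)}")
```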


2007 ◽  
Vol 4 (3) ◽  
pp. 1031-1067 ◽  
Author(s):  
N. Chahinian ◽  
R. Moussa

Abstract. A conceptual lumped rainfall-runoff flood event model was developed and applied on the Gardon catchment located in southern France, and various mono-objective and multi-objective functions were used for its calibration. The model was calibrated on 15 events and validated on 14 others. The results of both the calibration and validation phases are compared on the basis of their performance with regard to six criteria: three global criteria and three relative criteria representing volume, peakflow, and the root mean square error. The first type of criteria gives more weight to strong events, whereas the second considers all events to be of equal weight. The results show that the calibrated parameter values depend on the type of criteria used. Significant trade-offs are observed between the different objectives: no unique set of parameters is able to satisfy all objectives simultaneously. Instead, the solution to the calibration problem is given by a set of Pareto optimal solutions. From this set of optimal solutions, a balanced aggregated objective function is proposed as a compromise between up to three objective functions. The mono-objective and multi-objective calibration strategies are compared both in terms of parameter variation bounds and simulation quality. The results of this study indicate that two well-chosen and non-redundant objective functions are sufficient to calibrate the model, and that the use of three objective functions does not necessarily yield different results. The problems of non-uniqueness in model calibration, and the choice of adequate objective functions for flood event models, emphasise the importance of the modeller's intervention. Recent advances in automatic optimisation techniques do not diminish the user's responsibility: the modeller still has to choose multiple criteria based on the aims of the study, their appreciation of the errors induced by data and model structure, and their knowledge of the catchment's hydrology.
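The sketch below illustrates the kind of per-event criteria discussed above (volume, peak flow, and RMSE) and a simple balanced aggregation of them into a single score. The formulations, weights, and toy hydrograph values are illustrative assumptions, not the exact criteria or data used in the study.

```python
import numpy as np

def event_criteria(q_obs, q_sim):
    """Three per-event calibration criteria: relative volume error, relative
    peak-flow error, and RMSE normalised by the mean observed discharge so
    that all three are dimensionless (illustrative formulations)."""
    vol_err = abs(q_sim.sum() - q_obs.sum()) / q_obs.sum()
    peak_err = abs(q_sim.max() - q_obs.max()) / q_obs.max()
    rmse = np.sqrt(np.mean((q_sim - q_obs) ** 2)) / q_obs.mean()
    return vol_err, peak_err, rmse

def aggregated_objective(criteria, weights=(1.0, 1.0, 1.0)):
    """Balanced aggregation of the criteria into one compromise score."""
    return sum(w * c for w, c in zip(weights, criteria)) / sum(weights)

# Toy hourly hydrographs for one flood event (discharge in m3/s).
q_obs = np.array([2.0, 8.0, 25.0, 60.0, 95.0, 70.0, 40.0, 20.0, 10.0, 5.0])
q_sim = np.array([2.5, 9.0, 22.0, 55.0, 88.0, 75.0, 45.0, 22.0, 11.0, 6.0])

crit = event_criteria(q_obs, q_sim)
print("volume / peak / RMSE criteria:", np.round(crit, 3))
print("aggregated objective:", round(aggregated_objective(crit), 3))
```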


2018 ◽  
Vol 21 (2) ◽  
Author(s):  
Katherine Dahiana Vera Escobar ◽  
Fabio Lopez-Pires ◽  
Benjamin Baran ◽  
Fernando Sandoya

The Maximum Diversity (MD) problem consists of selecting a subset of elements such that the diversity among the selected elements is maximized. Several diversity measures have already been studied in the literature, but always optimizing the problem in a purely mono-objective approach. This work presents, for the first time, multi-objective approaches for the MD problem, considering the simultaneous optimization of the following five diversity measures: (i) Max-Sum, (ii) Max-Min, (iii) Max-MinSum, (iv) Min-Diff and (v) Min-P-center. Two different optimization models are proposed: (i) the Multi-Objective Maximum Diversity (MMD) model, where the number of elements to be selected is defined a priori, and (ii) the Multi-Objective Maximum Average Diversity (MMAD) model, where the number of elements to be selected is also a decision variable. To solve the formulated problems, a Multi-Objective Evolutionary Algorithm (MOEA) is presented. Experimental results demonstrate that the proposed MOEA finds good-quality solutions, covering between 98.85% and 100% of the optimal Pareto front when the hypervolume is used for comparison purposes.
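The sketch below computes the five diversity measures for one candidate subset, which is the kind of objective-vector evaluation a MOEA for this problem would perform for every individual. The formulations follow common definitions from the dispersion literature and may differ in detail from the paper's models; the distance matrix is randomly generated.

```python
import numpy as np
from itertools import combinations

def diversity_measures(dist, selected):
    """Values of the five diversity measures for one selected subset
    (indices into the symmetric distance matrix `dist`)."""
    pair_d = [dist[i, j] for i, j in combinations(selected, 2)]
    sum_to_others = [sum(dist[i, j] for j in selected if j != i) for i in selected]
    unselected = [i for i in range(len(dist)) if i not in selected]
    return {
        "Max-Sum": sum(pair_d),                                  # total pairwise diversity
        "Max-Min": min(pair_d),                                  # smallest pairwise distance
        "Max-MinSum": min(sum_to_others),                        # worst element's summed distance
        "Min-Diff": max(sum_to_others) - min(sum_to_others),     # equity of summed distances
        # p-center-style measure: worst coverage distance of an unselected element
        # (common definition; the paper's may differ).
        "Min-P-center": max(min(dist[i, j] for j in selected) for i in unselected),
    }

rng = np.random.default_rng(1)
points = rng.random((8, 2))  # 8 candidate elements in the plane
dist = np.linalg.norm(points[:, None] - points[None, :], axis=2)

print(diversity_measures(dist, selected=[0, 3, 5]))
```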


2009 ◽  
Vol 13 (4) ◽  
pp. 519-535 ◽  
Author(s):  
R. Moussa ◽  
N. Chahinian

Abstract. A conceptual lumped rainfall-runoff flood event model was developed and applied on the Gardon catchment located in southern France, and various single-objective and multi-objective functions were used for its calibration. The model was calibrated on 15 events and validated on 14 others. The results of both the calibration and validation phases are compared on the basis of their performance with regard to six criteria: three global criteria and three relative criteria representing volume, peakflow, and the root mean square error. The first type of criteria gives more weight to large events, whereas the second considers all events to be of equal weight. The results show that the calibrated parameter values depend on the type of criteria used. Significant trade-offs are observed between the different objectives: no unique set of parameters is able to satisfy all objectives simultaneously. Instead, the solution to the calibration problem is given by a set of Pareto optimal solutions. From this set of optimal solutions, a balanced aggregated objective function is proposed as a compromise between up to three objective functions. The single-objective and multi-objective calibration strategies are compared both in terms of parameter variation bounds and simulation quality. The results of this study indicate that two well-chosen and non-redundant objective functions are sufficient to calibrate the model, and that the use of three objective functions does not necessarily yield different results. The problems of non-uniqueness in model calibration, and the choice of adequate objective functions for flood event models, emphasise the importance of the modeller's intervention. Recent advances in automatic optimisation techniques do not diminish the user's responsibility: the modeller still has to choose multiple criteria based on the aims of the study, their appreciation of the errors induced by data and model structure, and their knowledge of the catchment's hydrology.


2020 ◽  
Author(s):  
Andrew Lensen ◽  
Mengjie Zhang ◽  
Bing Xue

© 2020, Springer Science+Business Media, LLC, part of Springer Nature. Manifold learning techniques have become increasingly valuable as data continues to grow in size. By discovering a lower-dimensional representation (embedding) of the structure of a dataset, manifold learning algorithms can substantially reduce the dimensionality of a dataset while preserving as much information as possible. However, state-of-the-art manifold learning algorithms are opaque in how they perform this transformation. Understanding the way in which the embedding relates to the original high-dimensional space is critical in exploratory data analysis. We previously proposed a Genetic Programming method that performed manifold learning by evolving mappings that are transparent and interpretable. This method required the dimensionality of the embedding to be known a priori, which makes it hard to use when little is known about a dataset. In this paper, we substantially extend our previous work, by introducing a multi-objective approach that automatically balances the competing objectives of manifold quality and dimensionality. Our proposed approach is competitive with a range of baseline and state-of-the-art manifold learning methods, while also providing a range (front) of solutions that give different trade-offs between quality and dimensionality. Furthermore, the learned models are shown to often be simple and efficient, utilising only a small number of features in an interpretable manner.
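The competing objectives described above — embedding quality versus embedding dimensionality — can be made concrete with a crude quality score. The sketch below scores two hypothetical candidate embeddings by how well they preserve each point's nearest neighbours, alongside their dimensionality; the data, the candidate embeddings, and the quality measure are stand-ins, not the Genetic Programming machinery or the quality criterion used in the paper.

```python
import numpy as np

def neighbourhood_agreement(X, Z, k=3):
    """Crude manifold-quality score: average overlap between each point's k
    nearest neighbours in the original space X and in the embedding Z."""
    def knn(D):
        return np.argsort(D, axis=1)[:, 1:k + 1]  # skip self at position 0
    dX = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    dZ = np.linalg.norm(Z[:, None] - Z[None, :], axis=2)
    return np.mean([len(set(a) & set(b)) / k for a, b in zip(knn(dX), knn(dZ))])

rng = np.random.default_rng(0)
X = rng.random((50, 10))  # original high-dimensional data

# Two hypothetical candidate embeddings an evolved mapping might produce.
for Z in (X[:, :2], X[:, :5]):
    quality = neighbourhood_agreement(X, Z)
    # The two competing objectives: maximise quality, minimise dimensionality.
    print(f"dimensionality={Z.shape[1]}, quality={quality:.2f}")
```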


Author(s):  
Arthur Francisco ◽  
Thomas Lavie ◽  
Bernard Villechaise

The numerical optimization of the connecting rod big-end lubrication involves several main steps. The first, which can be considered the most important, is the identification of the main input factors and their ranges of variation. At the same time, the meaningful output quantities have to be identified, because the optimization is performed on the basis of this choice. The computing time for a TEHD solution prevents performing the huge number of calculations needed to draw the Pareto front of solutions directly. Thus, the next step is the creation of a metamodel, based on polynomials, with good predictive ability and a low computing cost. In the third step, a fast multi-objective optimization is performed on the metamodel. The Pareto front, which represents the best trade-offs among solutions, is identified: one can then easily choose the input parameters that give a particular desired solution. In the last step, the robustness of the solutions has to be checked: if a given solution is chosen, the corresponding input parameters have to tolerate a minimal uncertainty gap to be realistic. Otherwise, the desired solution will never be reached, because in a real-life problem the parameter values are not deterministic.
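Two of the steps described above — fitting a cheap polynomial metamodel to a handful of expensive runs, and checking a chosen solution's robustness to input uncertainty — are sketched below in a deliberately small form. The input factors, the two objective surfaces, and the tolerance band are made-up assumptions, not the TEHD model or the values used by the authors.

```python
import numpy as np

# A handful of "expensive" runs: two input factors, two objective values each.
rng = np.random.default_rng(2)
x_runs = rng.uniform(0.0, 1.0, size=(30, 2))
f1 = (x_runs[:, 0] - 0.4) ** 2 + 0.1 * x_runs[:, 1]  # toy objective 1
f2 = (x_runs[:, 1] - 0.6) ** 2 + 0.1 * x_runs[:, 0]  # toy objective 2

def quadratic_features(x):
    x1, x2 = x[:, 0], x[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])

# Least-squares polynomial metamodel: cheap to evaluate inside the optimisation loop.
A = quadratic_features(x_runs)
coef_f1, *_ = np.linalg.lstsq(A, f1, rcond=None)
coef_f2, *_ = np.linalg.lstsq(A, f2, rcond=None)

def surrogate(x):
    feats = quadratic_features(x)
    return feats @ coef_f1, feats @ coef_f2

# Robustness check for a chosen trade-off point: perturb its inputs within a
# tolerance box and see how far the predicted objectives move.
x_star = np.array([[0.4, 0.6]])
perturbed = x_star + rng.uniform(-0.02, 0.02, size=(100, 2))
p1, p2 = surrogate(perturbed)
print("objective spread under +/-0.02 input uncertainty:",
      round(float(np.ptp(p1)), 4), round(float(np.ptp(p2)), 4))
```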


2019 ◽  
Vol 19 (6) ◽  
pp. 1167-1187 ◽  
Author(s):  
Menno W. Straatsma ◽  
Jan M. Fliervoet ◽  
Johan A. H. Kabout ◽  
Fedor Baart ◽  
Maarten G. Kleinhans

Abstract. Adapting densely populated deltas to the combined impacts of climate change and socioeconomic developments presents a major challenge for their sustainable development in the 21st century. Decisions on these adaptations require an overview of costs and benefits and of the number of stakeholders involved, which can be used in stakeholder discussions. Therefore, we quantified the trade-offs of common measures to compensate for an increase in discharge and sea level rise on the basis of relevant, but non-exhaustive, quantitative variables. We modeled the largest delta distributary of the Rhine River with adaptation scenarios driven by (1) the choice of seven measures, (2) the areas owned by the two largest stakeholders (LS) versus all stakeholders (AS) based on a priori stakeholder preferences, and (3) the ecological or hydraulic design principle. We evaluated measures by their efficiency in flood hazard reduction, potential biodiversity, number of stakeholders as a proxy for governance complexity, and measure implementation cost. We found that only floodplain lowering over the whole study area can offset the altered hydrodynamic boundary conditions; for all other measures, additional dike raising is required. LS areas comprise low-hanging fruit for water level lowering owing to their governance simplicity and hydraulic efficiency. Natural management of meadows (AS), after roughness smoothing and floodplain lowering, represents the optimum combination of potential biodiversity and flood hazard lowering, as it combines a high potential biodiversity with a relatively low hydrodynamic roughness. With this concept, we step up to a multidisciplinary, quantitative, multi-parametric, and multi-objective optimization and support negotiations among stakeholders in the decision-making process.
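The evaluation framework above scores each measure on four criteria with different directions (more water-level lowering and biodiversity are better; more stakeholders and cost are worse). The sketch below shows one way such a scoring table could be normalised and screened for non-dominated options; the measures listed and all numbers are illustrative, not results from the study.

```python
import numpy as np

# Hypothetical scores for a few adaptation measures on the four criteria.
measures = ["floodplain lowering", "roughness smoothing", "meadow management", "dike raising"]
criteria = np.array([
    # water-level lowering (m), potential biodiversity (index),
    # number of stakeholders involved, implementation cost (M EUR)
    [0.45, 0.60, 120, 310.0],
    [0.25, 0.40,  35,  40.0],
    [0.15, 0.75,  60,  25.0],
    [0.30, 0.20,  10, 150.0],
])

# Turn everything into "higher is better" (flip stakeholders and cost),
# then rescale each criterion to [0, 1].
signs = np.array([1.0, 1.0, -1.0, -1.0])
scores = criteria * signs
scores = (scores - scores.min(axis=0)) / (scores.max(axis=0) - scores.min(axis=0))

def non_dominated(rows):
    """Indices of rows not dominated by any other row (maximisation)."""
    return [i for i, r in enumerate(rows)
            if not any(np.all(o >= r) and np.any(o > r)
                       for j, o in enumerate(rows) if j != i)]

for i in non_dominated(scores):
    print(measures[i], np.round(scores[i], 2))
```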


2015 ◽  
Vol 23 (1) ◽  
pp. 131-159 ◽  
Author(s):  
Tobias Friedrich ◽  
Frank Neumann ◽  
Christian Thyssen

Many optimization problems arising in applications have to consider several objective functions at the same time. Evolutionary algorithms seem to be a very natural choice for dealing with multi-objective problems, as the population of such an algorithm can be used to represent the trade-offs with respect to the given objective functions. In this paper, we contribute to the theoretical understanding of evolutionary algorithms for multi-objective problems. We consider indicator-based algorithms whose goal is to maximize the hypervolume for a given problem by distributing μ points on the Pareto front. To gain new theoretical insights into the behavior of hypervolume-based algorithms, we compare their optimization goal to the goal of achieving an optimal multiplicative approximation ratio. Our studies are carried out for different Pareto front shapes of bi-objective problems. For the class of linear fronts and a class of convex fronts, we prove that maximizing the hypervolume gives the best possible approximation ratio when assuming that the extreme points have to be included in both distributions of the points on the Pareto front. Furthermore, we investigate the influence of the choice of the reference point on the approximation behavior of hypervolume-based approaches and examine Pareto fronts of different shapes by numerical calculations.
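The two quantities the paper relates can be computed numerically for a simple case. The sketch below evaluates the dominated hypervolume of a fixed set of chosen points and their multiplicative approximation ratio with respect to a densely sampled linear front of a bi-objective minimisation problem; the implementation, the example front (shifted away from zero so that multiplicative ratios are well defined), and the reference point are illustrative, not the paper's constructions.

```python
import numpy as np

def hypervolume_2d(points, ref):
    """Hypervolume dominated by mutually non-dominated points of a
    bi-objective minimisation problem, relative to reference point `ref`."""
    pts = points[np.argsort(points[:, 0])]
    f1_edges = np.append(pts[1:, 0], ref[0])
    return float(np.sum((f1_edges - pts[:, 0]) * (ref[1] - pts[:, 1])))

def approximation_ratio(points, front):
    """Smallest alpha such that every front point q is covered by some chosen
    point p with p <= alpha * q in every objective."""
    ratios = np.max(points[:, None, :] / front[None, :, :], axis=2)  # per (p, q) pair
    return float(np.max(np.min(ratios, axis=0)))

# Densely sampled linear front and a 3-point distribution on it.
t = np.linspace(0.0, 1.0, 201)
front = np.column_stack([1.0 + t, 2.0 - t])
chosen = front[[0, 100, 200]]

print("hypervolume:", round(hypervolume_2d(chosen, ref=(3.0, 3.0)), 3))
print("approximation ratio:", round(approximation_ratio(chosen, front), 3))
```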

