The experimental study of population-based parameter optimization algorithms on rule-based ecological modelling

Author(s):  
Hongqing Cao ◽  
Friedrich Recknagel ◽  
Philip T. Orr
2019 ◽  
Author(s):  
J Kyle Medley ◽  
Shaik Asifullah ◽  
Joseph Hellerstein ◽  
Herbert M Sauro

Mechanistic kinetic models of biological pathways are an important tool for understanding biological systems. Constructing kinetic models requires fitting the parameters to experimental data. However, parameter fitting on these models is a non-convex, non-linear optimization problem. Many algorithms have been proposed to address optimization for parameter fitting, including globally convergent, population-based algorithms. The computational complexity of this optimization for even modest models means that parallelization is essential. Past approaches to parameter optimization have focused on parallelizing a particular algorithm. However, this requires re-implementing the algorithm using a distributed computing framework, which demands a significant investment of time and effort. There are two major drawbacks to this approach. First, the best algorithm may depend on the model, and given the large variety of optimization algorithms available, it is difficult to re-implement every potentially useful algorithm. Second, when new advances are made in a given optimization algorithm, the parallel implementation must be updated to take advantage of these advances, placing a continual burden on the parallel implementation. The drawbacks of re-implementing algorithms lead us to a different approach to parallelizing parameter optimization. Instead of parallelizing the algorithms themselves, we run many instances of the algorithm on single cores. This provides great flexibility in the choice of algorithms by allowing us to reuse previous implementations, and it does not require the creation and maintenance of parallel versions of optimization algorithms. This approach is known as the island method. To our knowledge, the utility of the island method for parameter fitting in systems biology has not been previously demonstrated. For the parameter fitting problem, we allow islands to exchange information about their “best” solutions so that all islands leverage the discoveries of the few. This turns out to be very effective in practice, leading to super-linear speedups. That is, if a single processor finds the optimal value of the parameters in time t, then N processors exchanging information in this way find the optimal value much faster than t/N. We show that the island method is able to consistently provide good speedups for these problems. We also benchmark the island method against a variety of large, challenging kinetic models and show that it is able to consistently improve the quality of fit in less time than a single-threaded implementation. Our software is available at https://github.com/sys-bio/sabaody under an Apache 2.0 license. Contact: [email protected]
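The island pattern described above can be illustrated with a short, self-contained sketch. The code below is not the authors' sabaody implementation; it is a minimal single-process illustration with a toy sum-of-squares objective, a simple mutation-based search standing in for a real optimizer, and a ring migration topology in which each island periodically passes its best member to its neighbour. In practice each island would run on its own core (for example via multiprocessing or a cluster framework) and would host a full population-based algorithm.

```python
import numpy as np

# Toy objective standing in for a model-vs-data sum of squared errors.
def sse(params):
    target = np.array([1.0, 2.0, 0.5])
    return float(np.sum((params - target) ** 2))

def evolve_island(pop, n_steps, rng, sigma=0.1):
    """One 'island': a simple mutation-based search around the current best."""
    for _ in range(n_steps):
        scores = np.array([sse(p) for p in pop])
        best = pop[scores.argmin()]
        children = best + sigma * rng.normal(size=pop.shape)
        child_scores = np.array([sse(c) for c in children])
        improved = child_scores < scores
        pop[improved] = children[improved]  # keep only improving moves
    return pop

def island_method(n_islands=4, pop_size=20, dim=3, epochs=10,
                  steps_per_epoch=50, seed=0):
    rng = np.random.default_rng(seed)
    islands = [rng.uniform(-5, 5, size=(pop_size, dim)) for _ in range(n_islands)]
    for _ in range(epochs):
        islands = [evolve_island(pop, steps_per_epoch, rng) for pop in islands]
        # Migration (ring topology): each island's best member replaces
        # the worst member of the next island.
        bests = [pop[np.argmin([sse(p) for p in pop])].copy() for pop in islands]
        for i, pop in enumerate(islands):
            worst = int(np.argmax([sse(p) for p in pop]))
            pop[worst] = bests[(i - 1) % len(islands)]
    all_members = np.vstack(islands)
    return all_members[np.argmin([sse(p) for p in all_members])]

print(island_method())
```

The migration step is the only coordination the islands need; it is what lets every island leverage the discoveries of the few while the optimizers themselves stay unmodified single-core implementations.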


2019 ◽  
Vol 2 (3) ◽  
pp. 508-517
Author(s):  
Ferda Nur Arıcı ◽  
Ersin Kaya

Optimization is the process of searching for the most suitable solution to a problem within an acceptable time interval. The algorithms that solve optimization problems are called optimization algorithms. In the literature, there are many optimization algorithms with different characteristics. Optimization algorithms can exhibit different behaviors depending on the size, characteristics, and complexity of the optimization problem. In this study, six well-known population-based optimization algorithms (artificial algae algorithm - AAA, artificial bee colony algorithm - ABC, differential evolution algorithm - DE, genetic algorithm - GA, gravitational search algorithm - GSA, and particle swarm optimization - PSO) were used. These six algorithms were run on the CEC'17 test functions. Based on the experimental results, the algorithms were compared and their performances were evaluated.
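A hedged sketch of the kind of benchmarking harness such a comparison implies is shown below. The CEC'17 suite is distributed separately and is not reproduced here, so two classic test functions (sphere and Rastrigin) stand in for it, and SciPy's differential evolution and dual annealing stand in for the six algorithms of the study; none of this reproduces the paper's actual experimental setup.

```python
import numpy as np
from scipy.optimize import differential_evolution, dual_annealing

# Classic benchmark functions used here as stand-ins for the CEC'17 suite.
def sphere(x):
    return float(np.sum(x ** 2))

def rastrigin(x):
    return float(10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x)))

functions = {"sphere": sphere, "rastrigin": rastrigin}
bounds = [(-5.12, 5.12)] * 10  # 10-dimensional search space

# Two readily available optimizers stand in for AAA, ABC, DE, GA, GSA, PSO.
optimizers = {
    "DE": lambda f: differential_evolution(f, bounds, maxiter=200, seed=1),
    "DualAnnealing": lambda f: dual_annealing(f, bounds, maxiter=200, seed=1),
}

for fname, f in functions.items():
    for oname, run in optimizers.items():
        result = run(f)
        print(f"{fname:10s} {oname:14s} best objective = {result.fun:.4g}")
```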


Mathematics ◽  
2021 ◽  
Vol 9 (11) ◽  
pp. 1190
Author(s):  
Mohammad Dehghani ◽  
Zeinab Montazeri ◽  
Štěpán Hubálovský

There are many optimization problems in the different disciplines of science that must be solved using an appropriate method. Population-based optimization algorithms are one of the most efficient ways to solve various optimization problems. Population-based optimization algorithms are able to provide appropriate solutions to optimization problems based on a random search of the problem-solving space, without the need for gradient or derivative information. In this paper, a new optimization algorithm called the Group Mean-Based Optimizer (GMBO) is presented; it can be applied to solve optimization problems in various fields of science. The main idea in designing the GMBO is to make more effective use of the information carried by different members of the algorithm population, based on two selected groups referred to as the good group and the bad group. Two new composite members are obtained by averaging each of these groups, and these are used to update the population members. The various stages of the GMBO are described and mathematically modeled with the aim of being used to solve optimization problems. The performance of the GMBO in providing a suitable quasi-optimal solution is evaluated on a set of 23 standard objective functions of different types (unimodal, high-dimensional multimodal, and fixed-dimensional multimodal). In addition, the optimization results obtained from the proposed GMBO were compared with eight other widely used optimization algorithms, including the Marine Predators Algorithm (MPA), the Tunicate Swarm Algorithm (TSA), the Whale Optimization Algorithm (WOA), the Grey Wolf Optimizer (GWO), Teaching–Learning-Based Optimization (TLBO), the Gravitational Search Algorithm (GSA), Particle Swarm Optimization (PSO), and the Genetic Algorithm (GA). The optimization results indicated the acceptable performance of the proposed GMBO, and, based on the analysis and comparison of the results, it was determined that the GMBO is superior to and much more competitive than the other eight algorithms.
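The abstract does not give the GMBO update equations, so the following sketch is only one plausible reading of the idea: sort the population by fitness, average the best and worst members into a "good" and a "bad" composite, and move each member toward the good mean and away from the bad mean, keeping the move only if it improves fitness. The function name, group sizes, and coefficients are illustrative assumptions, not the published algorithm.

```python
import numpy as np

def sphere(x):
    return float(np.sum(x ** 2))

def gmbo_like(fitness, dim=5, pop_size=30, group_size=5, iters=200, seed=0):
    """A rough sketch of a group-mean-based update; the published GMBO
    equations may differ in detail."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-10, 10, size=(pop_size, dim))
    for _ in range(iters):
        scores = np.array([fitness(p) for p in pop])
        order = np.argsort(scores)
        good_mean = pop[order[:group_size]].mean(axis=0)   # composite "good" member
        bad_mean = pop[order[-group_size:]].mean(axis=0)   # composite "bad" member
        for i in range(pop_size):
            r1, r2 = rng.random(dim), rng.random(dim)
            candidate = pop[i] + r1 * (good_mean - pop[i]) - r2 * (bad_mean - pop[i])
            # Greedy selection: accept the move only if it improves fitness.
            if fitness(candidate) < scores[i]:
                pop[i] = candidate
    scores = np.array([fitness(p) for p in pop])
    return pop[scores.argmin()], scores.min()

best_x, best_f = gmbo_like(sphere)
print(best_x, best_f)
```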


2017 ◽  
Vol 24 (6) ◽  
pp. 448-450 ◽  
Author(s):  
Sachiko Ono ◽  
Yosuke Ono ◽  
Nobuaki Michihata ◽  
Yusuke Sasabuchi ◽  
Hideo Yasunaga

Pokémon GO (Niantic Labs, released on 22 July 2016 in Japan) is an augmented reality game that gained huge popularity worldwide. Despite concern about Pokémon GO–related traffic collisions, the effect of playing Pokémon GO on the incidence of traffic injuries remains unknown. We performed a population-based quasi-experimental study using national data from the Institute for Traffic Accident Research and Data Analysis, Japan. The outcome was the incidence of traffic injuries. Of 127 082 000 people in Japan, 886 fatal traffic injuries were observed between 1 June and 31 August 2016. Regression discontinuity analysis showed a non-significant change in the incidence of fatal traffic injuries after the Pokémon GO release (0.017 deaths per million, 95% CI −0.036 to 0.071). This finding was similar to that obtained from a difference-in-differences analysis. The effect of Pokémon GO on fatal traffic injuries may be negligible.
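The analysis behind this result can be illustrated with a toy regression discontinuity sketch. The data below are simulated Poisson counts, not the study's national injury data; the 22 July 2016 release date serves as the cutoff, and the coefficient on the post-release indicator estimates the jump in the outcome at that date.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated daily fatal-injury counts for illustration only; the study used
# national data from the Institute for Traffic Accident Research and Data Analysis.
rng = np.random.default_rng(42)
days = pd.date_range("2016-06-01", "2016-08-31", freq="D")
release = pd.Timestamp("2016-07-22")
df = pd.DataFrame({"date": days, "deaths": rng.poisson(10, size=len(days))})
df["running"] = (df["date"] - release).dt.days      # days relative to release
df["post"] = (df["running"] >= 0).astype(int)       # 1 on and after the release date

# Sharp regression discontinuity: the coefficient on `post` estimates the
# discontinuity in the outcome at the release date, allowing the trend to
# differ on either side via the interaction term.
model = smf.ols("deaths ~ post + running + post:running", data=df).fit()
print(model.summary().tables[1])
```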


2020 ◽  
Vol 34 (10) ◽  
pp. 13935-13936
Author(s):  
Tato Ange ◽  
Nkambou Roger

This paper presents a simple and intuitive technique to accelerate the convergence of first-order optimization algorithms. The proposed solution modifies the update rule based on the variation in the direction of the gradient and the previous step taken during training. Test results show that the technique has the potential to significantly improve the performance of existing first-order optimization algorithms.
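The abstract does not specify the modified update rule, so the sketch below is only one possible interpretation: a gradient-descent step whose size is enlarged when the new gradient agrees in direction with the previous step and shrunk when it disagrees. All names and constants here are illustrative assumptions, not the authors' rule.

```python
import numpy as np

def f(w):          # toy quadratic loss
    return 0.5 * np.sum(w ** 2)

def grad(w):
    return w

def direction_aware_gd(w0, lr=0.1, boost=1.5, damp=0.5, steps=50):
    """Gradient descent whose step is scaled up when the new descent direction
    agrees with the previous step and scaled down when it disagrees.
    One plausible reading of the abstract, not the authors' update rule."""
    w = np.array(w0, dtype=float)
    prev_step = np.zeros_like(w)
    for _ in range(steps):
        g = grad(w)
        agreement = np.dot(-g, prev_step)   # > 0 if same direction as last step
        scale = boost if agreement > 0 else damp
        step = -scale * lr * g
        w = w + step
        prev_step = step
    return w

print(direction_aware_gd(np.array([5.0, -3.0])))
```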

