Parameter Optimization Algorithms for Evolving Rule Models Applied to Freshwater Ecosystems

2014 ◽  
Vol 18 (6) ◽  
pp. 793-806 ◽  
Author(s):  
Hongqing Cao ◽  
Friedrich Recknagel ◽  
Philip T. Orr


Author(s):
S. Jiang ◽  
L. Ren ◽  
X. Yang ◽  
M. Ma ◽  
Y. Liu

Abstract. Modelling uncertainties (i.e. input errors, parameter uncertainties and model structural errors) inevitably exist in hydrological prediction. Much recent attention has focused on these uncertainties, with input error modelling, parameter optimization and multi-model ensemble strategies being the three most widely used methods for quantifying their impacts. In this paper the Xinanjiang model, the Hybrid rainfall–runoff model and the HYMOD model were applied to the Mishui Basin, south China, for daily streamflow ensemble simulation and uncertainty analysis. The three models were first calibrated by two parameter optimization algorithms, namely the Shuffled Complex Evolution method (SCE-UA) and the Shuffled Complex Evolution Metropolis method (SCEM-UA); next, input uncertainty was accounted for by introducing a normally distributed error multiplier; finally, the simulation sets produced by the three models were combined by Bayesian model averaging (BMA). The results show that both parameter optimization algorithms generate good streamflow simulations; in addition, SCEM-UA can infer parameter uncertainty and provide the posterior distribution of the parameters. Accounting for precipitation input uncertainty does not improve the streamflow simulation precision very much, whereas the BMA combination not only improves the streamflow prediction precision but also provides quantitative uncertainty bounds for the simulation sets. The prediction interval obtained with SCEM-UA is better than that obtained with SCE-UA. These results suggest that accounting for parameter uncertainty and performing multi-model ensemble simulations are very practical for streamflow prediction and flood forecasting, yielding more precise predictions and more reliable uncertainty bounds.
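As a rough illustration of the workflow described above, the sketch below (not the authors' code) perturbs a precipitation series with a normally distributed error multiplier and combines several model simulations with fixed BMA weights. The synthetic data, the weight values, and the helper names (perturb_precipitation, bma_combine) are hypothetical; in practice the weights would be estimated, e.g. by expectation-maximization against observed streamflow over the calibration period.

```python
import numpy as np

rng = np.random.default_rng(42)

def perturb_precipitation(precip, mu=1.0, sigma=0.2):
    """Apply a normally distributed error multiplier at each time step."""
    multipliers = rng.normal(mu, sigma, size=precip.shape)
    return precip * np.clip(multipliers, 0.0, None)  # keep rainfall non-negative

def bma_combine(simulations, weights):
    """BMA mean prediction: weighted average of member simulations.

    simulations: (n_models, n_timesteps); weights: non-negative, sum to 1.
    """
    return np.asarray(weights) @ np.asarray(simulations)

# Toy usage with synthetic data standing in for the three model outputs
t = np.arange(100)
precip = rng.gamma(2.0, 2.0, size=t.size)
precip_perturbed = perturb_precipitation(precip)          # input uncertainty
sims = np.vstack([np.sin(t / 10.0) + 0.1 * rng.normal(size=t.size)
                  for _ in range(3)])                     # 3 "model" runs
combined = bma_combine(sims, weights=[0.5, 0.3, 0.2])     # hypothetical weights
```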


Mathematics ◽  
2021 ◽  
Vol 9 (18) ◽  
pp. 2334
Author(s):  
Ángel Luis Muñoz Castañeda ◽  
Noemí DeCastro-García ◽  
David Escudero García

This work proposes RHOASo, a new algorithm for optimizing the hyper-parameters of a machine learning algorithm, based on conditional optimization of concave asymptotic functions. A comparative analysis of the algorithm is presented, with particular emphasis on two important properties: its ability to work efficiently with a small part of a dataset and to finish the tuning process automatically, that is, without the user having to specify the number of iterations the algorithm must perform. Statistical analyses over 16 public benchmark datasets were carried out comparing the performance of seven hyper-parameter optimization algorithms with RHOASo. RHOASo shows statistically significant efficiency gains over the other hyper-parameter optimization algorithms considered in the experiments. Furthermore, it is shown that, on average, the algorithm needs around 70% of the iterations required by the other algorithms to achieve competitive performance. The results also show that the algorithm is notably stable with respect to the size of the dataset partition used.
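RHOASo's internal update rule is not reproduced here; the following is only a minimal sketch of the two advertised properties, tuning on a small partition of the data and stopping without a user-supplied iteration budget, implemented as a simple discrete hill climb over scikit-learn hyper-parameters. The parameter ranges, step sizes, and stopping rule are illustrative assumptions, not the RHOASo algorithm itself.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
X_small, y_small = X[::2], y[::2]  # tune on a small partition of the data

def score(params):
    """Cross-validated accuracy for (n_estimators, max_depth)."""
    model = RandomForestClassifier(
        n_estimators=params[0], max_depth=params[1], random_state=0
    )
    return cross_val_score(model, X_small, y_small, cv=3).mean()

current = (10, 2)
best = score(current)
while True:
    neighbours = [
        (max(1, current[0] + dn), max(1, current[1] + dd))
        for dn in (-5, 0, 5) for dd in (-1, 0, 1)
        if (dn, dd) != (0, 0)
    ]
    top_score, top_params = max((score(p), p) for p in neighbours)
    if top_score <= best:  # no neighbour improves: stop automatically
        break
    best, current = top_score, top_params

print("selected hyper-parameters:", current, "cv accuracy:", round(best, 3))
```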


2013 ◽  
Vol 4 (2) ◽  
pp. 29-48 ◽  
Author(s):  
Suman Kumar Saha ◽  
R. Kar ◽  
D. Mandal ◽  
S. P. Ghoshal

Optimal digital filter design has a growing influence on communication systems within digital signal processing. FIR filter design involves multi-parameter optimization, on which existing optimization algorithms do not work efficiently; different optimization techniques can therefore be utilized to determine the impulse response coefficients of a filter so as to approximate the ideal frequency response characteristics. In this paper, FIR low-pass, high-pass, band-pass and band-stop filters have been designed using a new meta-heuristic search method called the firefly algorithm, which is inspired by the flash pattern and characteristics of fireflies. The performance of the designed filters has been compared with that obtained by a real-coded genetic algorithm (RGA), standard particle swarm optimization (PSO) and differential evolution (DE), with DE already being one of the most powerful stochastic real-parameter optimization algorithms in current use. For the problem at hand, the firefly algorithm (FA) shows a significant advantage: simulations of the FIR filter designs demonstrate that it outperforms the other algorithms, not only in convergence speed but also in the performance of the designed filters.
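A minimal sketch of the firefly update at the heart of such a design is shown below, applied to a low-pass FIR coefficient search. The attractiveness and randomness constants (beta0, gamma, alpha), population size, tap count, and squared-error fitness are illustrative assumptions rather than the paper's exact settings.

```python
import numpy as np

rng = np.random.default_rng(0)
N_TAPS, N_FIREFLIES, N_ITER = 21, 25, 200
W = np.linspace(0, np.pi, 128)               # frequency grid
H_IDEAL = (W <= 0.4 * np.pi).astype(float)   # ideal low-pass magnitude

def fitness(coeffs):
    """Sum of squared deviations from the ideal magnitude response."""
    n = np.arange(coeffs.size)
    H = np.abs(np.exp(-1j * np.outer(W, n)) @ coeffs)
    return np.sum((H - H_IDEAL) ** 2)

beta0, gamma, alpha = 1.0, 1.0, 0.05
pop = rng.uniform(-0.5, 0.5, size=(N_FIREFLIES, N_TAPS))
light = np.array([fitness(x) for x in pop])  # lower error = "brighter"

for _ in range(N_ITER):
    for i in range(N_FIREFLIES):
        for j in range(N_FIREFLIES):
            if light[j] < light[i]:  # move firefly i toward brighter j
                r2 = np.sum((pop[i] - pop[j]) ** 2)
                attract = beta0 * np.exp(-gamma * r2)
                pop[i] += attract * (pop[j] - pop[i]) \
                          + alpha * rng.normal(size=N_TAPS)
                light[i] = fitness(pop[i])

best = pop[np.argmin(light)]
print("best error:", light.min())
```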


2019 ◽  
Author(s):  
J Kyle Medley ◽  
Shaik Asifullah ◽  
Joseph Hellerstein ◽  
Herbert M Sauro

Mechanistic kinetic models of biological pathways are an important tool for understanding biological systems. Constructing kinetic models requires fitting the parameters to experimental data. However, parameter fitting on these models is a non-convex, non-linear optimization problem. Many algorithms have been proposed to address this optimization problem, including globally convergent, population-based algorithms. The computational complexity of this optimization for even modest models means that parallelization is essential. Past approaches to parameter optimization have focused on parallelizing a particular algorithm. However, this requires re-implementing the algorithm using a distributed computing framework, which demands a significant investment of time and effort, and it has two major drawbacks. First, the best algorithm may depend on the model; given the large variety of optimization algorithms available, it is difficult to re-implement every potentially useful one. Second, when new advances are made in a given optimization algorithm, the parallel implementation must be updated to take advantage of them, placing a continual burden on the parallel implementation. These drawbacks lead us to a different approach to parallelizing parameter optimization: instead of parallelizing the algorithms themselves, we run many instances of the algorithm on single cores. This provides great flexibility in the choice of algorithms by allowing us to reuse previous implementations, and it does not require the creation and maintenance of parallel versions of optimization algorithms. This approach is known as the island method. To our knowledge, the utility of the island method for parameter fitting in systems biology has not been previously demonstrated. For the parameter fitting problem, we allow islands to exchange information about their "best" solutions so that all islands leverage the discoveries of the few. This turns out to be very effective in practice, leading to super-linear speedups: if a single processor finds the optimal parameter values in time t, then N processors exchanging information in this way find them much faster than t/N. We show that the island method consistently provides good speedups for these problems. We also benchmark the island method on a variety of large, challenging kinetic models and show that it consistently improves the quality of fit in less time than a single-threaded implementation. Our software is available at https://github.com/sys-bio/sabaody under an Apache 2.0 license. Contact: [email protected]
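The sketch below illustrates the island method's best-solution exchange in a single process; in the actual software each island would run on its own core or worker. The toy objective (Rosenbrock), the random-perturbation local search, and the migration interval are illustrative assumptions, not the sabaody implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def objective(x):
    """Rosenbrock function: a standard non-convex test objective."""
    return float(np.sum(100 * (x[1:] - x[:-1] ** 2) ** 2 + (1 - x[:-1]) ** 2))

N_ISLANDS, DIM, ROUNDS, EXCHANGE_EVERY = 4, 5, 400, 50
best_x = [rng.uniform(-2, 2, DIM) for _ in range(N_ISLANDS)]
best_f = [objective(x) for x in best_x]

for step in range(1, ROUNDS + 1):
    # Each island runs its own independent local search step
    for i in range(N_ISLANDS):
        cand = best_x[i] + rng.normal(scale=0.1, size=DIM)
        f = objective(cand)
        if f < best_f[i]:
            best_x[i], best_f[i] = cand, f
    # Migration: periodically broadcast the global best to all islands,
    # so every island can exploit the discoveries of the few
    if step % EXCHANGE_EVERY == 0:
        champ = int(np.argmin(best_f))
        for i in range(N_ISLANDS):
            if i != champ:
                best_x[i] = best_x[champ].copy()
                best_f[i] = best_f[champ]

print("best objective after exchange rounds:", min(best_f))
```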

