Evaluation and optimization of ASM1 parameters using large-scale WWTP monitoring data from a subtropical climate region in Brazil

Author(s):  
A. C. O. Martins ◽  
M. C. A. Silva ◽  
A. D. Benetti

Abstract This study aimed at providing a set of optimal kinetic and stoichiometric ASM1 parameters representative of wastewater from a subtropical climate region in Brazil. ASM1 was applied in the STOAT program, and the model parameters were evaluated and optimized with sensitivity analysis and Response Surface Methodology (RSM) to minimize prediction errors for effluent TSS, COD, and NH3. Six sensitive parameters were identified: YH, YA, μA, KNH, bA, and kOA. Predictions of the RSM regression models were strongly correlated with the STOAT predictions. YH mainly affected TSS and COD, while the other parameters affected NH3. ASM1 calibration with the estimated optimal values of the sensitive parameters resulted in near-zero prediction errors for the modeled state variables. NH3 showed similar results in the ASM1 validation, whereas TSS and COD showed high errors related to the increase in YH caused by the RSM optimization. The optimal parameters, mainly YA, μA, KNH, bA, and kOA, constitute references for other studies on ASM1 modeling using wastewater data from subtropical climate regions. The optimal value of YH should be further evaluated, as should the effects of sludge wastage methods and simulation periods.
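The RSM step described above, fitting a low-order polynomial response surface to prediction errors observed at trial parameter values and reading the error-minimizing value off the fitted surface, can be sketched in one dimension. The single parameter (a stand-in for YH) and all data points are hypothetical illustrations, not values from the study:

```python
# Minimal 1-D Response Surface Methodology sketch: fit y = a + b*x + c*x^2
# to trial (parameter, error) pairs, then take the stationary point of the
# fitted surface as the candidate optimum. Data are invented for illustration.

def fit_quadratic(xs, ys):
    """Least-squares fit of y = a + b*x + c*x**2 via the normal equations."""
    s = [sum(x**k for x in xs) for k in range(5)]
    t = [sum(y * x**k for x, y in zip(xs, ys)) for k in range(3)]
    m = [[s[0], s[1], s[2]],
         [s[1], s[2], s[3]],
         [s[2], s[3], s[4]]]

    def det3(a):
        return (a[0][0] * (a[1][1] * a[2][2] - a[1][2] * a[2][1])
              - a[0][1] * (a[1][0] * a[2][2] - a[1][2] * a[2][0])
              + a[0][2] * (a[1][0] * a[2][1] - a[1][1] * a[2][0]))

    d = det3(m)
    coeffs = []
    for col in range(3):            # Cramer's rule, one column at a time
        mc = [row[:] for row in m]
        for r in range(3):
            mc[r][col] = t[r]
        coeffs.append(det3(mc) / d)
    return coeffs                   # a, b, c

# Trial parameter values and the squared prediction error the simulator
# returned for each (purely illustrative numbers).
xs = [0.4, 0.5, 0.6, 0.7, 0.8]
ys = [0.9, 0.35, 0.1, 0.33, 0.88]
a, b, c = fit_quadratic(xs, ys)
x_opt = -b / (2 * c)                # stationary point of the fitted surface
print(round(x_opt, 3))
```

In the study the surface is multi-dimensional (six sensitive parameters), but the principle is the same: the regression model replaces repeated simulator runs during optimization.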

2013 ◽  
Vol 10 (3) ◽  
pp. 2835-2878
Author(s):  
A. Hartmann ◽  
M. Weiler ◽  
T. Wagener ◽  
J. Lange ◽  
M. Kralik ◽  
...  

Abstract. More than 30% of Europe's land surface is made up of karst exposures. In some countries, water from karst aquifers constitutes almost half of the drinking water supply. Hydrological simulation models can predict the large-scale impact of future environmental change on hydrological variables. However, the information needed to obtain model parameters is not available everywhere and regionalisation methods have to be applied. The responsive behaviour of hydrological systems can be quantified by individual metrics, so-called system signatures. This study explores their value for distinguishing the dominant processes and properties of five different karst systems in Europe and the Middle East with the overall aim of regionalising system signatures and model parameters to ungauged karst areas. By defining ten system signatures derived from hydrodynamic and hydrochemical observations, a process-based karst model is applied to the five karst systems. In a stepwise model evaluation strategy, optimum parameters and their sensitivity are identified using automatic calibration and global variance-based sensitivity analysis. System signatures and sensitive parameters serve as proxies for dominant processes and optimised parameters are used to determine system properties. To test the transferability of the signatures, they are compared with the optimised model parameters and simple climatic and topographic descriptors of the five karst systems. By sensitivity analysis, the set of system signatures was able to distinguish the karst systems from one another by providing separate information about dominant soil, epikarst, and fast and slow groundwater flow processes. 
Comparing sensitive parameters to the system signatures revealed that annual discharge can serve as a proxy for the recharge area, that the slopes of the high-flow parts of the flow duration curves correlate with the fast flow storage constant, and that the dampening of the isotopic signal of the rain, as well as the medium-flow parts of the flow duration curves, has a non-linear relation to the distribution of groundwater dynamics. Even though only weak correlations between system signatures and climatic and topographic factors were found, our approach enabled us to identify the dominant processes of the different systems and to provide directions for future large-scale simulation of karst areas to predict the impact of future change on karst water resources.
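The flow-duration-curve signatures referred to above can be computed from a discharge series alone. A minimal sketch with synthetic data; the exceedance-probability bounds chosen for the "high flow" segment are an assumption for illustration, not the study's exact definition:

```python
# Build a flow duration curve (FDC) from a discharge series and take the
# slope of its high-flow (low-exceedance) segment as a system signature.
import math

def flow_duration_curve(q):
    """Return (exceedance probability, discharge) pairs, highest flow first."""
    qs = sorted(q, reverse=True)
    n = len(qs)
    return [((i + 1) / (n + 1), v) for i, v in enumerate(qs)]

def high_flow_slope(q, lo=0.01, hi=0.10):
    """Slope of log-discharge over the high-flow segment of the FDC."""
    seg = [(p, v) for p, v in flow_duration_curve(q) if lo <= p <= hi]
    p0, q0 = seg[0]
    p1, q1 = seg[-1]
    return (math.log(q0) - math.log(q1)) / (p1 - p0)

# Synthetic "flashy" vs "damped" recession series (arbitrary units): a fast
# recession should yield a much steeper high-flow FDC segment.
flashy = [100 * math.exp(-0.2 * t) + 1 for t in range(365)]
damped = [10 * math.exp(-0.01 * t) + 1 for t in range(365)]
print(high_flow_slope(flashy) > high_flow_slope(damped))
```
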


2021 ◽  
Author(s):  
Rachit Dubey ◽  
Mark K Ho ◽  
Hermish Mehta ◽  
Tom Griffiths

Psychologists have long been fascinated with understanding the nature of Aha! moments: moments when we transition from not knowing to suddenly realizing the solution to a problem. In this work, we present a theoretical framework that explains when and why we experience Aha! moments. Our theory posits that during problem-solving, in addition to solving the problem, people also maintain a meta-cognitive model of their ability to solve the problem as well as a prediction of how long it will take them to solve it. Aha! moments arise when we experience a positive error in this meta-cognitive prediction, i.e., when we solve a problem much faster than we expected. We posit that this meta-cognitive error is analogous to a positive reward prediction error, thereby explaining why we feel so good after an Aha! moment. A large-scale pre-registered experiment on anagram solving supports this theory, showing that people's time prediction errors are strongly correlated with their ratings of an Aha! experience while solving anagrams. A second experiment provides further evidence for our theory by demonstrating a causal link between time prediction errors and the Aha! experience. These results highlight the importance of meta-cognitive prediction errors and deepen our understanding of human meta-reasoning.
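The theory's key quantity, a positive error in the meta-cognitive time prediction, is simply the predicted solution time minus the actual solution time. A sketch with hypothetical numbers (not the experiment's data), correlating that error with made-up Aha! ratings the way the anagram analysis does:

```python
# Compute time prediction errors and their correlation with Aha! ratings.
# All values below are invented for illustration.

def pearson(xs, ys):
    """Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

predicted = [60, 45, 90, 30, 120, 75]       # seconds expected to need
actual    = [20, 40, 30, 28, 50, 70]        # seconds actually needed
aha       = [6.5, 2.0, 7.0, 1.5, 7.5, 2.5]  # hypothetical ratings (1-10)

# Positive error: solved faster than expected, analogous to a positive
# reward prediction error.
errors = [p - a for p, a in zip(predicted, actual)]
r = pearson(errors, aha)
print(round(r, 2))
```
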


2013 ◽  
Vol 17 (8) ◽  
pp. 3305-3321 ◽  
Author(s):  
A. Hartmann ◽  
M. Weiler ◽  
T. Wagener ◽  
J. Lange ◽  
M. Kralik ◽  
...  

Abstract. More than 30% of Europe's land surface is made up of karst exposures. In some countries, water from karst aquifers constitutes almost half of the drinking water supply. Hydrological simulation models can predict the large-scale impact of future environmental change on hydrological variables. However, the information needed to obtain model parameters is not available everywhere and regionalisation methods have to be applied. The responsive behaviour of hydrological systems can be quantified by individual metrics, so-called system signatures. This study explores their value for distinguishing the dominant processes and properties of five different karst systems in Europe and the Middle East. By defining ten system signatures derived from hydrodynamic and hydrochemical observations, a process-based karst model is applied to the five karst systems. In a stepwise model evaluation strategy, optimum parameters and their sensitivity are identified using automatic calibration and global variance-based sensitivity analysis. System signatures and sensitive parameters serve as proxies for dominant processes, and optimised parameters are used to determine system properties. By sensitivity analysis, the set of system signatures was able to distinguish the karst systems from one another by providing separate information about dominant soil, epikarst, and fast and slow groundwater flow processes. Comparing sensitive parameters to the system signatures revealed that annual discharge can serve as a proxy for the recharge area, that the slopes of the high flow parts of the flow duration curves correlate with the fast flow storage constant, and that the dampening of the isotopic signal of the rain as well as the medium flow parts of the flow duration curves have a non-linear relation to the distribution of groundwater storage constants that represent the variability of groundwater flow dynamics. 
Our approach enabled us to identify the dominant processes of the different systems and to provide directions for future large-scale simulation of karst areas to predict the impact of future change on karst water resources.


1996 ◽  
Vol 33 (2) ◽  
pp. 79-90 ◽  
Author(s):  
Jian Hua Lei ◽  
Wolfgang Schilling

Physically based urban rainfall-runoff models are mostly applied without parameter calibration. Given preliminary estimates of the uncertainty of the model parameters, the associated model output uncertainty can be calculated. Monte Carlo simulation followed by multi-linear regression is used for this analysis. The calculated model output uncertainty can be compared to the uncertainty estimated by comparing model output with observed data. Based on this comparison, systematic or spurious errors can be detected in the observation data, the validity of the model structure can be confirmed, and the most sensitive parameters can be identified. If the calculated model output uncertainty is unacceptably large, the most sensitive parameters should be calibrated to reduce it. Observation data for which systematic and/or spurious errors have been detected should be discarded from the calibration data. This procedure is referred to as preliminary uncertainty analysis; it is illustrated with an example. The HYSTEM program is applied to predict the runoff volume from an experimental catchment with a total area of 68 ha and an impervious area of 20 ha. Based on the preliminary uncertainty analysis, for 7 of 10 events the measured runoff volume is within the calculated uncertainty range, i.e., less than or equal to the calculated model predictive uncertainty. The remaining 3 events most likely include systematic or spurious errors in the observation data (either in the rainfall or the runoff measurements). These events are then discarded from further analysis. After calibrating the model, its predictive uncertainty is estimated.
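The two steps above, Monte Carlo propagation of parameter uncertainty followed by multi-linear regression to rank sensitivities, can be shown in miniature. The runoff "model" here is a toy stand-in, not HYSTEM, and all parameter ranges are illustrative assumptions:

```python
# Step 1: sample uncertain parameters and run the model (Monte Carlo).
# Step 2: regress the output on each parameter to rank sensitivities.
import random

random.seed(1)

def runoff_volume(imperv_frac, initial_loss, rain_mm=20.0, area_ha=68.0):
    """Toy model: effective rain depth on the impervious fraction (m3)."""
    eff_mm = max(rain_mm - initial_loss, 0.0)
    return eff_mm * 1e-3 * imperv_frac * area_ha * 1e4

# Step 1: Monte Carlo sampling of the two uncertain parameters.
samples = [(random.uniform(0.25, 0.35),      # impervious fraction (~20/68)
            random.uniform(0.5, 2.5))        # initial loss (mm)
           for _ in range(2000)]
vols = [runoff_volume(f, loss) for f, loss in samples]

# Step 2: for independently sampled parameters, the standardised
# coefficients of the multi-linear regression reduce to simple
# correlations, used here as comparable sensitivity scores.
def std_coeff(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

fs = [f for f, _ in samples]
losses = [loss for _, loss in samples]
sens_f, sens_loss = std_coeff(fs, vols), std_coeff(losses, vols)
print(round(sens_f, 2), round(sens_loss, 2))
```

Here the impervious fraction dominates the output variance, so it would be the first parameter to calibrate if the predictive uncertainty were unacceptable.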


Author(s):  
Marcello Pericoli ◽  
Marco Taboga

Abstract We propose a general method for the Bayesian estimation of a very broad class of non-linear no-arbitrage term-structure models. The main innovation we introduce is a computationally efficient method, based on deep learning techniques, for approximating no-arbitrage model-implied bond yields to any desired degree of accuracy. Once the pricing function is approximated, the posterior distribution of model parameters and unobservable state variables can be estimated by standard Markov Chain Monte Carlo methods. As an illustrative example, we apply the proposed techniques to the estimation of a shadow-rate model with a time-varying lower bound and unspanned macroeconomic factors.
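The two-stage idea, approximate an expensive model-implied pricing function once and then run standard MCMC against the cheap approximation, can be sketched in miniature. Here the "expensive" pricer is a toy function and the surrogate is a linear interpolation table rather than a deep network; all numbers are illustrative assumptions:

```python
# Stage 1: build a fast surrogate of a costly pricing function.
# Stage 2: random-walk Metropolis over one parameter, evaluating only
# the surrogate inside the likelihood.
import math
import random

random.seed(3)

def expensive_yield(theta):
    """Stand-in for a costly no-arbitrage pricing routine."""
    return math.exp(-theta)

# Stage 1: tabulate the pricer once, then interpolate linearly.
STEP = 0.05
GRID = [i * STEP for i in range(81)]          # theta in [0, 4]
TABLE = [expensive_yield(t) for t in GRID]

def surrogate_yield(theta):
    """Piecewise-linear approximation of the pricer."""
    i = min(int(theta / STEP), len(GRID) - 2)
    w = (theta - GRID[i]) / STEP
    return TABLE[i] * (1 - w) + TABLE[i + 1] * w

# One noisy observed yield; the true parameter is 1.5.
y_obs, sigma = expensive_yield(1.5) + 0.01, 0.05

def log_post(theta):
    if not 0.0 <= theta <= 4.0:
        return -math.inf                      # flat prior on [0, 4]
    return -0.5 * ((y_obs - surrogate_yield(theta)) / sigma) ** 2

# Stage 2: Metropolis sampling; the expensive pricer is never called here.
theta, draws = 2.0, []
for _ in range(5000):
    prop = theta + random.gauss(0, 0.2)
    if random.random() < math.exp(min(0.0, log_post(prop) - log_post(theta))):
        theta = prop
    draws.append(theta)

post_mean = sum(draws[1000:]) / len(draws[1000:])
print(round(post_mean, 2))
```
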


2017 ◽  
Vol 65 (4) ◽  
pp. 479-488 ◽  
Author(s):  
A. Boboń ◽  
A. Nocoń ◽  
S. Paszek ◽  
P. Pruski

Abstract The paper presents a method for determining electromagnetic parameters of different synchronous generator models based on dynamic waveforms measured at power rejection. Such a test can be performed safely under normal operating conditions of a generator working in a power plant. A generator model was investigated, expressed by reactances and time constants of the steady, transient, and subtransient states in the d and q axes, as well as circuit models (types (3,3) and (2,2)) expressed by resistances and inductances of the stator, excitation, and equivalent rotor damping circuit windings. All these models approximately take into account the influence of magnetic core saturation. The least squares method was used for parameter estimation, minimizing an objective function defined as the mean square error between the measured waveforms and the waveforms calculated from the mathematical models. A method of determining the initial values of those state variables which also depend on the searched parameters is presented. To minimize the objective function, a gradient optimization algorithm finding local minima for a selected starting point was used. To get closer to the global minimum, the calculations were repeated many times, taking into account the inequality constraints on the searched parameters. The paper presents the parameter estimation results and a comparison of the waveforms measured and calculated with the final parameters for 200 MW and 50 MW turbogenerators.
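The estimation scheme above (minimize the mean square error between measured and computed waveforms, restarting a local optimizer from many points under parameter constraints) can be sketched with a toy one-exponential waveform. The waveform, noise level, and bounds are illustrative assumptions, not a real generator model:

```python
# Multistart least-squares fit of a toy transient waveform.
import math
import random

random.seed(7)

TS = [i * 0.05 for i in range(40)]           # sampling instants (s)

def model(t, amp, tau):
    """Toy transient waveform: amp * exp(-t / tau)."""
    return amp * math.exp(-t / tau)

# "Measured" waveform generated from known parameters plus small noise.
TRUE_AMP, TRUE_TAU = 1.8, 0.6
measured = [model(t, TRUE_AMP, TRUE_TAU) + random.gauss(0, 0.01) for t in TS]

def mse(params):
    """Mean square error between measured and modelled waveforms."""
    return sum((model(t, *params) - y) ** 2
               for t, y in zip(TS, measured)) / len(TS)

def local_search(start, step=0.2, shrink=0.5, iters=60):
    """Crude pattern search standing in for the gradient optimiser."""
    best = list(start)
    for _ in range(iters):
        improved = False
        for i in range(2):
            for d in (step, -step):
                cand = best[:]
                cand[i] += d
                if cand[1] > 1e-6 and mse(cand) < mse(best):  # keep tau > 0
                    best, improved = cand, True
        if not improved:
            step *= shrink                   # refine near a local minimum
    return best, mse(best)

# Multistart: restart from random points within bounds, keep the best run.
starts = [(random.uniform(0.1, 5.0), random.uniform(0.1, 2.0))
          for _ in range(10)]
(amp, tau), err = min((local_search(s) for s in starts), key=lambda r: r[1])
print(round(amp, 2), round(tau, 2))
```
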


2020 ◽  
pp. 1-11
Author(s):  
Hui Wang ◽  
Huang Shiwang

The various parts of the traditional financial supervision and management system can no longer meet current needs and urgently require further improvement. In this paper, low-frequency data are treated as missing observations of high-frequency data, and a mixed-frequency VAR model is adopted. To overcome the problems caused by the large number of VAR parameters, a Bayesian estimation method based on the Minnesota prior is used to obtain the posterior distribution of each VAR parameter. Methods based on Kalman filtering and Kalman smoothing are used to obtain the posterior distribution of the latent state variables. Then, given the posterior distributions of the VAR parameters and the latent state variables, Gibbs sampling yields the mixed-frequency Bayesian vector autoregressive model and estimates of the state variables. Finally, the influence of Internet finance on monetary policy is studied with examples. The results show that the proposed method is effective.
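The latent-state step above rests on the standard Kalman filter and Rauch-Tung-Striebel smoother recursions. A minimal scalar sketch follows: a single AR(1) state observed with gaps standing in for the low-frequency series, whereas the paper's actual model is a full mixed-frequency VAR with Gibbs sampling on top:

```python
# Scalar Kalman filter + RTS smoother with missing observations.
# state: x_t = phi * x_{t-1} + w_t,  w_t ~ N(0, q)
# obs:   y_t = x_t + v_t,            v_t ~ N(0, r)

def kalman_smoother(ys, phi=0.9, q=0.1, r=0.5, x0=0.0, p0=1.0):
    # Forward filter, treating None as a missing (low-frequency) observation.
    xf, pf, xp, pp = [], [], [], []
    x, p = x0, p0
    for y in ys:
        x_pred, p_pred = phi * x, phi * phi * p + q
        if y is None:
            x, p = x_pred, p_pred            # no update possible
        else:
            k = p_pred / (p_pred + r)        # Kalman gain
            x = x_pred + k * (y - x_pred)
            p = (1 - k) * p_pred
        xp.append(x_pred); pp.append(p_pred); xf.append(x); pf.append(p)
    # Backward Rauch-Tung-Striebel smoother.
    xs = xf[:]
    for t in range(len(ys) - 2, -1, -1):
        g = pf[t] * phi / pp[t + 1]
        xs[t] = xf[t] + g * (xs[t + 1] - xp[t + 1])
    return xs

# Every other observation missing, as when mixing two sampling frequencies.
ys = [1.0, None, 0.8, None, 0.5, None, 0.2, None]
smoothed = kalman_smoother(ys)
print([round(v, 2) for v in smoothed])
```
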


Author(s):  
Clemens M. Lechner ◽  
Nivedita Bhaktha ◽  
Katharina Groskurth ◽  
Matthias Bluemke

Abstract Measures of cognitive or socio-emotional skills from large-scale assessment surveys (LSAS) are often based on advanced statistical models and scoring techniques unfamiliar to applied researchers. Consequently, applied researchers working with data from LSAS may be uncertain about the assumptions and computational details of these statistical models and scoring techniques and about how best to incorporate the resulting skill measures in secondary analyses. The present paper is intended as a primer for applied researchers. After a brief introduction to the key properties of skill assessments, we give an overview of the three principal methods with which secondary analysts can incorporate skill measures from LSAS in their analyses: (1) as test scores (i.e., point estimates of individual ability), (2) through structural equation modeling (SEM), and (3) in the form of plausible values (PVs). We discuss the advantages and disadvantages of each method based on three criteria: fallibility (i.e., control for measurement error and unbiasedness), usability (i.e., ease of use in secondary analyses), and immutability (i.e., consistency of test scores, PVs, or measurement model parameters across different analyses and analysts). We show that although none of the methods is optimal under all criteria, methods that result in a single point estimate of each respondent's ability (i.e., all types of "test scores") are rarely optimal for research purposes. Instead, approaches that avoid or correct for measurement error, especially PV methodology, stand out as the method of choice. We conclude with practical recommendations for secondary analysts and data-producing organizations.
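Working with plausible values in a secondary analysis typically means running the analysis once per PV and pooling the results with Rubin's rules (point estimate = mean of the per-PV estimates; total variance = within-imputation variance plus (1 + 1/M) times the between-imputation variance). A sketch with hypothetical numbers:

```python
# Pool M per-PV estimates of one analysis quantity via Rubin's rules.

def pool_rubin(estimates, variances):
    """Pool per-PV estimates and their sampling variances (Rubin's rules)."""
    m = len(estimates)
    qbar = sum(estimates) / m                              # pooled estimate
    w = sum(variances) / m                                 # within variance
    b = sum((q - qbar) ** 2 for q in estimates) / (m - 1)  # between variance
    total = w + (1 + 1 / m) * b
    return qbar, total

# Suppose a regression slope was estimated once per plausible value
# (hypothetical per-PV estimates and sampling variances):
slopes    = [0.42, 0.45, 0.40, 0.47, 0.44]
variances = [0.010, 0.011, 0.009, 0.010, 0.012]
est, var = pool_rubin(slopes, variances)
print(round(est, 3), round(var, 4))
```

The between-PV term is what propagates measurement uncertainty into the final standard error, which is exactly what a single point-estimate "test score" discards.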


Energies ◽  
2021 ◽  
Vol 14 (15) ◽  
pp. 4638
Author(s):  
Simon Pratschner ◽  
Pavel Skopec ◽  
Jan Hrdlicka ◽  
Franz Winter

A transformation of the global energy industry is indispensable for solving the climate crisis. However, renewable energy sources typically show significant seasonal and daily fluctuations. This paper provides a system concept model of a decentralized power-to-green-methanol plant consisting of a biomass heating plant with a thermal input of 20 MWth (oxyfuel or air mode), a CO2 processing unit (DeOxo reactor or MEA absorption), an alkaline electrolyzer, a methanol synthesis unit, an air separation unit, and a wind park. Applying oxyfuel combustion has the potential to directly utilize the O2 generated by the electrolyzer, which was analyzed by varying critical model parameters. A major objective was to determine whether applying oxyfuel combustion has a positive impact on the plant's power-to-liquid (PtL) efficiency. For cases utilizing more than 70% of the CO2 generated by the combustion, the oxyfuel O2 demand is fully covered by the electrolyzer, making oxyfuel a viable option for large-scale applications. Conventional air combustion is recommended for small wind parks and scenarios using surplus electricity. Maximum PtL efficiencies of ηPtL,Oxy = 51.91% and ηPtL,Air = 54.21% can be realized. Additionally, a case study for one year of operation has been conducted, yielding an annual output of about 17,000 t/a methanol and 100 GWhth/a thermal energy for an input of 50,500 t/a woodchips and a wind park size of 36 MWp.
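The PtL efficiency quoted above is an energy ratio: chemical energy in the methanol product over total energy input. A back-of-the-envelope sketch; the methanol tonnage is the order of magnitude from the case study, but the total energy input below is a hypothetical figure chosen purely for illustration, not the paper's balance:

```python
# PtL efficiency = methanol LHV output / total energy input.

LHV_MEOH = 19.9          # MJ/kg, lower heating value of methanol
MEOH_T_PER_A = 17_000    # t/a product (order of magnitude from the study)

def ptl_efficiency(meoh_t_per_a, input_gwh_per_a):
    """Ratio of methanol chemical energy to total energy input."""
    # t/a -> kg/a -> MJ/a -> MWh/a -> GWh/a
    out_gwh = meoh_t_per_a * 1000 * LHV_MEOH / 3600 / 1000
    return out_gwh / input_gwh_per_a

# Hypothetical total annual input (electricity + biomass), for illustration.
eta = ptl_efficiency(MEOH_T_PER_A, input_gwh_per_a=180.0)
print(round(eta, 3))
```
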


2016 ◽  
Vol 39 (4) ◽  
pp. 579-588 ◽  
Author(s):  
Yulong Huang ◽  
Yonggang Zhang ◽  
Ning Li ◽  
Lin Zhao

In this paper, a theoretical comparison between the existing sigma-point information filter (SPIF) framework and the unscented information filter (UIF) framework is presented. It is shown that the SPIF framework is identical to the sigma-point Kalman filter (SPKF). However, the UIF framework is not identical to the classical SPKF because it neglects the one-step prediction errors of measurements in the calculation of the state estimation error covariance matrix. Thus, the SPIF framework is more reasonable than the UIF framework. Based on this theoretical comparison, an improved cubature information filter (CIF) is derived from the superior SPIF framework. A square-root CIF (SRCIF) is also developed to improve the numerical accuracy and stability of the proposed CIF. The proposed SRCIF is applied to a target tracking problem with a large sampling interval and a high turn rate, and its performance is compared with the existing SRCIF. The results show that the proposed SRCIF is more reliable and stable than the existing SRCIF. Note that information filters remain impractical in large-scale applications due to the enormous computational complexity of large-scale matrix inversion; advanced techniques need to be further considered.

