A Bayesian sequential updating approach to predict phenology of silage maize

2021 ◽  
Author(s):  
Michelle Viswanathan ◽  
Tobias K. D. Weber ◽  
Sebastian Gayler ◽  
Juliane Mai ◽  
Thilo Streck

Abstract. Crop models are tools used for predicting year-to-year crop development at field to regional scales. However, robust predictions are hampered by factors such as uncertainty in crop model parameters and in the data used for calibration. Bayesian calibration allows for the estimation of model parameters and quantification of uncertainties, with the consideration of prior information. In this study, we used a Bayesian sequential updating (BSU) approach to progressively incorporate additional data at a yearly time step to calibrate a phenology model (SPASS) while analysing changes in parameter uncertainty and prediction quality. We used field measurements of silage maize grown between 2010 and 2016 in the regions of Kraichgau and Swabian Alb in southwestern Germany. Parameter uncertainty and model prediction errors were expected to progressively decrease to a final, irreducible value. Parameter uncertainty decreased as expected with the sequential updates. For two sequences using synthetic data, one in which the model was able to accurately simulate the observations, and the other in which a single cultivar was grown under the same environmental conditions, prediction error mostly decreased. However, in the true sequences that followed the actual chronological order of cultivation by the farmers in the two regions, prediction error increased when the calibration data were not representative of the validation data. This could be explained by differences in ripening group and in temperature conditions during vegetative growth. With implications for manual and automatic data streams and model updating, our study highlights that the success of Bayesian methods for prediction depends on a comprehensive understanding of the inherent structure of the observation data and of model limitations.
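The sequential-updating idea can be illustrated with a minimal conjugate Normal-Normal sketch, in which each year's posterior becomes the next year's prior. This is not the SPASS calibration itself; the yearly observation values, the prior, and the observation variance below are all hypothetical:

```python
# Bayesian sequential updating (BSU) sketch: the posterior over a scalar
# parameter after year t serves as the prior for year t+1.
# Conjugate Normal-Normal model with known observation variance.

def bayes_update(prior_mean, prior_var, obs, obs_var):
    """One Bayesian update; returns posterior mean and variance."""
    post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
    post_mean = post_var * (prior_mean / prior_var + obs / obs_var)
    return post_mean, post_var

# Hypothetical yearly observations of an effective phenology parameter
yearly_obs = [9.2, 8.8, 9.5, 9.1, 9.0]

mean, var = 8.0, 4.0  # broad initial prior
for obs in yearly_obs:
    mean, var = bayes_update(mean, var, obs, obs_var=1.0)
    # parameter uncertainty (var) shrinks with every added year

print(round(mean, 2), round(var, 2))
```

In this conjugate sketch the posterior variance shrinks monotonically with each update; the abstract's point is that prediction error need not shrink the same way when newly added data are unrepresentative.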

1996 ◽  
Vol 33 (2) ◽  
pp. 79-90 ◽  
Author(s):  
Jian Hua Lei ◽  
Wolfgang Schilling

Physically based urban rainfall-runoff models are mostly applied without parameter calibration. Given some preliminary estimates of the uncertainty of the model parameters, the associated model output uncertainty can be calculated. Monte-Carlo simulation followed by multi-linear regression is used for this analysis. The calculated model output uncertainty can be compared to the uncertainty estimated by comparing model output and observed data. Based on this comparison, systematic or spurious errors can be detected in the observation data, the validity of the model structure can be confirmed, and the most sensitive parameters can be identified. If the calculated model output uncertainty is unacceptably large, the most sensitive parameters should be calibrated to reduce the uncertainty. Observation data for which systematic and/or spurious errors have been detected should be discarded from the calibration data. This procedure is referred to as preliminary uncertainty analysis; it is illustrated with an example. The HYSTEM program is applied to predict the runoff volume from an experimental catchment with a total area of 68 ha and an impervious area of 20 ha. Based on the preliminary uncertainty analysis, for 7 of 10 events the measured runoff volume is within the calculated uncertainty range, i.e. less than or equal to the calculated model predictive uncertainty. The remaining 3 events most likely include systematic or spurious errors in the observation data (either in the rainfall or the runoff measurements). These events are then discarded from further analysis. After calibrating the model, the predictive uncertainty of the model is estimated.
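The Monte-Carlo-plus-multi-linear-regression step can be sketched as follows. The runoff formula, the parameter ranges and the sample size are hypothetical stand-ins, not HYSTEM; the point is only the workflow of sampling parameters and ranking their sensitivity by standardized regression coefficients:

```python
import numpy as np

rng = np.random.default_rng(1)

def runoff_volume(c_imp, loss, rainfall=30.0, area_ha=68.0):
    """Toy runoff model: volume (m^3) from rainfall depth (mm),
    impervious fraction c_imp and initial loss (mm)."""
    return np.maximum(rainfall - loss, 0.0) * c_imp * area_ha * 10.0

# Monte-Carlo: sample the parameters from preliminary uncertainty ranges
c_imp = rng.uniform(0.25, 0.35, 2000)  # impervious fraction
loss = rng.uniform(1.0, 4.0, 2000)     # initial loss (mm)
q = runoff_volume(c_imp, loss)

# Multi-linear regression of model output on the sampled parameters;
# standardized coefficients rank parameter sensitivity
X = np.column_stack([np.ones_like(q), c_imp, loss])
beta, *_ = np.linalg.lstsq(X, q, rcond=None)
std_coef = beta[1:] * np.array([c_imp.std(), loss.std()]) / q.std()
print(std_coef)  # larger |coefficient| -> more sensitive parameter
```

The spread of `q` gives the calculated output uncertainty band; the standardized coefficients identify which parameter is worth calibrating first.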


2016 ◽  
Vol 20 (5) ◽  
pp. 1925-1946 ◽  
Author(s):  
Nikolaj Kruse Christensen ◽  
Steen Christensen ◽  
Ty Paul A. Ferre

Abstract. Although geophysics is used increasingly, it is often unclear how and when the integration of geophysical data and models can best improve the construction and predictive capability of groundwater models. This paper uses a newly developed HYdrogeophysical TEst-Bench (HYTEB), a collection of geological, groundwater and geophysical modeling and inversion software, to demonstrate alternative uses of electromagnetic (EM) data for groundwater modeling in a hydrogeological environment consisting of various types of glacial deposits, with typical hydraulic conductivities and electrical resistivities, covering impermeable bedrock with low resistivity (clay). The synthetic 3-D reference system is designed so that there is a perfect relationship between hydraulic conductivity and electrical resistivity. For this system it is investigated to what extent groundwater model calibration and, often more importantly, model predictions can be improved by including in the calibration process electrical resistivity estimates obtained from TEM data. In all calibration cases, the hydraulic conductivity field is highly parameterized and the estimation is stabilized by (in most cases) geophysics-based regularization. For the studied system and inversion approaches it is found that resistivities estimated by sequential hydrogeophysical inversion (SHI) or joint hydrogeophysical inversion (JHI) should be used with caution as estimators of hydraulic conductivity or as regularization means for subsequent hydrological inversion. The limited groundwater model improvement obtained by using the geophysical data probably arises mainly from the way these data are used here: the alternative inversion approaches propagate geophysical estimation errors into the hydrologic model parameters. It was expected that JHI would compensate for this, but the hydrologic data were apparently insufficient to secure such compensation.
With respect to reducing model prediction error, it depends on the type of prediction whether it has value to include geophysics in a joint or sequential hydrogeophysical model calibration. It is found that all calibrated models are good predictors of hydraulic head. When the stress situation is changed from that of the hydrologic calibration data, then all models make biased predictions of head change. All calibrated models turn out to be very poor predictors of the pumping well's recharge area and groundwater age. The reason for this is that distributed recharge is parameterized as depending on estimated hydraulic conductivity of the upper model layer, which tends to be underestimated. Another important insight from our analysis is thus that either recharge should be parameterized and estimated in a different way, or other types of data should be added to better constrain the recharge estimates.
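The mechanism by which geophysical estimation errors propagate into hydrologic parameters can be sketched with a hypothetical log-linear petrophysical relationship between electrical resistivity and hydraulic conductivity. The functional form and coefficients below are illustrative only, not the petrophysical model of the study:

```python
import math

# A hypothetical petrophysical link between electrical resistivity
# rho (ohm-m) and hydraulic conductivity K (m/s):
#   log10 K = a + b * log10(rho)
# Coefficients a, b are illustrative assumptions.
def resistivity_to_K(rho, a=-8.0, b=1.5):
    return 10.0 ** (a + b * math.log10(rho))

rho_true, rho_est = 80.0, 60.0     # a 25 % underestimate of resistivity
K_true = resistivity_to_K(rho_true)
K_est = resistivity_to_K(rho_est)

# the power-law link amplifies the relative error (exponent b > 1)
rel_err_rho = abs(rho_est - rho_true) / rho_true
rel_err_K = abs(K_est - K_true) / K_true
print(round(rel_err_rho, 2), round(rel_err_K, 2))
```

With an exponent above one, a given relative error in the geophysical estimate produces a larger relative error in the derived hydraulic conductivity, which is the propagation effect the abstract describes.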


2015 ◽  
Vol 12 (9) ◽  
pp. 9599-9653 ◽  
Author(s):  
N. K. Christensen ◽  
S. Christensen ◽  
T. P. A. Ferre

Abstract. Although geophysics is being used increasingly, it is still unclear how and when the integration of geophysical data improves the construction and predictive capability of groundwater models. Therefore, this paper presents a newly developed HYdrogeophysical TEst-Bench (HYTEB), a collection of geological, groundwater and geophysical modeling and inversion software wrapped into a platform for generating and considering multi-modal data for objective hydrologic analysis. It is intentionally flexible to allow for simple or sophisticated treatments of geophysical responses, hydrologic processes, parameterization, and inversion approaches. It can also be used to discover potential errors that can be introduced through petrophysical models and approaches to correlating geophysical and hydrologic parameters. With HYTEB we study alternative uses of electromagnetic (EM) data for groundwater modeling in a hydrogeological environment consisting of various types of glacial deposits, with typical hydraulic conductivities and electrical resistivities, covering impermeable bedrock with low resistivity. It is investigated to what extent groundwater model calibration and, often more importantly, model predictions can be improved by including in the calibration process electrical resistivity estimates obtained from TEM data. In all calibration cases, the hydraulic conductivity field is highly parameterized and the estimation is stabilized by regularization. For purely hydrologic inversion (HI, using only hydrologic data) we used Tikhonov regularization combined with singular value decomposition. For joint hydrogeophysical inversion (JHI) and sequential hydrogeophysical inversion (SHI) the resistivity estimates from TEM are used together with a petrophysical relationship to formulate the regularization term. In all cases the regularization stabilizes the inversion, but neither the HI nor the JHI objective function could be minimized uniquely.
SHI or JHI with regularization based on the use of TEM data produced estimated hydraulic conductivity fields that bear more resemblance to the reference fields than HI with Tikhonov regularization does. However, for the studied system the resistivities estimated by SHI or JHI must be used with caution as estimators of hydraulic conductivity or as regularization means for subsequent hydrological inversion. Much of the lack of value of the geophysical data arises from a mistaken faith in the power of the petrophysical model in combination with geophysical data of low sensitivity, thereby propagating geophysical estimation errors into the hydrologic model parameters. With respect to reducing model prediction error, whether it has value to include geophysical data in the model calibration depends on the type of prediction. It is found that all calibrated models are good predictors of hydraulic head. When the stress situation is changed from that of the hydrologic calibration data, all models make biased predictions of head change. All calibrated models turn out to be very poor predictors of the pumping well's recharge area and groundwater age. The reason for this is that distributed recharge is parameterized as depending on the estimated hydraulic conductivity of the upper model layer, which tends to be underestimated. Another important insight from the HYTEB analysis is thus that either recharge should be parameterized and estimated in a different way, or other types of data should be added to better constrain the recharge estimates.
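The stabilizing effect of Tikhonov regularization combined with singular value decomposition can be sketched on a generic ill-posed toy inverse problem with a rapidly decaying singular spectrum. This is a textbook-style illustration, not the HYTEB implementation; the operator, noise level and regularization strength are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ill-posed toy inverse problem d = G m + noise, mimicking a highly
# parameterized, poorly constrained parameter-field estimation.
n = 30
s_decay = 1.0 / np.arange(1, n + 1) ** 2          # decaying spectrum
Q1, _ = np.linalg.qr(rng.normal(size=(n, n)))
Q2, _ = np.linalg.qr(rng.normal(size=(n, n)))
G = Q1 @ np.diag(s_decay) @ Q2.T

m_true = np.sin(np.linspace(0.0, np.pi, n))
d = G @ m_true + 1e-2 * rng.normal(size=n)

# Tikhonov regularization via SVD: filter factors s/(s^2 + alpha^2)
# damp the noise-amplifying small singular values
U, s, Vt = np.linalg.svd(G)
alpha = 3e-2                                      # regularization strength
m_reg = Vt.T @ ((s / (s ** 2 + alpha ** 2)) * (U.T @ d))

# the unregularized inverse amplifies data noise by up to 1/s_min
m_naive = np.linalg.solve(G, d)
err_reg = np.linalg.norm(m_reg - m_true)
err_naive = np.linalg.norm(m_naive - m_true)
print(err_reg < err_naive)
```

The filter factors leave well-resolved directions almost untouched and suppress the poorly resolved ones, which is exactly the stabilization role regularization plays in the abstract's inversions.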


2012 ◽  
Vol 16 (2) ◽  
pp. 603-629 ◽  
Author(s):  
T. Krauße ◽  
J. Cullmann

Abstract. The development of methods for estimating the parameters of hydrologic models while considering uncertainties has been of high interest in hydrologic research in recent years. In particular, methods that treat the estimation of hydrologic model parameters as a geometric search for a set of robustly performing parameter vectors, by applying the concept of data depth, have attracted growing research interest. Bárdossy and Singh (2008) presented a first Robust Parameter Estimation method (ROPE) and applied it to the calibration of a conceptual rainfall-runoff model with a daily time step. The basic idea of this algorithm is to identify a set of model parameter vectors with high model performance, called good parameters, and subsequently generate a set of parameter vectors with high data depth with respect to the first set. Both steps are repeated iteratively until a stopping criterion is met. The results estimated in this case study show the high potential of the principle of data depth for estimating hydrologic model parameters. In this paper we present some further developments that address the most important shortcomings of the original ROPE approach. We developed a stratified depth-based sampling approach that improves the sampling from non-elliptic and multi-modal distributions. It provides a higher efficiency for the sampling of deep points in parameter spaces of higher dimensionality. Another modification addresses the problem of an overly strong shrinking of the estimated set of robust parameter vectors, which might lead to overfitting when the model is calibrated with a small amount of calibration data. This contradicts the principle of robustness. Therefore, we suggest splitting the available calibration data into two sets and using one set to control the overfitting. All modifications were implemented in a further developed ROPE approach called Advanced Robust Parameter Estimation (AROPE).
However, in this approach the estimation of the good parameters is still based on an inefficient Monte Carlo approach. Therefore, we developed another approach, called ROPE with Particle Swarm Optimisation (ROPE-PSO), which substitutes the Monte Carlo approach with a more effective and efficient approach based on Particle Swarm Optimisation. Two case studies demonstrate the improvements of the developed algorithms when compared with the first ROPE approach and two other classical optimisation approaches, calibrating a process-oriented hydrologic model with an hourly time step. The focus of both case studies is on modelling flood events in a small catchment characterised by extreme process dynamics. The calibration problem was repeated with higher dimensionality, considering the uncertainty in the soil hydraulic parameters and another conceptual parameter of the soil module. We discuss the estimated results and propose further possibilities for applying ROPE as a well-founded parameter estimation and uncertainty analysis tool.
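The data-depth notion underlying ROPE can be illustrated with a Monte-Carlo approximation of Tukey half-space depth: a parameter vector deep inside the set of good parameters gets a high depth, an outlying one a depth near zero. The 2-D parameter cloud and the candidate points below are hypothetical, and this is only the depth computation, not the full iterative ROPE algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

def halfspace_depth(point, cloud, n_dirs=500):
    """Approximate Tukey half-space depth of `point` with respect to
    `cloud` using random projection directions (Monte-Carlo)."""
    dirs = rng.normal(size=(n_dirs, cloud.shape[1]))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    proj_cloud = cloud @ dirs.T        # (n_points, n_dirs)
    proj_point = dirs @ point          # (n_dirs,)
    below = (proj_cloud <= proj_point).mean(axis=0)
    # depth = smallest fraction of the cloud on one side of a hyperplane
    return float(np.minimum(below, 1.0 - below).min())

# Hypothetical set of "good" parameter vectors in a 2-D parameter space
good = rng.normal(size=(300, 2))

depth_center = halfspace_depth(np.zeros(2), good)
depth_outlier = halfspace_depth(np.array([3.0, 3.0]), good)
print(depth_center, depth_outlier)  # central candidates score higher
```

ROPE then preferentially keeps candidates like the first one: vectors that are geometrically central with respect to the good-parameter set and hence robust.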


2016 ◽  
Vol 27 (20) ◽  
pp. 2721-2743 ◽  
Author(s):  
Søren Enemark ◽  
Ilmar F Santos ◽  
Marcelo A Savi

The thermo-mechanical behaviour of pseudoelastic shape memory alloy helical springs is investigated, with a focus on stabilised and cyclic responses. The constitutive description of the shape memory alloy is based on the framework developed by Lagoudas and co-workers, incorporating two modifications related to hardening and to sub-loop functions described by Bézier curves. The spring model takes into account both bending and torsion of the spring wire, thus representing geometrical non-linearities. Simplified models are explored, showing that a single point in the wire cross-section is enough to represent the global spring behaviour in spite of the complex stress–strain distributions. The experiments are carried out considering different deflection amplitudes, frequencies and ambient temperatures, which influence the spring behaviour to different extents. The model is fitted against a calibration data set, resulting in a residual standard deviation of 1.3% relative to the full range of force. Compared to the validation data set, the errors are below 10% relative to the full range of the complex modulus. Uncertainty analysis of the model parameters using a Markov chain Monte Carlo technique shows low to high parameter correlation, and the relative uncertainties are less than ±12%. Both the heat capacity and the convection coefficient are clearly identifiable from the performed experiments.
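The Markov chain Monte Carlo parameter-uncertainty step can be sketched with a minimal Metropolis sampler for a single stiffness-like parameter given noisy force "measurements". The linear force model, the data and the priors are hypothetical stand-ins, not the paper's spring model:

```python
import math
import random

random.seed(0)

# Synthetic data: force = k * deflection + noise, with k_true = 2.0
k_true, noise = 2.0, 0.2
xs = [0.1 * i for i in range(1, 21)]                  # deflections
forces = [k_true * x + random.gauss(0.0, noise) for x in xs]

def log_post(k):
    """Flat prior on k > 0, Gaussian likelihood with known noise."""
    if k <= 0:
        return -math.inf
    return -sum((f - k * x) ** 2 for f, x in zip(forces, xs)) / (2 * noise ** 2)

# Metropolis random-walk sampling of the posterior over k
chain, k = [], 1.0
for _ in range(5000):
    prop = k + random.gauss(0.0, 0.05)
    if math.log(random.random()) < log_post(prop) - log_post(k):
        k = prop
    chain.append(k)

post = chain[1000:]                 # discard burn-in
mean_k = sum(post) / len(post)
print(round(mean_k, 2))             # posterior mean near k_true
```

The spread of `post` plays the role of the relative parameter uncertainties quoted in the abstract; the chain's autocorrelation indicates how well the parameter is identified by the data.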


Energies ◽  
2021 ◽  
Vol 14 (17) ◽  
pp. 5561
Author(s):  
Sergey Obukhov ◽  
Ahmed Ibrahim ◽  
Denis Y. Davydov ◽  
Talal Alharbi ◽  
Emad M. Ahmed ◽  
...  

The primary task of the design and feasibility study for a wind power plant is to predict changes in wind speed at the site of the power system installation. The stochastic nature of wind and its spatio-temporal variability make this a highly complex problem of finding a suitable mathematical model. Known discrete models based on Markov chains and autoregressive-moving-average processes do not allow the time step to be varied, which prevents their use for simulating the operating modes of wind turbines and wind energy systems. The article proposes and tests an SDE-based model for generating synthetic wind speed data using the stochastic differential equation of the fractional Ornstein-Uhlenbeck process with a periodic long-run mean function. The model generates wind speed trajectories with a given autocorrelation and the required statistical distribution, and it incorporates daily and seasonal variations. Compared to the standard Ornstein-Uhlenbeck process driven by ordinary Brownian motion, the fractional model used in this study generates synthetic wind speed trajectories whose autocorrelation function decays according to a power law, which more closely matches the hourly autocorrelation of actual data. In order to demonstrate the capabilities of this model, a number of simulations were carried out using model parameters estimated from actual wind speed observation data collected at 518 weather stations located throughout Russia.
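The standard Ornstein-Uhlenbeck baseline mentioned in the abstract, with a periodic long-run mean for the diurnal cycle, can be simulated with a simple Euler-Maruyama scheme. This is the ordinary-Brownian-motion baseline only (the fractional model would replace the increments dW with fractional ones), and the parameter values are hypothetical:

```python
import math
import random

random.seed(42)

# Euler-Maruyama simulation of an Ornstein-Uhlenbeck process
#   dv = theta * (mu(t) - v) dt + sigma dW
# with a periodic long-run mean mu(t) modelling a diurnal wind cycle.
theta, sigma = 0.5, 1.2      # mean-reversion rate (1/h), volatility
dt, n_steps = 0.1, 2400      # time step (h), ~10 days

def mu(t):
    """Periodic long-run mean: 8 m/s base wind with a diurnal cycle."""
    return 8.0 + 2.0 * math.sin(2.0 * math.pi * t / 24.0)

v = [mu(0.0)]
for k in range(n_steps):
    t = k * dt
    dW = random.gauss(0.0, math.sqrt(dt))     # Brownian increment
    v.append(v[-1] + theta * (mu(t) - v[-1]) * dt + sigma * dW)

v = [max(x, 0.0) for x in v]  # wind speed cannot be negative
mean_v = sum(v) / len(v)
print(round(mean_v, 2))       # fluctuates around the 8 m/s base level
```

The exponential autocorrelation decay of this baseline is what the fractional version improves on, yielding the power-law decay that better matches hourly wind data.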


2011 ◽  
Vol 66-68 ◽  
pp. 268-272
Author(s):  
Gui Yun Yan ◽  
Zheng Zhang

This paper presents a predictive control strategy for seismic protection of a benchmark cable-stayed bridge with consideration of multiple-support excitations. In this active control strategy, a multi-step predictive model is built to estimate the seismic dynamics of the cable-stayed bridge, and the effects of complicating factors, such as time-varying behaviour, model mismatch, disturbances and uncertainty of the controlled system, are taken into account through prediction-error feedback in the multi-step predictive model. The prediction error is obtained by comparing the actual system output with the model prediction at each time step. Numerical simulation is carried out to analyse the seismic responses of the controlled cable-stayed bridge, and the results show that the developed predictive control strategy can efficiently reduce the seismic response of the benchmark cable-stayed bridge.
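The prediction-error feedback mechanism can be sketched with a scalar toy system: at each step the measured output is compared with the model's one-step prediction, and the mismatch corrects the next prediction. The "plant" and the deliberately mismatched model below are hypothetical, not the bridge dynamics:

```python
# Prediction-error feedback sketch: the mismatch between measured output
# and model prediction at each time step is fed back to correct the
# next prediction, compensating unmodelled effects (here a constant bias).

def plant(x, u):
    return 0.9 * x + 0.5 * u + 0.3   # true dynamics, unmodelled bias 0.3

def model(x, u):
    return 0.9 * x + 0.5 * u         # nominal prediction model

x, err = 1.0, 0.0
errors_nom, errors_fb = [], []
for _ in range(50):
    u = -0.2 * x                      # simple feedback control input
    pred_nom = model(x, u)            # uncorrected prediction
    pred_fb = pred_nom + err          # corrected by last prediction error
    x_next = plant(x, u)
    errors_nom.append(abs(x_next - pred_nom))
    errors_fb.append(abs(x_next - pred_fb))
    err = x_next - pred_nom           # prediction-error feedback term
    x = x_next

print(errors_nom[-1], errors_fb[-1])  # feedback removes the constant bias
```

For the constant mismatch used here, one step of error feedback is enough to make the corrected prediction exact; for time-varying mismatch the correction tracks it with a one-step lag.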


1997 ◽  
Vol 77 (3) ◽  
pp. 333-344 ◽  
Author(s):  
M. I. Sheppard ◽  
D. E. Elrick ◽  
S. R. Peterson

The nuclear industry uses computer models to calculate and assess the impact of its present and future releases to the environment, both from operating reactors and from existing licensed and planned waste management facilities. We review four soil models varying in complexity that could be useful for environmental impact assessment. The goal of this comparison is to direct the combined use of these models in order to preserve simplicity, yet increase the rigor of Canadian environmental assessment calculations involving soil transport pathways. The four models chosen are: the Soil Chemical Exchange and Migration of Radionuclides (SCEMR1) model; the Baes and Sharp/Preclosure PREAC soil model, both used in Canada's nuclear fuel waste management program; the Convection-Dispersion Equation (CDE) model, commonly used in contaminant transport applications; and the Canadian Standards Association (CSA) derived release limit model used for normal operations at nuclear facilities. We discuss how each model operates, its time-step and depth-increment options, and its limitations. Major model assumptions are discussed and the performance of these models is compared quantitatively for a scenario involving surface deposition or irrigation. A sensitivity analysis of the CDE model illustrates the influence of the important model parameters: the amount of infiltrating water, V; the hydrodynamic dispersion coefficient, D; and the soil retention or partition coefficient, Kd. The important parameters in the other models are also identified. This work shows that we need tested, robust, mechanistic unsaturated soil models with easily understood and measurable inputs, including data for the sensitive or important model parameters for Canada's priority contaminants.
Soil scientists need to assist industry and its regulators by recommending a selection of models and supporting them with the provision of validation data to ensure high-quality environmental risk assessments are carried out in Canada. Key words: Soil transport models, environmental impact assessments, model structure, complexity and performance, radionuclides 137Cs, 90Sr, 129I
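The CDE model with the parameters named in the abstract (infiltrating water giving the velocity, dispersion coefficient D, and partition coefficient Kd entering through a retardation factor) can be sketched with an explicit finite-difference solver. The grid, time step and parameter values are hypothetical, and this is a generic 1-D illustration, not the reviewed code:

```python
# Explicit finite-difference sketch of the 1-D convection-dispersion
# equation with linear sorption:
#   R dC/dt = D d2C/dz2 - v dC/dz,   R = 1 + rho_b * Kd / theta
# Parameter values and grid are hypothetical.
import numpy as np

v = 0.5        # pore-water velocity (cm/day), set by infiltrating water V
D = 0.2        # hydrodynamic dispersion coefficient (cm^2/day)
Kd = 1.0       # partition coefficient (cm^3/g)
rho_b, theta = 1.3, 0.4          # bulk density, water content
R = 1.0 + rho_b * Kd / theta     # retardation factor

dz, dt = 0.5, 0.1                # stable: D*dt/dz^2 << 1, v*dt/dz << 1
z = np.arange(0.0, 50.0, dz)
C = np.zeros_like(z)
C[0] = 1.0     # surface deposition as a constant-concentration boundary

for _ in range(int(100.0 / dt)):          # simulate 100 days
    adv = -v * (C[1:-1] - C[:-2]) / dz    # upwind advection
    disp = D * (C[2:] - 2.0 * C[1:-1] + C[:-2]) / dz ** 2
    C[1:-1] += dt * (adv + disp) / R
    C[0] = 1.0                            # hold the boundary

# sorption (Kd) retards the front, which sits near depth v * t / R
front = z[np.argmin(np.abs(C - 0.5))]
print(round(front, 1), round(v * 100.0 / R, 1))
```

Raising Kd increases R and slows the front, which is the qualitative sensitivity the abstract's analysis quantifies for V, D and Kd.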


Sensors ◽  
2021 ◽  
Vol 21 (16) ◽  
pp. 5549
Author(s):  
Ossi Kaltiokallio ◽  
Roland Hostettler ◽  
Hüseyin Yiğitler ◽  
Mikko Valkama

Received signal strength (RSS) changes of static wireless nodes can be used for device-free localization and tracking (DFLT). Most RSS-based DFLT systems require access to calibration data, either RSS measurements from a time period when the area was not occupied by people, or measurements while a person stands in known locations. Such calibration periods can be very expensive in terms of time and effort, making system deployment and maintenance challenging. This paper develops an Expectation-Maximization (EM) algorithm based on Gaussian smoothing for estimating the unknown RSS model parameters, liberating the system from supervised training and calibration periods. To fully use the EM algorithm’s potential, a novel localization-and-tracking system is presented to estimate a target’s arbitrary trajectory. To demonstrate the effectiveness of the proposed approach, it is shown that: (i) the system requires no calibration period; (ii) the EM algorithm improves the accuracy of existing DFLT methods; (iii) it is computationally very efficient; and (iv) the system outperforms a state-of-the-art adaptive DFLT system in terms of tracking accuracy.
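The EM-with-Gaussian-smoothing idea can be sketched on a minimal linear-Gaussian state-space model: a Kalman filter / RTS smoother E-step and a closed-form M-step for an unknown noise parameter. The scalar random-walk model and all values are hypothetical stand-ins for illustration, not the paper's RSS measurement model:

```python
import numpy as np

rng = np.random.default_rng(0)

# EM sketch: estimate an unknown measurement-noise variance R without
# any calibration data, alternating Gaussian smoothing (E-step) with a
# closed-form variance update (M-step).
Q = 0.01                                          # known process noise
x = np.cumsum(rng.normal(0.0, np.sqrt(Q), 200))   # latent trajectory
y = x + rng.normal(0.0, 0.5, 200)                 # true meas. var = 0.25

R = 4.0                                           # poor initial guess
for _ in range(50):
    # E-step, forward pass: Kalman filter
    m, P = 0.0, 1.0
    ms, Ps, mp, Pp = [], [], [], []
    for yk in y:
        m_pred, P_pred = m, P + Q
        K = P_pred / (P_pred + R)
        m = m_pred + K * (yk - m_pred)
        P = (1.0 - K) * P_pred
        ms.append(m); Ps.append(P); mp.append(m_pred); Pp.append(P_pred)
    # E-step, backward pass: Rauch-Tung-Striebel smoother
    sm, sP = ms[-1], Ps[-1]
    sms, sPs = [sm], [sP]
    for k in range(len(y) - 2, -1, -1):
        G = Ps[k] / Pp[k + 1]
        sm = ms[k] + G * (sm - mp[k + 1])
        sP = Ps[k] + G ** 2 * (sP - Pp[k + 1])
        sms.append(sm); sPs.append(sP)
    sms.reverse(); sPs.reverse()
    # M-step: closed-form update of the measurement-noise variance
    R = float(np.mean([(yk - mk) ** 2 + Pk
                       for yk, mk, Pk in zip(y, sms, sPs)]))

print(round(R, 3))  # moves from 4.0 toward the true value 0.25
```

This captures the paper's central point: the noise parameter is learned from the unlabelled measurement stream itself, so no supervised calibration period is needed.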


2021 ◽  
Vol 21 (10) ◽  
pp. 263
Author(s):  
Yun-Chuan Xiang ◽  
Ze-Jun Jiang ◽  
Yun-Yong Tang

Abstract In this work, we reanalyzed 11 years of spectral data from the Fermi Large Area Telescope (Fermi-LAT) for currently observed starburst galaxies (SBGs) and star-forming galaxies (SFGs). We used a one-zone model provided by NAIMA and a hadronic origin to explain the GeV observation data of the SBGs and SFGs. We found that a protonic distribution of power-law form with an exponential cutoff can explain the spectra of most SBGs and SFGs. However, it cannot explain the spectral hardening components of NGC 1068 and NGC 4945 in the GeV energy band. Therefore, we considered a two-zone model to explain these phenomena. We summarized the features of the two models' parameters, including the spectral index, cutoff energy, and proton energy budget. Similar to the evolution of supernova remnants (SNRs) in the Milky Way, we estimated the protonic acceleration limit inside the SBGs to be on the order of 10^2 TeV using the one-zone model; this is close to that of SNRs in the Milky Way.
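The protonic distribution described in the abstract, a power law with an exponential cutoff, can be written out directly; the normalization, spectral index and cutoff energy below are hypothetical placeholders, not the fitted values:

```python
import math

def proton_spectrum(E, norm=1.0, index=2.2, E_cut=100.0):
    """Power law with exponential cutoff:
    dN/dE ~ E^-index * exp(-E / E_cut), energies in TeV.
    norm, index and E_cut are illustrative assumptions."""
    return norm * E ** (-index) * math.exp(-E / E_cut)

# the exponential factor steepens the spectrum beyond E_cut ~ 10^2 TeV,
# the order of the estimated protonic acceleration limit in the SBGs
ratio_below = proton_spectrum(10.0) / proton_spectrum(1.0)      # 1->10 TeV
ratio_above = proton_spectrum(1000.0) / proton_spectrum(100.0)  # 100->1000 TeV
print(ratio_above < ratio_below)
```

Below the cutoff the spectrum falls by roughly the same factor per decade; above it the exponential term makes each decade drop far more steeply, which is what distinguishes a cutoff spectrum from a pure power law.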

