A Comparison of Two Peanut Growth Models for Oklahoma

1988 ◽  
Vol 15 (1) ◽  
pp. 30-35 ◽  
Author(s):  
G. D. Grosz ◽  
R. L. Elliott ◽  
J. H. Young

Abstract Growth simulation models provide potential benefits in the study of peanut (Arachis hypogaea L.) production. Two physiologically based peanut simulation models of varying complexity were adapted and calibrated to simulate the growth and yield of Spanish peanut under Oklahoma conditions. Field data, including soil moisture measurements and sequential yield samples, were collected at four sites during the 1985 growing season. An automated weather station provided the necessary climatic data for the models. PNUTMOD, the simpler model originally developed for educational purposes, requires seven varietal input parameters in addition to temperature and solar radiation data. The seven model parameters were calibrated using data from two of the four field sites, and model performance was evaluated using the remaining two data sets. The more complex model, PEANUT, simulates individual plant physiological processes and utilizes a considerably larger set of input parameters. Since PEANUT was developed for the Virginia-type peanut, several input parameters required adjustment for the Spanish-type peanut grown in Oklahoma. PEANUT was calibrated using data from all four study sites. Both models performed well in simulating pod yield. PNUTMOD, which does not allow for leaf senescence, did not perform as well as PEANUT in predicting vegetative growth.

Forests ◽  
2021 ◽  
Vol 12 (4) ◽  
pp. 412 ◽  
Author(s):  
Ivan Bjelanovic ◽  
Phil Comeau ◽  
Sharon Meredith ◽  
Brian Roth

A few studies in young mixedwood stands demonstrate that precommercial thinning of aspen at early ages can improve the growth of spruce and increase stand resilience to drought. However, information on tree and stand responses to thinning in older mixedwood stands is lacking. To address this need, a study was initiated in 2008 in Alberta, Canada in 14 boreal mixedwood stands (seven each at ages 17 and 22). This study investigated growth responses following thinning of aspen to four densities (0, 1000, 2500, and 5000 stems ha−1), with an unthinned control. Measurements were collected in the year of establishment and three and eight years later. Mortality of aspen in the unthinned plots was greater than in the thinned plots, which did not differ significantly from one another. Eight years following treatment, aspen diameter was positively influenced by thinning, while there was no effect on aspen height. The density of aspen had no significant effect on the survival of planted spruce. Spruce height and diameter growth increased with both aspen thinning intensity and time since treatment. Differentiation among treatments in spruce diameter growth was evident three years after treatment, while differentiation in height was not significant until eight years after treatment. Yield projections using two growth models (Mixedwood Growth Model (MGM) and Growth and Yield Projection System (GYPSY)) were initialized using data from the year-eight re-measurements. Results indicate that heavy precommercial aspen thinning (to ~1000 aspen crop trees ha−1) can result in an increase in conifer merchantable volume without reducing aspen volume at the time of harvest. However, light to moderate thinning (to ~2500 aspen stems ha−1 or higher) is unlikely to result in gains in either deciduous or conifer merchantable harvest volume over those of unthinned stands.


2018 ◽  
Vol 22 (8) ◽  
pp. 4565-4581 ◽  
Author(s):  
Florian U. Jehn ◽  
Lutz Breuer ◽  
Tobias Houska ◽  
Konrad Bestian ◽  
Philipp Kraft

Abstract. The ambiguous representation of hydrological processes has led to the formulation of the multiple hypotheses approach in hydrological modeling, which requires new ways of model construction. However, most recent studies focus only on the comparison of predefined model structures or on building a model step by step. This study tackles the problem the other way around: we start with one complex model structure, which includes all processes deemed to be important for the catchment. Next, we create 13 additional simplified models, in which some of the processes from the starting structure are disabled. The performance of those models is evaluated using three objective functions (logarithmic Nash–Sutcliffe efficiency; percentage bias, PBIAS; and the ratio of the root mean square error to the standard deviation of the measured data). Through this incremental breakdown, we identify the most important processes and detect the restraining ones. This procedure allows the construction of a more streamlined 15th model with improved model performance, less uncertainty and higher model efficiency. We benchmark the original Model 1 and the final Model 15 against HBV Light. The final model is not able to outperform HBV Light, but we find that the incremental model breakdown leads to a structure with good model performance, fewer but more relevant processes and fewer model parameters.
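The three objective functions named in this abstract are standard goodness-of-fit measures. A minimal sketch of how they are typically computed (the function names, sign convention for PBIAS, and the synthetic arrays are illustrative, not taken from the study):

```python
import numpy as np

def log_nse(obs, sim, eps=1e-6):
    """Nash-Sutcliffe efficiency on log-transformed values (1 = perfect fit)."""
    lo, ls = np.log(obs + eps), np.log(sim + eps)
    return 1.0 - np.sum((lo - ls) ** 2) / np.sum((lo - lo.mean()) ** 2)

def pbias(obs, sim):
    """Percentage bias; 0 = unbiased (one common sign convention shown)."""
    return 100.0 * np.sum(obs - sim) / np.sum(obs)

def rsr(obs, sim):
    """RMSE divided by the standard deviation of the observations."""
    rmse = np.sqrt(np.mean((obs - sim) ** 2))
    return rmse / np.std(obs)

# Illustrative discharge series (arbitrary units)
obs = np.array([1.2, 0.9, 2.5, 3.1, 1.8])
sim = np.array([1.0, 1.1, 2.2, 3.4, 1.6])
print(log_nse(obs, sim), pbias(obs, sim), rsr(obs, sim))
```

A perfect simulation gives a logarithmic Nash–Sutcliffe of 1 and PBIAS and RSR of 0, which is a quick sanity check for any implementation of these metrics.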


2017 ◽  
Author(s):  
Florian U. Jehn ◽  
Lutz Breuer ◽  
Tobias Houska ◽  
Konrad Bestian ◽  
Philipp Kraft

Abstract. The ambiguous representation of hydrological processes has led to the formulation of the multiple hypotheses approach in hydrological modelling, which requires new ways of model construction. However, most recent studies focus only on the comparison of predefined model structures or on building a model step by step. This study tackles the problem the other way around: we start with one complex model structure, which includes all processes deemed to be important for the catchment. Next, we create 13 additional simplified models, in which some of the processes from the starting structure are disabled. The performance of those models is evaluated using three objective functions (logarithmic Nash–Sutcliffe efficiency, percentage bias and the ratio of the root mean square error to the standard deviation of the measured data). Through this incremental breakdown, we identify the most important processes and detect the restraining ones. This procedure allows the construction of a more streamlined 15th model with improved model performance, less uncertainty and higher model efficiency. We benchmark the original Model 1 against the final Model 15 and find that the incremental model breakdown leads to a structure with good model performance, fewer but more relevant processes and fewer model parameters.


2017 ◽  
Author(s):  
Iris Kriest

Abstract. The assessment of the ocean biota's role in climate change is often carried out with global biogeochemical ocean models that contain many components and involve a high level of parametric uncertainty. Examination of the models' fit to climatologies of inorganic tracers, after the models have been spun up to steady state, is a common but computationally expensive procedure to assess model performance and reliability. Using new tools that have become available for global model assessment and calibration in steady state, this paper examines two different model types – a complex seven-component model (MOPS) and a very simple four-component model (RetroMOPS) – for their fit to dissolved quantities. Before comparing the models, a subset of their biogeochemical parameters has been optimised against annual mean nutrients and oxygen. Both model types fit the observations almost equally well. The simple model, which contains only nutrients and dissolved organic phosphorus (DOP), is sensitive to the parameterisation of DOP production and decay. The spatio-temporal decoupling of nitrogen and oxygen, and processes involved in their uptake and release, renders oxygen and nitrate valuable tracers for model calibration. In addition, the non-conservative nature of these tracers (with respect to their upper boundary condition) introduces the global bias as a useful additional constraint on model parameters. Dissolved organic phosphorus at the surface behaves antagonistically to phosphate, which suggests that observations of this tracer – although difficult to measure – may be an important asset for model calibration.


2017 ◽  
Vol 18 (8) ◽  
pp. 2215-2225 ◽  
Author(s):  
Andrew J. Newman ◽  
Naoki Mizukami ◽  
Martyn P. Clark ◽  
Andrew W. Wood ◽  
Bart Nijssen ◽  
...  

Abstract The concepts of model benchmarking, model agility, and large-sample hydrology are becoming more prevalent in hydrologic and land surface modeling. As modeling systems become more sophisticated, these concepts can help improve modeling capabilities and understanding. In this paper, their utility is demonstrated with an application of the physically based Variable Infiltration Capacity (VIC) model. The authors implement VIC for a sample of 531 basins across the contiguous United States, incrementally increase model agility, and perform comparisons to a benchmark. The use of a large basin sample allows for statistically robust comparisons and subcategorization across hydroclimate conditions. The benchmark is a calibrated, time-stepping, conceptual hydrologic model. Because it is constrained by physical relationships such as the water balance, it complements purely statistical benchmarks through increased physical realism and permits physically motivated benchmarking using metrics that relate one variable to another (e.g., runoff ratio). The authors find that increasing model agility along the parameter dimension, as measured by the number of model parameters available for calibration, does increase model performance for calibration and validation periods relative to less agile implementations. However, as agility increases, transferability decreases, even for a complex model such as VIC. The benchmark outperforms VIC in even the most agile case when evaluated across the entire basin set. However, VIC meets or exceeds benchmark performance in basins with high runoff ratios (greater than ~0.8), highlighting the ability of large-sample comparative hydrology to identify hydroclimatic performance variations.
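The runoff ratio used here to subcategorize basins is a simple physically motivated metric: long-term runoff divided by long-term precipitation. A hedged sketch (function name and the daily series are illustrative, not from the study):

```python
import numpy as np

def runoff_ratio(runoff, precip):
    """Fraction of precipitation leaving the basin as streamflow over the record."""
    return np.sum(runoff) / np.sum(precip)

# Illustrative daily series (mm) for one basin
precip = np.array([5.0, 0.0, 12.0, 3.0, 8.0])
runoff = np.array([2.0, 1.0, 6.0, 2.5, 4.0])
print(runoff_ratio(runoff, precip))  # 15.5 / 28.0, roughly 0.55
```

Basins with ratios above ~0.8, where most precipitation becomes streamflow, are the hydroclimatic regime in which the abstract reports VIC matching or beating the conceptual benchmark.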


Author(s):  
Gernoth Götz ◽  
Oldrich Polach

This article presents an evaluation of the model validation method that was produced by the European research project DynoTRAIN and implemented in the recently revised European standard EN 14363. The input parameters of the validation method, namely the section length, the number and selection of sections, and the selected parameters of the simulation models, are varied. The evaluation shows that a single section with a large deviation between simulation and measurement can, in rare cases, influence the result of the overall validation. Nevertheless, the investigations demonstrate good robustness: the final validation result is very rarely influenced by the variation of the sections selected for validation, by the use of more sections than the minimum of 12, or by sections longer than those specified for on-track tests in accordance with EN 14363. The validation methodology is also able to recognize errors in vehicle model parameters, provided the errors have a relevant influence on the running dynamics of the evaluated vehicle.


2017 ◽  
Vol 140 (2) ◽  
Author(s):  
Anton v. Beek ◽  
Mian Li ◽  
Chao Ren

Simulation models are widely used to describe processes that would otherwise be arduous to analyze. However, many of these models merely provide an estimated response of the real system, as their input parameters are exposed to uncertainty or are partially excluded from the model due to the complexity of, or a lack of understanding of, the problem's physics. Accordingly, prediction accuracy can be improved by integrating physical observations into low-fidelity models, a process known as model calibration or model fusion. Typical model fusion techniques are essentially concerned with how to allocate information-rich data points to improve model accuracy. However, methods for extracting more information from already available data points have received little attention. In this paper we therefore acknowledge the dependence between the prior estimates of the input parameters and the actual input parameters. The proposed framework extracts the information contained in this relation to update the estimated input parameters and utilizes it in a model updating scheme to accurately approximate the real system outputs that are affected by all real input parameters (RIPs) of the problem. The proposed approach can effectively use limited experimental samples while maintaining prediction accuracy. It essentially tweaks model parameters to update the computer simulation model so that it matches a specific set of experimental results. The significance and applicability of the proposed method are illustrated through comparison with a conventional model calibration scheme using two engineering examples.
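Model calibration in the general sense described above (tuning a simulation's parameters so its output matches a set of experimental observations) can be sketched with a simple least-squares fit. This is not the paper's method, only a minimal conventional-calibration baseline; the model form, parameter names, and data below are hypothetical:

```python
import numpy as np

# Hypothetical low-fidelity model: response = a*x + b*x**2,
# with calibration parameters theta = (a, b) tuned against experiments.
def simulate(x, theta):
    a, b = theta
    return a * x + b * x ** 2

def calibrate(x, y_exp):
    """Least-squares estimate of (a, b) from experimental samples."""
    A = np.column_stack([x, x ** 2])  # design matrix; model is linear in (a, b)
    theta, *_ = np.linalg.lstsq(A, y_exp, rcond=None)
    return theta

x = np.array([0.5, 1.0, 1.5, 2.0])
y_exp = np.array([1.3, 3.1, 5.2, 7.9])  # illustrative "measurements"
theta = calibrate(x, y_exp)
print(theta, simulate(x, theta))
```

The framework in the abstract goes further by also exploiting the relation between the prior parameter estimates and the actual input parameters, which a plain fit like this ignores.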


1990 ◽  
Vol 20 (8) ◽  
pp. 1149-1155 ◽  
Author(s):  
Bahman Shafii ◽  
James A. Moore ◽  
James D. Newberry

Diameter-increment models for nitrogen-fertilized stands were developed using data from permanent research plots in northern Idaho. The equations partially resembled PROGNOSIS model diameter growth formulations. Results indicated that both initial tree size and initial stand density produced significant interactions with treatment to explain an individual tree's response to fertilization. Larger trees in a stand showed more fertilization response than smaller trees. Furthermore, individual trees in low-density stands showed more fertilization response than those growing in high-density stands. These diameter increment predictive equations were formulated to be compatible with individual-tree distance-independent simulation models.


2017 ◽  
Vol 14 (21) ◽  
pp. 4965-4984 ◽  
Author(s):  
Iris Kriest

Abstract. The assessment of the ocean biota's role in climate change is often carried out with global biogeochemical ocean models that contain many components and involve a high level of parametric uncertainty. Because many data that relate to tracers included in a model are only sparsely observed, assessment of model skill is often restricted to tracers that can be easily measured and assembled. Examination of the models' fit to climatologies of inorganic tracers, after the models have been spun up to steady state, is a common but computationally expensive procedure to assess model performance and reliability. Using new tools that have become available for global model assessment and calibration in steady state, this paper examines two different model types – a complex seven-component model (MOPS) and a very simple four-component model (RetroMOPS) – for their fit to dissolved quantities. Before comparing the models, a subset of their biogeochemical parameters has been optimised against annual-mean nutrients and oxygen. Both model types fit the observations almost equally well. The simple model contains only two nutrients: oxygen and dissolved organic phosphorus (DOP). Its misfit and large-scale tracer distributions are sensitive to the parameterisation of DOP production and decay. The spatio-temporal decoupling of nitrogen and oxygen, and processes involved in their uptake and release, renders oxygen and nitrate valuable tracers for model calibration. In addition, the non-conservative nature of these tracers (with respect to their upper boundary condition) introduces the global bias (fixed nitrogen and oxygen inventory) as a useful additional constraint on model parameters. Dissolved organic phosphorus at the surface behaves antagonistically to phosphate, and suggests that observations of this tracer – although difficult to measure – may be an important asset for model calibration.


2003 ◽  
Vol 33 (3) ◽  
pp. 455-465 ◽  
Author(s):  
Jari Hynynen ◽  
Risto Ojansuu

The study addresses the effect of sample plot size on the bias related to measured stand density. We analyzed the effect of plot size on model coefficients and model performance in the simulation. Alternative growth models were developed for Norway spruce (Picea abies (L.) Karst.) on the basis of data obtained from permanent inventory sample plots of varying size. The competition measures were estimated from small plots with an average radius of 6 m, large plots with an average radius of 10 m, a cluster of three small plots within a stand, and a cluster of three large plots within a stand. The response of the models to competition varied depending on the plot size. Increasing the plot size increased the sensitivity of the models to the variation of overall stand density and the competitive status of a tree. The development of repeatedly measured, unthinned and thinned Norway spruce sample plots was simulated with the models, and the predictions were compared with the observed development. In the unthinned stand, the model with competition measures based on small plots resulted in a higher and more biased prediction of growth and mortality than the models based on larger plots. In the thinned stand, the differences between the models were negligible.

