Using Evolutionary Programs to Maximize Minimum Temperature Forecast Skill

2015 ◽  
Vol 143 (5) ◽  
pp. 1506-1516 ◽  
Author(s):  
Paul J. Roebber

Abstract Evolutionary program ensembles are developed and tested for minimum temperature forecasts at Chicago, Illinois, at forecast ranges of 36, 60, 84, 108, 132, and 156 h. For all forecast ranges examined, the evolutionary program ensemble outperforms the 21-member GFS model output statistics (MOS) ensemble when considering root-mean-square error and Brier skill score. The relative advantage in root-mean-square error widens with forecast range, from 0.18°F at 36 h to 1.53°F at 156 h, while the probabilistic skill remains positive throughout. At all forecast ranges, probabilistic forecasts of abnormal conditions are particularly skillful compared to the raw GFS guidance. The evolutionary program reliance on particular forecast inputs is distinct from that obtained from considering multiple linear regression models, with less reliance on the GFS MOS temperature and more on alternative data such as upstream temperatures at the time of forecast issuance, time of year, and forecasts of wind speed, precipitation, and cloud cover. This weighting trends away from current observations and toward seasonal (climatological) measures as forecast range increases. Using two different forms of ensemble member subselection, a Bayesian model combination calibration is tested on both ensembles. This calibration had limited effect on evolutionary program ensemble skill but was able to improve MOS ensemble performance, reducing but not eliminating the skill gap between them. The largest skill differentials occurred at the longest forecast ranges, beginning at 132 h. A hybrid, calibrated ensemble was able to provide some further increase in skill.
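
The two verification measures named above (ensemble root-mean-square error and Brier skill score) can be computed as in the minimal sketch below. This is not the paper's code; the array names, the 20°F "abnormal cold" threshold, and the synthetic data are illustrative assumptions.

```python
# Minimal sketch: ensemble-mean RMSE and Brier skill score for probabilistic
# minimum-temperature forecasts. All data here are synthetic placeholders.
import numpy as np

def ensemble_rmse(ens_forecasts, obs):
    """RMSE of the ensemble-mean forecast; ens_forecasts has shape (n_cases, n_members)."""
    ens_mean = ens_forecasts.mean(axis=1)
    return np.sqrt(np.mean((ens_mean - obs) ** 2))

def brier_skill_score(prob_forecasts, event_occurred, climatology):
    """BSS relative to a constant climatological event probability."""
    bs = np.mean((prob_forecasts - event_occurred) ** 2)
    bs_ref = np.mean((climatology - event_occurred) ** 2)
    return 1.0 - bs / bs_ref

rng = np.random.default_rng(0)
obs = rng.normal(30.0, 8.0, size=200)                      # observed minima (deg F)
ens = obs[:, None] + rng.normal(0.0, 3.0, size=(200, 21))  # toy 21-member ensemble
event = (obs < 20.0).astype(float)                         # binary "abnormal cold" event
prob = (ens < 20.0).mean(axis=1)                           # ensemble event probability
print(ensemble_rmse(ens, obs), brier_skill_score(prob, event, event.mean()))
```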

2021 ◽  
Vol 13 (9) ◽  
pp. 1630
Author(s):  
Yaohui Zhu ◽  
Guijun Yang ◽  
Hao Yang ◽  
Fa Zhao ◽  
Shaoyu Han ◽  
...  

With the increase in the frequency of extreme weather events in recent years, apple growing areas in the Loess Plateau frequently encounter frost during flowering. Accurately assessing frost loss in orchards during the flowering period is of great significance for optimizing disaster prevention measures, apple market price regulation, agricultural insurance, and government subsidy programs. Previous research on orchard frost disasters has mainly focused on early risk warning. Therefore, to effectively quantify orchard frost loss, this paper proposes a frost loss assessment model constructed from meteorological and remote sensing information and applies it to the regional-scale assessment of orchard fruit loss after frost. As an example, this article examines a frost event that occurred during the apple flowering period in Luochuan County, northwestern China, on 17 April 2020. A multivariable linear regression (MLR) model was constructed based on the orchard planting years, the number of flowering days, and the chill accumulation before frost, as well as the minimum temperature and daily temperature difference on the day of frost. The model's accuracy was then verified using the leave-one-out cross-validation (LOOCV) method; the coefficient of determination (R²), root mean square error (RMSE), and normalized root mean square error (NRMSE) were 0.69, 18.76%, and 18.76%, respectively. Additionally, the extended Fourier amplitude sensitivity test (EFAST) method was used for sensitivity analysis of the model parameters. The results show that the simulated reduction ratio of apple orchard fruit number is highly sensitive to the minimum temperature on the day of frost and to the chill accumulation and planting years before the frost, with sensitivity values of ≥0.74, ≥0.25, and ≥0.15, respectively. This research can not only assist governments in optimizing traditional orchard frost prevention measures and market price regulation but can also provide a reference for agricultural insurance companies formulating compensation plans after frost.
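
A LOOCV evaluation of a multivariable linear regression, reporting R², RMSE, and NRMSE as above, can be sketched as follows. The predictor list mirrors the variables named in the abstract, but the data, coefficients, and the range-based NRMSE normalization are assumptions for illustration only.

```python
# Minimal sketch: leave-one-out cross-validation of an MLR model with R2,
# RMSE and NRMSE. Synthetic data; not the study's dataset or coefficients.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(1)
n = 80
X = np.column_stack([
    rng.uniform(5, 25, n),      # planting years
    rng.uniform(1, 15, n),      # flowering days before frost
    rng.uniform(200, 900, n),   # chill accumulation
    rng.uniform(-6, 2, n),      # minimum temperature on the frost day
    rng.uniform(5, 20, n),      # daily temperature difference
])
y = -2.0 * X[:, 3] + 0.02 * X[:, 2] + rng.normal(0, 5, n)  # toy fruit-loss ratio (%)

pred = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
resid = y - pred
rmse = np.sqrt(np.mean(resid ** 2))
nrmse = rmse / (y.max() - y.min()) * 100.0   # one common normalization choice
r2 = 1.0 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)
print(f"R2={r2:.2f}  RMSE={rmse:.2f}  NRMSE={nrmse:.2f}%")
```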


Author(s):  
Reza Norouzi ◽  
Parveen Sihag ◽  
Rasoul Daneshfaraz ◽  
John Abraham ◽  
Vadoud Hasannia

Abstract This study was designed to evaluate the ability of Artificial Intelligence (AI) methods, including ANN, ANFIS, GRNN, SVM, GP, LR, and MLR, to predict the relative energy dissipation (ΔE/Eu) for vertical drops equipped with a horizontal screen. For this study, 108 experiments were carried out to investigate energy dissipation. In the experiments, the discharge rate, drop height, and porosity of the screens were varied. The parameters yc/h, yd/yc, and p were the input variables, and ΔE/Eu was the output variable. The efficiencies of the models were compared using the following metrics: correlation coefficient (CC), mean absolute error (MAE), root-mean-square error (RMSE), normalized root mean square error (NRMSE), and Nash–Sutcliffe model efficiency (NSE). Results indicate that the performance of the ANFIS_gbellmf-based model, with a CC value of 0.9953, an RMSE value of 0.0069, an MAE value of 0.0042, an NRMSE value of 0.0092, and an NSE value of 0.9895, was superior to the other applied models. Also, a linear regression yielded CC = 0.9933, RMSE = 0.0083, and MAE = 0.0067; this linear model outperformed multiple linear regression models. Results from a sensitivity study suggest that yc/h is the most effective parameter for predicting ΔE/Eu.
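
The five comparison metrics listed above can all be derived from the predicted and observed ΔE/Eu series. The sketch below shows one common way to compute them; the arrays are placeholders and the mean-based NRMSE normalization is an assumption, since the abstract does not state which normalization was used.

```python
# Minimal sketch: CC, MAE, RMSE, NRMSE and NSE between observed and predicted
# relative energy dissipation. Values are illustrative only.
import numpy as np

def skill_metrics(obs, pred):
    err = pred - obs
    cc = np.corrcoef(obs, pred)[0, 1]
    mae = np.mean(np.abs(err))
    rmse = np.sqrt(np.mean(err ** 2))
    nrmse = rmse / np.mean(obs)                                   # assumed normalization
    nse = 1.0 - np.sum(err ** 2) / np.sum((obs - obs.mean()) ** 2)
    return {"CC": cc, "MAE": mae, "RMSE": rmse, "NRMSE": nrmse, "NSE": nse}

obs = np.array([0.62, 0.55, 0.71, 0.48, 0.66, 0.59])    # measured dE/Eu (toy)
pred = np.array([0.61, 0.57, 0.69, 0.50, 0.65, 0.60])   # model output (toy)
print(skill_metrics(obs, pred))
```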


2021 ◽  
Author(s):  
Giulio Nils Caroletti ◽  
Tommaso Caloiero ◽  
Magnus Joelsson ◽  
Roberto Coscarelli

Homogenization techniques and missing value reconstruction have grown in importance in climatology, given their relevance in establishing coherent data records over which climate signals can be correctly attributed, discarding apparent changes caused by instrument inhomogeneities, e.g., changes in instrumentation, location, or time of measurement.

However, it is generally not possible to assess homogenized results directly, as the true data values are not known. Thus, to validate homogenization techniques, artificially inhomogeneous datasets, also called benchmark datasets, are created from known homogeneous datasets. Results from their homogenization can be assessed and used to rank, evaluate and/or validate the techniques used.

Considering temperature data, the aims of this work are: i) to determine which metrics (bias, absolute error, factor of exceedance, root mean squared error, and Pearson's correlation coefficient) can be meaningfully used to identify the best-performing homogenization technique in a region; ii) to evaluate, through a Pearson correlation analysis, whether the performance of homogenization techniques depends on the physical features of a station (i.e., latitude, altitude and distance from the sea) or on the nature of the inhomogeneities (i.e., the number of break points and missing data).

With these aims, a southern Sweden temperature database with homogeneous maximum and minimum temperature data from 100 ground stations over the period 1950-2005 has been used. Starting from these data, inhomogeneous datasets were created by introducing up to 7 artificial breaks for each ground station and an average of 107 missing data points. Then, 3 homogenization techniques were applied: ACMANT (Adapted Caussinus-Mestre Algorithm for Networks of Temperature series) and two versions of HOMER (HOMogenization software in R), namely the standard, automated setup mode (Standard-HOMER) and a manual setup developed and performed at the Swedish Meteorological and Hydrological Institute (SMHI-HOMER).

Results showed that root mean square error, absolute bias and factor of exceedance were the most useful metrics for evaluating improvements in the homogenized datasets: for instance, the RMSE for both variables was reduced from an average of 0.71-0.89 K (corrupted dataset) to 0.50-0.60 K (Standard-HOMER), 0.51-0.61 K (SMHI-HOMER) and 0.46-0.53 K (ACMANT), respectively.

Globally, HOMER performed better regarding the factor of exceedance, while ACMANT outperformed it with regard to root mean square error and absolute error. Regardless of the technique used, homogenization quality was significantly anti-correlated with the number of breaks. Missing data did not seem to have an impact on HOMER, while they negatively affected ACMANT, because the latter does not fill in missing data in the same drastic way.

In general, the nature of the datasets played a more important role in yielding good homogenization results than the associated physical parameters: only for minimum temperature did distance from the sea and altitude show a weak but significant correlation with the factor of exceedance and the root mean square error.

This study has been performed within the INDECIS Project, which is part of ERA4CS, an ERA-NET initiated by JPI Climate and funded by FORMAS (SE), DLR (DE), BMWFW (AT), IFD (DK), MINECO (ES), and ANR (FR), with co-funding by the European Union (Grant 690462).
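
The benchmark-validation idea described above, comparing a homogenized series against the known homogeneous "truth" and against the corrupted series, can be illustrated as follows. This sketch covers bias, absolute error, RMSE, and Pearson correlation only (the factor of exceedance is omitted), and the series, break size, and break position are synthetic assumptions.

```python
# Minimal sketch: validating a homogenization result on a benchmark series.
import numpy as np

def validation_metrics(true_series, test_series):
    diff = test_series - true_series
    return {
        "bias": np.mean(diff),
        "mean_abs_error": np.mean(np.abs(diff)),
        "rmse": np.sqrt(np.mean(diff ** 2)),
        "pearson_r": np.corrcoef(true_series, test_series)[0, 1],
    }

rng = np.random.default_rng(2)
months = 672                                                        # 1950-2005, monthly
true_t = 10 + 8 * np.sin(np.linspace(0, 56 * np.pi, months)) + rng.normal(0, 1, months)
corrupted = true_t.copy()
corrupted[300:] += 1.2                                              # one artificial break
homogenized = corrupted - 1.2 * (np.arange(months) >= 300) + rng.normal(0, 0.3, months)

print("corrupted:  ", validation_metrics(true_t, corrupted))
print("homogenized:", validation_metrics(true_t, homogenized))
```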


2015 ◽  
Vol 143 (2) ◽  
pp. 471-490 ◽  
Author(s):  
Paul J. Roebber

Abstract An ensemble forecast method using evolutionary programming, including various forms of genetic exchange, disease, mutation, and the training of solutions within ecological niches, is presented. A 2344-member ensemble generated in this way is tested for 60-h minimum temperature forecasts for Chicago, Illinois. The ensemble forecasts are superior in both ensemble average root-mean-square error and Brier skill score to those obtained from a 21-member operational ensemble model output statistics (MOS) forecast. While both ensembles are underdispersive, spread calibration produces greater gains in probabilistic skill for the evolutionary program ensemble than for the MOS ensemble. When a Bayesian model combination calibration is used, the skill advantage for the evolutionary program ensemble relative to the MOS ensemble increases for root-mean-square error, but decreases for Brier skill score. Further improvement in root-mean-square error is obtained when the raw evolutionary program and MOS forecasts are pooled, and a new Bayesian model combination ensemble is produced. Future extensions to the method are discussed, including those capable of producing more complex forms, those involving 1000-fold increases in training populations, and adaptive methods.
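
As a rough illustration of the evolutionary-programming idea, the toy loop below evolves a population of linear forecast rules by mutation and selection on training RMSE. It is a heavily simplified sketch, not the paper's algorithm: it omits genetic exchange, disease, and ecological niches, and the data and population sizes are arbitrary.

```python
# Toy evolutionary-programming loop: mutate linear forecast rules, keep those
# with the lowest training RMSE, and average a small ensemble of survivors.
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 4))                       # predictors (e.g., MOS output, upstream obs)
y = X @ np.array([1.2, -0.5, 0.3, 0.0]) + rng.normal(0, 0.5, 500)   # synthetic target

def rmse(w):
    return np.sqrt(np.mean((X @ w - y) ** 2))

pop = rng.normal(size=(200, 4))                     # initial population of weight vectors
for generation in range(100):
    children = pop + rng.normal(0, 0.1, pop.shape)  # mutation only
    everyone = np.vstack([pop, children])
    fitness = np.array([rmse(w) for w in everyone])
    pop = everyone[np.argsort(fitness)[:200]]       # keep the fittest rules

ensemble = np.mean([X @ w for w in pop[:21]], axis=0)   # small ensemble of best members
print("best-member RMSE:", rmse(pop[0]))
print("ensemble-mean RMSE:", np.sqrt(np.mean((ensemble - y) ** 2)))
```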


Forests ◽  
2021 ◽  
Vol 12 (8) ◽  
pp. 1020
Author(s):  
Yanqi Dong ◽  
Guangpeng Fan ◽  
Zhiwu Zhou ◽  
Jincheng Liu ◽  
Yongguo Wang ◽  
...  

The quantitative structure model (QSM) contains the branch geometry and attributes of a tree. AdQSM is a new, accurate, and detailed tree QSM. In this paper, an automatic modeling method based on AdQSM is developed, and a low-cost technical scheme for tree structure modeling is provided, so that AdQSM can be freely used by more people. First, we used two digital cameras to collect two-dimensional (2D) photos of trees, generated three-dimensional (3D) point clouds of the plot, and segmented individual trees from the plot point cloud. Then the new QSM, AdQSM, was used to construct tree models from the point clouds of 44 trees. Finally, to verify the effectiveness of our method, the diameter at breast height (DBH), tree height, and trunk volume were derived from the reconstructed tree models. These parameters extracted from AdQSM were compared with reference values from a forest inventory. For DBH, the relative bias (rBias), root mean square error (RMSE), and coefficient of variation of the root mean square error (rRMSE) were 4.26%, 1.93 cm, and 6.60%, respectively. For tree height, the rBias, RMSE, and rRMSE were −10.86%, 1.67 m, and 12.34%, respectively. The coefficients of determination (R²) between the DBH and tree height estimated by AdQSM and the reference values were 0.94 and 0.86. We used the trunk volume calculated by an allometric equation as the reference value to test the accuracy of AdQSM. For the trunk volume estimated with AdQSM, the bias was 0.07066 m³, the rBias was 18.73%, the RMSE was 0.12369 m³, and the rRMSE was 32.78%. To better evaluate the accuracy of QSM trunk volume reconstruction, we compared AdQSM and TreeQSM on the same dataset. For the trunk volume estimated with TreeQSM, the bias was −0.05071 m³, the rBias was −13.44%, the RMSE was 0.13267 m³, and the rRMSE was 35.16%. At the 95% confidence level, the concordance correlation coefficient of the agreement between the AdQSM trunk volume estimates and the reference values (CCC = 0.77) was greater than that of TreeQSM (CCC = 0.60). The significance of this research is as follows: (1) an automatic modeling method based on AdQSM is developed, which expands the application scope of AdQSM; (2) low-cost photogrammetric point clouds are provided as input data for AdQSM; and (3) the potential of AdQSM to reconstruct forest terrestrial photogrammetric point clouds is explored.
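
The accuracy statistics quoted above (bias, rBias, RMSE, rRMSE, and Lin's concordance correlation coefficient) can be computed from paired reference and estimated values as in the sketch below; the trunk-volume numbers shown are placeholders, not the study's data.

```python
# Minimal sketch: bias, relative bias, RMSE, relative RMSE, and Lin's
# concordance correlation coefficient (CCC) for QSM-derived vs. reference values.
import numpy as np

def accuracy_stats(reference, estimate):
    diff = estimate - reference
    bias = diff.mean()
    rmse = np.sqrt(np.mean(diff ** 2))
    mean_ref = reference.mean()
    cov = np.cov(reference, estimate, bias=True)[0, 1]
    ccc = 2 * cov / (reference.var() + estimate.var()
                     + (reference.mean() - estimate.mean()) ** 2)
    return {"bias": bias, "rBias_%": 100 * bias / mean_ref,
            "RMSE": rmse, "rRMSE_%": 100 * rmse / mean_ref, "CCC": ccc}

ref_volume = np.array([0.31, 0.45, 0.52, 0.28, 0.61, 0.39])   # allometric reference (m3, toy)
qsm_volume = np.array([0.36, 0.50, 0.60, 0.33, 0.70, 0.44])   # QSM estimate (m3, toy)
print(accuracy_stats(ref_volume, qsm_volume))
```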


2013 ◽  
Vol 860-863 ◽  
pp. 2783-2786
Author(s):  
Yu Bing Dong ◽  
Hai Yan Wang ◽  
Ming Jing Li

Edge detection and thresholding segmentation algorithms are presented and tested with a variety of grayscale images from different fields. In order to analyze and evaluate the quality of image segmentation, the root mean square error is used: the smaller the error value, the better the segmentation result. The experimental results show that no single segmentation method is suitable for all images.
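
The evaluation idea is simply the RMSE between a candidate segmentation and a reference segmentation, as in the short sketch below; the tiny image and threshold values are invented for illustration.

```python
# Minimal sketch: RMSE between a thresholded image and a reference mask.
import numpy as np

def segmentation_rmse(reference, segmented):
    """Lower RMSE means the segmentation is closer to the reference."""
    return np.sqrt(np.mean((reference.astype(float) - segmented.astype(float)) ** 2))

gray = np.array([[ 10,  20, 200, 210],
                 [ 15,  25, 205, 220],
                 [ 12, 180, 190, 230],
                 [ 14, 185, 195, 240]])
reference = (gray > 100).astype(np.uint8) * 255    # assumed ideal binary mask
thresholded = (gray > 150).astype(np.uint8) * 255  # candidate segmentation
print(segmentation_rmse(reference, thresholded))
```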


2013 ◽  
Vol 807-809 ◽  
pp. 1967-1971
Author(s):  
Yan Bai ◽  
Xiao Yan Duan ◽  
Hai Yan Gong ◽  
Cai Xia Xie ◽  
Zhi Hong Chen ◽  
...  

In this paper, the contents of forsythoside A and ethanol extract were rapidly determined by near-infrared reflectance spectroscopy (NIRS). Eighty-five samples of Forsythiae Fructus harvested in Luoyang from July to September 2012 were divided into a calibration set (75 samples) and a validation set (10 samples). In combination with partial least squares (PLS) regression, quantitative calibration models for forsythoside A and ethanol extract were established. The correlation coefficients of cross-validation (R²) were 0.98247 and 0.97214 for forsythoside A and ethanol extract, respectively; the root-mean-square errors of calibration (RMSEC) were 0.184 and 0.570, and the root-mean-square errors of cross-validation (RMSECV) were 0.81736 and 0.36656. The validation set was used to evaluate the performance of the models; the root-mean-square errors of prediction (RMSEP) were 0.221 and 0.518. The results indicated that it is feasible to determine the contents of forsythoside A and ethanol extract in Forsythiae Fructus by near-infrared spectroscopy.
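
A PLS calibration workflow of this kind, with RMSEC on the calibration set, RMSECV from cross-validation, and RMSEP on a held-out validation set, can be sketched as below. The spectra are synthetic stand-ins for NIR data, and the number of latent variables and the 10-fold cross-validation are assumptions.

```python
# Minimal sketch: PLS calibration with RMSEC, RMSECV and RMSEP on synthetic spectra.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(4)
spectra = rng.normal(size=(85, 300))                       # 85 samples x 300 wavelengths (toy)
content = 2 * spectra[:, 40] + spectra[:, 120] + rng.normal(0, 0.1, 85)

cal_X, cal_y = spectra[:75], content[:75]                  # calibration set (75 samples)
val_X, val_y = spectra[75:], content[75:]                  # validation set (10 samples)

pls = PLSRegression(n_components=5).fit(cal_X, cal_y)
rmsec = np.sqrt(np.mean((pls.predict(cal_X).ravel() - cal_y) ** 2))
cv_pred = cross_val_predict(PLSRegression(n_components=5), cal_X, cal_y, cv=10).ravel()
rmsecv = np.sqrt(np.mean((cv_pred - cal_y) ** 2))
rmsep = np.sqrt(np.mean((pls.predict(val_X).ravel() - val_y) ** 2))
print(f"RMSEC={rmsec:.3f}  RMSECV={rmsecv:.3f}  RMSEP={rmsep:.3f}")
```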


Food Research ◽  
2021 ◽  
Vol 5 (2) ◽  
pp. 248-253
Author(s):  
A.B. Riyanta ◽  
S. Riyanto ◽  
E. Lukitaningsih ◽  
A. Rohman

Soybean oil (SBO), sunflower oil (SFO) and grapeseed oil (GPO) contain high levels of unsaturated fats that are good for health and are compositionally close to candlenut oil (CNO). CNO has a lower price, and it is easier to obtain oil from its seeds than from other oilseeds, so it is used as an adulterant for economic gain. Therefore, authentication by proper analysis is required to ensure the purity of these oils. This research aimed to show that FTIR spectroscopy combined with multivariate calibration is a potential method for screening the quaternary mixture of CNO, SBO, SFO and GPO. CNO quantification was performed using the multivariate calibrations of principal component regression (PCR) and partial least squares (PLS) regression to build prediction models from optimized FTIR spectral regions. The highest R² and the lowest root mean square error of calibration (RMSEC) and root mean square error of prediction (RMSEP) were used as the basis for selecting among calibration models built from several wavenumber regions of the FTIR spectra. The 4000-650 cm-1 region of the second-derivative FTIR-ATR spectra, combined with PLS, was used for the quantitative analysis of CNO in the quaternary mixture with SBO, SFO and GPO, giving a calibration R² of 0.9942, an RMSEC of 0.0239% and an RMSEP of 0.0495%. It can therefore be concluded that FTIR spectra combined with PLS, yielding the highest R² and the lowest RMSEC and RMSEP values, can accurately detect quaternary mixtures of CNO, SBO, SFO and GPO.
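
The model-selection idea above, preprocessing the spectra and comparing PCR and PLS calibrations by R², RMSEC and RMSEP, can be illustrated as follows. The Savitzky-Golay second-derivative settings, the number of components, and the synthetic spectra are all assumptions, not the study's actual processing parameters.

```python
# Minimal sketch: second-derivative preprocessing, then PCR vs. PLS calibration
# compared by R2, RMSEC and RMSEP on synthetic FTIR-like spectra.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(5)
spectra = rng.normal(size=(60, 400))          # toy absorbance spectra (4000-650 cm-1 region)
cno_frac = rng.uniform(0, 50, 60)             # % candlenut oil in the mixture (toy)
spectra[:, 150] += 0.05 * cno_frac            # an assumed CNO-sensitive band

d2 = savgol_filter(spectra, window_length=11, polyorder=3, deriv=2, axis=1)
train, test = slice(0, 45), slice(45, None)

models = {"PLS": PLSRegression(n_components=6),
          "PCR": make_pipeline(PCA(n_components=6), LinearRegression())}
for name, model in models.items():
    model.fit(d2[train], cno_frac[train])
    pred_cal = np.ravel(model.predict(d2[train]))
    pred_val = np.ravel(model.predict(d2[test]))
    rmsec = np.sqrt(np.mean((pred_cal - cno_frac[train]) ** 2))
    rmsep = np.sqrt(np.mean((pred_val - cno_frac[test]) ** 2))
    r2 = np.corrcoef(pred_cal, cno_frac[train])[0, 1] ** 2
    print(f"{name}: R2={r2:.3f}  RMSEC={rmsec:.3f}  RMSEP={rmsep:.3f}")
```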

