Using Artificial Neural Networks to Improve CFS Week 3-4 Precipitation and 2-Meter Air Temperature Forecasts

Author(s):  
Yun Fan ◽  
Vladimir Krasnopolsky ◽  
Huug van den Dool ◽  
Chung-Yu Wu ◽  
Jon Gottschalck

Abstract Forecast skill from dynamical forecast models decreases quickly with projection time due to various errors. Therefore, post-processing methods, from simple bias correction to more complicated multiple linear regression-based Model Output Statistics, are used to improve raw model forecasts. Usually, these methods show clear improvement over the raw model forecasts, especially for short-range weather forecasts. However, linear approaches have limitations because the relationship between predictands and predictors may be nonlinear. This is even truer for extended-range forecasts, such as Week 3-4 forecasts. In this study, neural network techniques are used to model the relationships between a set of predictors and predictands, and ultimately to improve the Week 3-4 precipitation and 2-meter temperature forecasts made by the NOAA NCEP Climate Forecast System. Benefiting from recent advances in machine learning, more flexible and capable algorithms and the availability of big datasets enable us not only to explore nonlinear features or relationships within a given large dataset, but also to extract more sophisticated pattern relationships and co-variabilities hidden within the multi-dimensional predictors and predictands. These more sophisticated relationships and higher-level statistical information are then used to correct the model's Week 3-4 precipitation and 2-meter temperature forecasts. The results show that, to some extent, neural network techniques can significantly improve Week 3-4 forecast accuracy and are considerably more efficient than traditional multiple linear regression methods.
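As a minimal sketch of the nonlinear post-processing idea described above, the Python snippet below fits a small multilayer perceptron that maps raw forecast predictors to a verifying target and compares it against a multiple linear regression baseline. The synthetic data, predictor count and network size are illustrative assumptions, not the configuration used in the study.

# Hedged sketch: neural-network post-processing versus a linear MOS-style baseline.
# Synthetic data stand in for raw CFS predictors and verifying observations.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
n_samples, n_predictors = 2000, 8                 # e.g. raw T2m/precip forecasts plus indices
X = rng.normal(size=(n_samples, n_predictors))
# Assume the verifying observation depends nonlinearly on the raw predictors.
y = X[:, 0] + 0.5 * np.tanh(2.0 * X[:, 1]) * X[:, 2] + 0.1 * rng.normal(size=n_samples)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

mlr = LinearRegression().fit(X_train, y_train)    # traditional linear correction
ann = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000,
                   random_state=0).fit(X_train, y_train)  # nonlinear correction

for name, model in [("MLR", mlr), ("ANN", ann)]:
    rmse = mean_squared_error(y_test, model.predict(X_test)) ** 0.5
    print(f"{name} RMSE: {rmse:.3f}")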

2019 ◽  
Vol 8 (9) ◽  
pp. 382 ◽  
Author(s):  
Marcos Ruiz-Álvarez ◽  
Francisco Alonso-Sarria ◽  
Francisco Gomariz-Castillo

Several methods have been tried to estimate air temperature using satellite imagery. In this paper, the results of two machine learning algorithms, Support Vector Machines and Random Forest, are compared with Multiple Linear Regression and Ordinary kriging. Several geographic, remote sensing and time variables are used as predictors. The validation is carried out using two different approaches, a leave-one-out cross-validation in the spatial domain and a spatio-temporal k-block cross-validation, and four different statistics on a daily basis, allowing the use of ANOVA to compare the results. The main conclusion is that Random Forest produces the best results (R² = 0.888 ± 0.026, root mean square error = 3.01 ± 0.325 using k-block cross-validation). The regression methods (Support Vector Machines, Random Forest and Multiple Linear Regression) are calibrated with MODIS data and several predictors easily calculated from a Digital Elevation Model. The most important variables in the Random Forest model were satellite temperature, potential irradiation and cdayt, a cosine transformation of the Julian day.
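A minimal sketch of this setup is shown below: a Random Forest is trained on a satellite-temperature predictor, an elevation predictor and a cosine-transformed Julian day, and evaluated with a grouped (block) cross-validation. The synthetic data, the exact form of the cosine transform and the block assignment are assumptions made for illustration.

# Hedged sketch: Random Forest for daily air temperature with a cosine-transformed
# Julian day ("cdayt") and a grouped k-block cross-validation. Data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GroupKFold, cross_val_score

rng = np.random.default_rng(1)
n = 3000
julian_day = rng.integers(1, 366, size=n)
cdayt = np.cos(2.0 * np.pi * julian_day / 365.25)          # assumed seasonal transform
sat_temp = 15 + 10 * cdayt + rng.normal(scale=3, size=n)   # stand-in for MODIS temperature
elevation = rng.uniform(0, 2000, size=n)                   # assumed DEM-derived predictor
X = np.column_stack([sat_temp, cdayt, elevation])
y = 0.8 * sat_temp - 0.005 * elevation + rng.normal(scale=1.5, size=n)

# Blocks (e.g. spatial or temporal groups) keep related samples out of the training fold.
blocks = rng.integers(0, 10, size=n)

rf = RandomForestRegressor(n_estimators=200, random_state=1)
scores = cross_val_score(rf, X, y, cv=GroupKFold(n_splits=5),
                         groups=blocks, scoring="r2")
print("R2 per block fold:", np.round(scores, 3))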


2020 ◽  
Vol 17 (9) ◽  
pp. 4280-4286
Author(s):  
G. L. Anoop ◽  
C. Nandini

Agriculture and allied production contribute to the Indian economy and the food security of India. A crop yield prediction model will help farmers, agriculture departments and organizations take better decisions. In this paper we propose multi-level machine learning algorithms to predict rice crop yield. Data were collected from the Indian Government website for four districts of Karnataka, i.e., Mysore, Mandya, Raichur and Koppal; these data are publicly available. In our proposed method, we first perform data pre-processing on the collected data using z-score normalization and standardization of residuals; then multilevel decision tree and multilevel multiple linear regression methods are presented to predict the rice crop yield, and the performance of both is evaluated. The experimental results show that multiple linear regression is more accurate than the decision tree technique. This prediction will guide farmers to make better decisions, improving yield and livelihood under a particular temperature or climatic scenario.
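The snippet below is a minimal sketch of that comparison: z-score standardization followed by a decision tree and a multiple linear regression fit to the same data. The synthetic predictors (rainfall, temperature, cultivated area), the tree depth and the train/test split are illustrative assumptions, not the districts' actual records.

# Hedged sketch: z-score pre-processing, then decision tree vs. multiple linear regression.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(2)
n = 500
rainfall = rng.normal(900, 150, size=n)        # mm, assumed predictor
temperature = rng.normal(27, 2, size=n)        # deg C, assumed predictor
area = rng.uniform(10, 100, size=n)            # ha, assumed predictor
X = np.column_stack([rainfall, temperature, area])
y = (2.5 + 0.002 * rainfall - 0.05 * (temperature - 27) ** 2
     + 0.01 * area + rng.normal(scale=0.3, size=n))   # t/ha, synthetic yield

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=2)
scaler = StandardScaler().fit(X_train)         # z-score scaling, as in the pre-processing step
X_train_z, X_test_z = scaler.transform(X_train), scaler.transform(X_test)

for name, model in [("Decision tree", DecisionTreeRegressor(max_depth=5, random_state=2)),
                    ("Multiple linear regression", LinearRegression())]:
    model.fit(X_train_z, y_train)
    print(name, "R2:", round(r2_score(y_test, model.predict(X_test_z)), 3))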


Author(s):  
Marcos Ruiz-Álvarez ◽  
Francisco Alonso-Sarría ◽  
Francisco Gomariz-Castillo

Several methods have been tried to estimate air temperature using satellite imagery. In this paper, the results of two machine learning algorithms, Support Vector Machine and Random Forest, are compared with Multivariate Linear Regression, TVX and Ordinary kriging. Several geographic, remote sensing and time variables are used as predictors. The validation is carried out using four different statistics on a daily basis, allowing the use of ANOVA to compare the results. The main conclusion is that Random Forest with residual kriging produces the best results (R² = 0.612 ± 0.019, NSE = 0.578 ± 0.025, RMSE = 1.068 ± 0.027, PBIAS = -0.172 ± 0.046), whereas TVX produces the least accurate results. The environmental conditions in the study area are not really suited to TVX; moreover, this method only takes satellite data into account. On the other hand, the regression methods (Support Vector Machine, Random Forest and Multivariate Linear Regression) use several parameters that are easily calculated from a Digital Elevation Model, adding very little difficulty to the use of satellite data alone. The most important variables in the Random Forest model were satellite temperature, potential irradiation and cdayt, a cosine transformation of the Julian day.
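The sketch below illustrates the Random Forest with residual kriging idea: a forest is fit to the predictors, its out-of-bag residuals are interpolated in space, and the interpolated residual is added back to the forest prediction. A Gaussian process on the station coordinates stands in for ordinary kriging here (the two are closely related), and the coordinates, predictors and kernel settings are all assumptions made for illustration.

# Hedged sketch: Random Forest regression plus spatial interpolation of its residuals.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(3)
n = 400
coords = rng.uniform(0, 100, size=(n, 2))                   # station x/y in km, assumed
elevation = rng.uniform(0, 1500, size=n)
sat_temp = 20 - 0.006 * elevation + rng.normal(scale=2, size=n)
X = np.column_stack([sat_temp, elevation])
# True air temperature includes a smooth spatial component the forest cannot see.
spatial = 2.0 * np.sin(coords[:, 0] / 20.0)
y = 0.9 * sat_temp + spatial + rng.normal(scale=0.5, size=n)

train, test = np.arange(300), np.arange(300, n)

rf = RandomForestRegressor(n_estimators=300, oob_score=True,
                           random_state=3).fit(X[train], y[train])
resid = y[train] - rf.oob_prediction_      # out-of-bag residuals avoid in-sample overfitting

# Interpolate the residuals in space and add them back to the forest prediction.
gp = GaussianProcessRegressor(kernel=1.0 * RBF(length_scale=20.0) + WhiteKernel(1e-2),
                              normalize_y=True).fit(coords[train], resid)
y_hat = rf.predict(X[test]) + gp.predict(coords[test])
rmse = float(np.sqrt(np.mean((y[test] - y_hat) ** 2)))
print(f"RF + residual interpolation RMSE: {rmse:.3f}")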


Author(s):  
James A. Tallman ◽  
Michal Osusky ◽  
Nick Magina ◽  
Evan Sewall

Abstract This paper provides an assessment of three different machine learning techniques for accurately reproducing a distributed temperature prediction of a high-pressure turbine airfoil. A three-dimensional Finite Element Analysis (FEA) thermal model of a cooled turbine airfoil was solved repeatedly (200 instances) for various operating point settings of the corresponding gas turbine engine. The response surface created by the repeated solutions was fed into three machine learning algorithms, and surrogate model representations of the FEA model's response were generated. The machine learning algorithms investigated were a Gaussian Process, a Boosted Decision Tree, and an Artificial Neural Network. Additionally, a simple Linear Regression surrogate model was created for comparative purposes. The Artificial Neural Network model proved to be the most successful at reproducing the FEA model over the range of operating points. The mean and standard deviation of the differences between the FEA and the Neural Network models were 15% and 14% of a desired accuracy threshold, respectively. The Digital Thread for Design (DT4D) was used to expedite all model execution and machine learning training. A description of DT4D is also provided.
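Below is a minimal sketch of this surrogate-modelling comparison: a synthetic, smooth response sampled at 200 operating points is fit with a Gaussian process, boosted trees, a small neural network and a linear baseline. The four inputs, the synthetic response and the model hyperparameters are illustrative assumptions; the sketch is not the paper's FEA model or DT4D workflow.

# Hedged sketch: surrogate models fit to a sampled response surface (synthetic data).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(4)
n_runs, n_inputs = 200, 4                          # 200 solver runs, as in the abstract
X = rng.uniform(size=(n_runs, n_inputs))           # normalized operating-point settings
# Synthetic, non-dimensional "metal temperature" response with smooth nonlinear behaviour.
y = 0.5 + 0.3 * X[:, 0] + 0.2 * np.sin(3.0 * X[:, 1]) + 0.1 * X[:, 2] * X[:, 3]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=4)

models = {
    "Gaussian process": GaussianProcessRegressor(kernel=1.0 * RBF(), normalize_y=True),
    "Boosted trees": GradientBoostingRegressor(random_state=4),
    "Neural network": MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=4),
    "Linear baseline": LinearRegression(),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name}: MAE = {mean_absolute_error(y_te, model.predict(X_te)):.4f}")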


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Maulin Raval ◽  
Pavithra Sivashanmugam ◽  
Vu Pham ◽  
Hardik Gohel ◽  
Ajeet Kaushik ◽  
...  

Abstract Australia faces droughts whose impact may be mitigated by rainfall prediction. Although an incredibly challenging task, accurate prediction of rainfall plays an enormous role in policy making, decision making and organizing sustainable water resource systems. The ability to accurately predict rainfall patterns empowers civilizations. Though short-term rainfall predictions are provided by meteorological systems, long-term prediction of rainfall is challenging and involves many factors that lead to uncertainty. Historically, various researchers have experimented with several machine learning techniques for rainfall prediction under given weather conditions. However, in places like Australia, where the climate is variable, finding the best method to model the complex rainfall process is a major challenge. The aims of this paper are to: (a) predict rainfall using machine learning algorithms and compare the performance of different models; (b) develop an optimized neural network and a prediction model based on it; and (c) carry out a comparative study of new and existing prediction techniques using Australian rainfall data. In this paper, rainfall data collected over a span of ten years, from 2007 to 2017, with input from 26 geographically diverse locations, have been used to develop the predictive models. The data were divided into training and testing sets for validation purposes. The results show that both traditional and neural network-based machine learning models can predict rainfall with reasonable precision.
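As a minimal sketch of that comparison, the snippet below trains a traditional classifier and a small neural network to predict next-day rain from a handful of weather features, using a train/test split. The synthetic features, the rain-generating rule and the model sizes are assumptions for illustration; they are not the Australian dataset used in the paper.

# Hedged sketch: traditional model vs. neural network for a binary "rain tomorrow" target.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score

rng = np.random.default_rng(5)
n = 5000
humidity_3pm = rng.uniform(10, 100, size=n)        # assumed feature names and ranges
pressure_9am = rng.normal(1015, 7, size=n)
rainfall_today = rng.exponential(2.0, size=n)
X = np.column_stack([humidity_3pm, pressure_9am, rainfall_today])
# Synthetic rule: rain is more likely with high humidity, low pressure, rain today.
logit = 0.08 * (humidity_3pm - 60) - 0.1 * (pressure_9am - 1015) + 0.2 * rainfall_today
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=5)

models = {
    "Logistic regression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "Neural network": make_pipeline(StandardScaler(),
                                    MLPClassifier(hidden_layer_sizes=(32, 16),
                                                  max_iter=2000, random_state=5)),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    print(f"{name}: accuracy={accuracy_score(y_te, pred):.3f}, F1={f1_score(y_te, pred):.3f}")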


2021 ◽  
Author(s):  
Hrvoje Kalinić ◽  
Zvonimir Bilokapić ◽  
Frano Matić

In certain measurement endeavours the spatial resolution of the data is restricted, while in others the data have poor temporal resolution. A typical example comes from geoscience, where measurement stations are fixed and sparsely scattered in space, which results in poor spatial resolution of the acquired data. Thus, we ask whether it is possible to use a portion of the data as a proxy to estimate the rest of the data using different machine learning techniques. In this study, four supervised machine learning methods are trained on wind data from the Adriatic Sea and used to reconstruct the missing data. The vector wind components at 10 m height are taken from the ERA5 reanalysis model for the period 1981 to 2017, sampled every 6 hours. Data from the northern part of the Adriatic Sea were used to estimate the wind in the southern part of the Adriatic. The machine learning models utilized for this task were linear regression, K-nearest neighbours, decision trees and a neural network. The difference between the true and estimated values of the wind data in the southern Adriatic was used as a measure of reconstruction quality. The results show that all four models reconstruct the data a few hundred kilometres away with an average amplitude error below 1 m/s. Linear regression, K-nearest neighbours, decision trees and the neural network show average amplitude reconstruction errors of 0.52, 0.91, 0.76 and 0.73 m/s, with standard deviations of 1.00, 1.42, 1.23 and 1.17 m/s, respectively. This work has been supported by the Croatian Science Foundation under the project UIP-2019-04-1737.
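A minimal sketch of this reconstruction task is given below: the four model families are trained to map wind components at a set of "observed" points to the components at an "unobserved" point, and the amplitude of the prediction error is reported. The synthetic wind data, the number of points and the model hyperparameters are illustrative assumptions rather than the ERA5 setup.

# Hedged sketch: reconstructing wind components at one location from another region.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
n = 8000                                              # 6-hourly samples, assumed count
# "Northern" u/v wind components at three points (predictors)...
X = rng.normal(scale=4.0, size=(n, 6))
# ...and a correlated "southern" u/v pair (targets) plus local noise.
y = X @ rng.normal(scale=0.3, size=(6, 2)) + rng.normal(scale=1.0, size=(n, 2))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=6)

models = {
    "Linear regression": LinearRegression(),
    "K-nearest neighbours": KNeighborsRegressor(n_neighbors=10),
    "Decision tree": DecisionTreeRegressor(max_depth=8, random_state=6),
    "Neural network": MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000, random_state=6),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    err = np.linalg.norm(model.predict(X_te) - y_te, axis=1)   # amplitude error, m/s
    print(f"{name}: mean amplitude error = {err.mean():.2f} m/s")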


Author(s):  
Neha Sharma ◽  
Harsh Vardhan Bhandari ◽  
Narendra Singh Yadav ◽  
Harsh Vardhan Jonathan Shroff

Nowadays it is imperative to maintain a high level of security to ensure secure communication of information between various institutions and organizations. With the growing use of the internet over the years, the number of attacks over the internet has escalated. A powerful Intrusion Detection System (IDS) is required to ensure the security of a network. The aim of an IDS is to monitor the active processes in a network and to detect any deviation from the normal behavior of the system. When it comes to machine learning, optimization is the process of obtaining the maximum accuracy from a model. Optimization is vital for IDSs in order to predict a wide variety of attacks with utmost accuracy. The effectiveness of an IDS depends on its ability to correctly predict and classify any anomaly faced by a computer system. During the last two decades, KDD_CUP_99 has been the most widely used data set to evaluate the performance of such systems. In this study, we apply different machine learning techniques to this data set and determine which technique yields the best results.
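A minimal sketch of such a comparison follows, using the 10% KDD Cup '99 subset available through scikit-learn's fetch_kddcup99 loader. The binary normal-vs-attack target, the ordinal encoding of the symbolic features, the subsampling and the choice of classifiers are all assumptions made for illustration rather than the study's exact pipeline.

# Hedged sketch: comparing classifiers on the KDD Cup '99 data (10% subset).
import numpy as np
from sklearn.datasets import fetch_kddcup99
from sklearn.preprocessing import OrdinalEncoder
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score

# Downloads the data on first use; columns 1-3 (protocol_type, service, flag) are symbolic.
bunch = fetch_kddcup99(percent10=True)
X_raw, labels = bunch.data, bunch.target

# Binary target: 0 = normal traffic, 1 = any attack.
y = np.array([0 if "normal" in str(lbl) else 1 for lbl in labels])

cat_idx = [1, 2, 3]
num_idx = [i for i in range(X_raw.shape[1]) if i not in cat_idx]
X = np.hstack([
    X_raw[:, num_idx].astype(float),
    OrdinalEncoder().fit_transform(X_raw[:, cat_idx].astype(str)),
])

# Subsample so the sketch runs quickly.
rng = np.random.default_rng(7)
idx = rng.choice(len(y), size=50_000, replace=False)
X, y = X[idx], y[idx]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=7, stratify=y)
for name, model in [("Decision tree", DecisionTreeClassifier(random_state=7)),
                    ("Random forest", RandomForestClassifier(n_estimators=100, random_state=7))]:
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    print(f"{name}: accuracy={accuracy_score(y_te, pred):.4f}, F1={f1_score(y_te, pred):.4f}")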


2012 ◽  
pp. 1652-1686
Author(s):  
Réal Carbonneau ◽  
Rustam Vahidov ◽  
Kevin Laframboise

Managing supply chains in today's complex, dynamic, and uncertain environment is one of the key challenges affecting the success of businesses. One of the crucial determinants of effective supply chain management is the ability to recognize customer demand patterns and react accordingly to changes in the face of intense competition. Thus, the ability of the participants in a supply chain to adequately predict demand is vital to the survival of businesses. Demand prediction is aggravated by the fact that the communication patterns that emerge between participants in a supply chain tend to distort the original consumer demand and create high levels of noise. Distortion and noise negatively impact the forecast quality of the participants. This work investigates the applicability of machine learning (ML) techniques and compares their performance with more traditional methods in order to improve demand forecast accuracy in supply chains. To this end, we used two data sets from particular companies (a chocolate manufacturer and a toner cartridge manufacturer), as well as data from the Statistics Canada manufacturing survey. A representative set of traditional and ML-based forecasting techniques was applied to the demand data and the accuracy of the methods was compared. As a group, machine learning techniques outperformed traditional techniques in terms of overall average, but not in terms of overall ranking. We also found that a support vector machine (SVM) trained on multiple demand series produced the most accurate forecasts.
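The snippet below sketches the idea of training a single support vector machine across multiple demand series: lagged-demand windows from all series are pooled into one training set and compared against a naive last-value baseline. The synthetic series, window length and SVR hyperparameters are illustrative assumptions, not the data or settings used in the study.

# Hedged sketch: one SVR trained on pooled lag-feature windows from several demand series.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(8)
n_series, n_months, n_lags = 20, 96, 6

def make_series():
    # Seasonal pattern plus a random-walk trend, standing in for one product's demand.
    t = np.arange(n_months)
    return 100 + 20 * np.sin(2 * np.pi * t / 12) + np.cumsum(rng.normal(scale=2, size=n_months))

series = [make_series() for _ in range(n_series)]

# Pool sliding windows from all series into a single training set.
X, y = [], []
for s in series:
    for i in range(n_lags, n_months):
        X.append(s[i - n_lags:i])
        y.append(s[i])
X, y = np.array(X), np.array(y)

split = int(0.8 * len(y))                              # later windows held out for testing
svm = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.5)).fit(X[:split], y[:split])

naive = X[split:, -1]                                  # last observed value as a baseline
print("Naive MAE:", round(mean_absolute_error(y[split:], naive), 2))
print("SVR MAE:  ", round(mean_absolute_error(y[split:], svm.predict(X[split:])), 2))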


2021 ◽  
Vol 8 ◽  
Author(s):  
Lei Shi ◽  
Cosmin Copot ◽  
Steve Vanlanduit

Gaze gestures are extensively used in interactions with agents, computers and robots. Both remote eye-tracking devices and head-mounted devices (HMDs) have the advantage of keeping the hands free during the interaction. Previous studies have demonstrated the success of applying machine learning techniques to gaze gesture recognition. More recently, graph neural networks (GNNs) have shown great potential in several research areas such as image classification, action recognition, and text classification. However, GNNs are rarely applied in eye-tracking research. In this work, we propose a graph convolutional network (GCN)-based model for gaze gesture recognition. We train and evaluate the GCN model on the HideMyGaze! dataset. The results show that the accuracy, precision, and recall of the GCN model are 97.62%, 97.18%, and 98.46%, respectively, which are higher than those of the compared conventional machine learning algorithms, the artificial neural network (ANN) and the convolutional neural network (CNN).
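As a minimal sketch of a GCN applied to gaze data, the snippet below builds each gesture as a small graph of fixation points connected in scan order and classifies it with two hand-written graph convolution layers followed by mean pooling. The graph construction, feature choice and layer sizes are assumptions for illustration; this is not the architecture or dataset pipeline from the paper.

# Hedged sketch: a minimal graph convolutional network for gesture-level classification.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GCNLayer(nn.Module):
    """One GCN layer: H' = relu(D^-1/2 (A + I) D^-1/2 H W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, a, h):
        a_hat = a + torch.eye(a.size(0))                 # add self-loops
        d_inv_sqrt = a_hat.sum(dim=1).pow(-0.5)
        a_norm = d_inv_sqrt[:, None] * a_hat * d_inv_sqrt[None, :]
        return F.relu(self.lin(a_norm @ h))

class GazeGCN(nn.Module):
    def __init__(self, in_dim=2, hidden=32, n_classes=5):
        super().__init__()
        self.gc1, self.gc2 = GCNLayer(in_dim, hidden), GCNLayer(hidden, hidden)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, a, h):
        h = self.gc2(a, self.gc1(a, h))
        return self.head(h.mean(dim=0))                  # mean-pool nodes, then classify

# Toy example: 10 fixation points with (x, y) features, chained in scan order.
n_nodes = 10
feats = torch.rand(n_nodes, 2)
adj = torch.zeros(n_nodes, n_nodes)
idx = torch.arange(n_nodes - 1)
adj[idx, idx + 1] = 1.0
adj[idx + 1, idx] = 1.0

model = GazeGCN()
logits = model(adj, feats)
print("class scores:", logits.detach().numpy().round(3))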

