Rice Crop Yield Prediction Using Multi-Level Machine Learning Techniques

2020 ◽  
Vol 17 (9) ◽  
pp. 4280-4286
Author(s):  
G. L. Anoop ◽  
C. Nandini

Agriculture and allied production contribute to the Indian economy and to the food security of India. A crop yield prediction model will help farmers, agriculture departments and organizations to take better decisions. In this paper we propose multi-level machine learning algorithms to predict rice crop yield. Data were collected from an Indian Government website for four districts of Karnataka (Mysore, Mandya, Raichur and Koppal); these data are publicly available. In the proposed method we first pre-process the collected data using z-score normalization and standardization of residuals, then apply multi-level decision tree and multi-level multiple linear regression methods to predict rice crop yield, and evaluate the performance of both. The experimental results show that multiple linear regression is more accurate than the decision tree technique. This prediction will guide farmers to make better decisions for better yield and for their livelihood under a particular temperature or climatic scenario.
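A minimal sketch of the preprocessing-and-comparison step described above, in Python with scikit-learn. The feature names (rainfall, temperature, area) and the synthetic data are hypothetical stand-ins for the district-level records, not the authors' actual pipeline.

```python
# Hedged sketch: z-score standardization followed by a decision tree vs. multiple
# linear regression comparison. All feature names and data are hypothetical.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "rainfall": rng.normal(900, 150, n),     # seasonal rainfall, mm (hypothetical)
    "temperature": rng.normal(27, 2, n),     # mean seasonal temperature, °C (hypothetical)
    "area": rng.normal(50, 10, n),           # area sown, hectares (hypothetical)
})
df["yield"] = (2.5 + 0.002 * df["rainfall"] - 0.05 * df["temperature"]
               + 0.01 * df["area"] + rng.normal(0, 0.2, n))

X, y = df[["rainfall", "temperature", "area"]], df["yield"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# z-score standardization of the predictors (the pre-processing step).
scaler = StandardScaler().fit(X_train)
X_train_z, X_test_z = scaler.transform(X_train), scaler.transform(X_test)

for name, model in [("Multiple linear regression", LinearRegression()),
                    ("Decision tree", DecisionTreeRegressor(max_depth=5, random_state=42))]:
    model.fit(X_train_z, y_train)
    pred = model.predict(X_test_z)
    rmse = np.sqrt(mean_squared_error(y_test, pred))
    print(f"{name}: RMSE = {rmse:.3f}, R2 = {r2_score(y_test, pred):.3f}")
```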

2019 ◽  
Vol 8 (9) ◽  
pp. 382 ◽  
Author(s):  
Marcos Ruiz-Álvarez ◽  
Francisco Alonso-Sarria ◽  
Francisco Gomariz-Castillo

Several methods have been tried to estimate air temperature using satellite imagery. In this paper, the results of two machine learning algorithms, Support Vector Machines and Random Forest, are compared with Multiple Linear Regression and Ordinary Kriging. Several geographic, remote sensing and time variables are used as predictors. The validation is carried out using two different approaches, a leave-one-out cross-validation in the spatial domain and a spatio-temporal k-block cross-validation, and four different statistics on a daily basis, allowing the use of ANOVA to compare the results. The main conclusion is that Random Forest produces the best results (R² = 0.888 ± 0.026, root mean square error = 3.01 ± 0.325 using k-block cross-validation). The regression methods (Support Vector Machine, Random Forest and Multiple Linear Regression) are calibrated with MODIS data and several predictors easily calculated from a Digital Elevation Model. The most important variables in the Random Forest model were satellite temperature, potential irradiation and cdayt, a cosine transformation of the Julian day.
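As a rough illustration of how such a comparison can be run under a blocked cross-validation, the sketch below uses scikit-learn's GroupKFold as a simple stand-in for the paper's spatio-temporal k-block scheme; the predictors (satellite temperature, elevation, a cosine of the Julian day) and the block labels are synthetic placeholders.

```python
# Hedged sketch: RF, SVM and MLR compared with a grouped (blocked) cross-validation.
# GroupKFold only approximates the spatio-temporal k-block design of the paper.
import numpy as np
from sklearn.model_selection import GroupKFold
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(1)
n = 600
X = np.column_stack([
    rng.normal(290, 8, n),                                   # satellite surface temperature, K
    rng.uniform(0, 2000, n),                                 # elevation from a DEM, m
    np.cos(2 * np.pi * rng.integers(1, 366, n) / 365.25),    # cdayt: cosine of the Julian day
])
y = 0.8 * (X[:, 0] - 273.15) - 0.006 * X[:, 1] + 2.0 * X[:, 2] + rng.normal(0, 2, n)
blocks = rng.integers(0, 10, n)                              # hypothetical spatio-temporal block labels

models = {
    "Random Forest": RandomForestRegressor(n_estimators=200, random_state=1),
    "SVM": make_pipeline(StandardScaler(), SVR(C=10.0)),
    "MLR": LinearRegression(),
}
for name, model in models.items():
    r2s, rmses = [], []
    for tr, te in GroupKFold(n_splits=5).split(X, y, groups=blocks):
        model.fit(X[tr], y[tr])
        pred = model.predict(X[te])
        r2s.append(r2_score(y[te], pred))
        rmses.append(np.sqrt(mean_squared_error(y[te], pred)))
    print(f"{name}: R2 = {np.mean(r2s):.3f} ± {np.std(r2s):.3f}, "
          f"RMSE = {np.mean(rmses):.2f} ± {np.std(rmses):.2f}")
```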


Author(s):  
Yun Fan ◽  
Vladimir Krasnopolsky ◽  
Huug van den Dool ◽  
Chung-Yu Wu ◽  
Jon Gottschalck

Abstract Forecast skill from dynamical forecast models decreases quickly with projection time due to various errors. Therefore, post-processing methods, from simple bias correction to more complicated multiple linear regression-based Model Output Statistics, are used to improve raw model forecasts. Usually, these methods show clear improvement over the raw model forecasts, especially for short-range weather forecasts. However, linear approaches have limitations because the relationship between predictands and predictors may be nonlinear. This is even more true for extended-range forecasts, such as Week 3-4 forecasts. In this study, neural network techniques are used to model the relationships between a set of predictors and predictands, and ultimately to improve the Week 3-4 precipitation and 2-meter temperature forecasts made by the NOAA NCEP Climate Forecast System. Benefiting from advances in machine learning in recent years, more flexible and capable algorithms and the availability of big datasets enable us not only to explore nonlinear features or relationships within a given large dataset, but also to extract more sophisticated pattern relationships and co-variabilities hidden within the multi-dimensional predictors and predictands. These more sophisticated relationships and high-level statistical information are then used to correct the model Week 3-4 precipitation and 2-meter temperature forecasts. The results show that neural network techniques can significantly improve Week 3-4 forecast accuracy and greatly increase efficiency over traditional multiple linear regression methods.
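To make the post-processing idea concrete, the sketch below trains a small scikit-learn MLP as a nonlinear correction to a synthetic "raw" forecast and compares it with a linear MOS-style regression; the predictors and data are placeholders, not the CFS fields used by the authors.

```python
# Hedged sketch: nonlinear (neural network) vs. linear post-processing of a raw forecast.
# Data and predictors are synthetic placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(2)
n = 2000
raw_fcst = rng.normal(0, 1, n)                     # raw Week 3-4 anomaly forecast (placeholder)
extra = rng.normal(0, 1, (n, 3))                   # additional model fields used as predictors
# Observations depend nonlinearly on the raw forecast (hypothetical truth).
obs = np.tanh(raw_fcst) + 0.3 * extra[:, 0] ** 2 + rng.normal(0, 0.3, n)

X = np.column_stack([raw_fcst, extra])
X_tr, X_te, y_tr, y_te, raw_tr, raw_te = train_test_split(
    X, obs, raw_fcst, test_size=0.25, random_state=0)

mos = LinearRegression().fit(X_tr, y_tr)                          # linear MOS-style correction
nn = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000,
                  random_state=0).fit(X_tr, y_tr)                 # nonlinear correction

print(f"Raw forecast MAE: {mean_absolute_error(y_te, raw_te):.3f}")
print(f"Linear MOS MAE:   {mean_absolute_error(y_te, mos.predict(X_te)):.3f}")
print(f"Neural net MAE:   {mean_absolute_error(y_te, nn.predict(X_te)):.3f}")
```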


Agriculture plays a significant role in the growth of the national economy. It relies on weather and other environmental factors such as soil, climate, flooding, fertilizers, temperature, precipitation, crop variety, insecticides and herbicides. Crop yield depends on these factors and is therefore difficult to predict. To understand the status of crop production, in this work we perform a descriptive study on agricultural data using various machine learning techniques. Crop yield estimation involves predicting yields from available historical data such as precipitation records, soil data and historical crop yields, which helps farmers anticipate crop yield before sowing. We use three datasets for Karnataka state, a clay (soil) dataset, a precipitation dataset and a production dataset, combine them into a single assembled dataset, and apply three different algorithms to obtain the estimated yield and compare the accuracy of the three methods. K-Nearest Neighbors (KNN), Support Vector Machine (SVM) and decision tree algorithms are trained on the training dataset and tested on the test dataset; the algorithms are implemented in Python using the Spyder IDE. The performance of the algorithms is compared using mean absolute error, cross-validation and accuracy, and the decision tree is found to give an accuracy of 99% with a very small mean squared error (MSE). The proposed model outputs the estimated crop yield and labels it as LOW, MID or HIGH.
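A condensed sketch of the comparison described above: KNN, SVM and a decision tree classify yield into LOW/MID/HIGH classes and are scored with 5-fold cross-validation. The synthetic features (precipitation, clay content) are illustrative, not the paper's Karnataka datasets.

```python
# Hedged sketch: KNN vs. SVM vs. decision tree on a LOW/MID/HIGH yield classification task.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
n = 500
X = np.column_stack([
    rng.normal(800, 120, n),   # seasonal precipitation, mm (hypothetical)
    rng.uniform(20, 60, n),    # soil clay content, % (hypothetical)
])
yield_t = 0.003 * X[:, 0] + 0.02 * X[:, 1] + rng.normal(0, 0.3, n)
y = np.digitize(yield_t, np.quantile(yield_t, [0.33, 0.66]))   # 0 = LOW, 1 = MID, 2 = HIGH

classifiers = [
    ("KNN", make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))),
    ("SVM", make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))),
    ("Decision tree", DecisionTreeClassifier(max_depth=6, random_state=0)),
]
for name, clf in classifiers:
    scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
    print(f"{name}: accuracy = {scores.mean():.3f} ± {scores.std():.3f}")
```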


10.2196/17110 ◽  
2020 ◽  
Vol 8 (3) ◽  
pp. e17110 ◽  
Author(s):  
Cheng-Sheng Yu ◽  
Yu-Jiun Lin ◽  
Chang-Hsien Lin ◽  
Sen-Te Wang ◽  
Shiyng-Yu Lin ◽  
...  

Background Metabolic syndrome is a cluster of disorders that significantly influence the development and deterioration of numerous diseases. FibroScan is an ultrasound device that was recently shown to predict metabolic syndrome with moderate accuracy. However, previous research on predicting metabolic syndrome in subjects examined with FibroScan has mainly relied on conventional statistical models, whereas machine learning, whereby a computer algorithm learns from prior experience, can offer better predictive performance than conventional statistical modeling. Objective We aimed to evaluate the accuracy of different decision tree machine learning algorithms in predicting metabolic syndrome in self-paid health examination subjects who were examined with FibroScan. Methods Multivariate logistic regression was conducted for every known risk factor of metabolic syndrome. Principal component analysis was used to visualize the distribution of metabolic syndrome patients. We further applied various statistical machine learning techniques to visualize and investigate the pattern and relationship between metabolic syndrome and several risk variables. Results Obesity, serum glutamic-oxaloacetic transaminase, serum glutamic-pyruvic transaminase, controlled attenuation parameter score, and glycated hemoglobin emerged as significant risk factors in multivariate logistic regression. The area under the receiver operating characteristic curve was 0.831 for classification and regression trees and 0.904 for the random forest. Conclusions Machine learning facilitates the identification of metabolic syndrome in self-paid health examination subjects with high accuracy.
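The sketch below mirrors the reported tree-based comparison in spirit: a single CART-style tree against a random forest, scored by the area under the ROC curve. The clinical features (BMI, AST, CAP score, HbA1c) and the synthetic labels are placeholders for the actual FibroScan and laboratory data.

```python
# Hedged sketch: CART-style tree vs. random forest, compared by ROC AUC on synthetic data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
n = 1000
X = np.column_stack([
    rng.normal(25, 4, n),     # BMI (obesity proxy)
    rng.normal(30, 10, n),    # AST / GOT
    rng.normal(250, 60, n),   # CAP score from FibroScan
    rng.normal(5.6, 0.7, n),  # HbA1c
])
logit = (0.2 * (X[:, 0] - 25) + 0.03 * (X[:, 1] - 30)
         + 0.01 * (X[:, 2] - 250) + 0.8 * (X[:, 3] - 5.6))
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))        # synthetic metabolic syndrome label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
cart = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)

print("CART AUC:         ", round(roc_auc_score(y_te, cart.predict_proba(X_te)[:, 1]), 3))
print("Random forest AUC:", round(roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1]), 3))
```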


2019 ◽  
Vol 16 (4) ◽  
pp. 155-169
Author(s):  
N. A. Azeez ◽  
A. A. Ajayi

Since the invention of Information and Communication Technology (ICT), there has been a great shift from the erstwhile traditional approach of handling information across the globe to the usage of this innovation, and its application cuts across almost all areas of human endeavour. ICT is widely utilized in the education and production sectors as well as in various financial institutions. Many people use it genuinely to carry out their day-to-day activities, while others use it to perform nefarious activities to the detriment of other cyber users. According to several reports discussed in the introductory part of this work, millions of people have become victims of fake Uniform Resource Locators (URLs) sent to their mail by spammers, and financial institutions have recorded monumental losses through this illicit act over the years. It is worth mentioning that, despite the several approaches currently in place, none can confidently be confirmed to provide the best and most reliable solution. Research findings reported in the literature have demonstrated how machine learning algorithms can be employed to verify and confirm compromised and fake URLs in cyberspace; however, inconsistencies have been noticed in these findings, and the corresponding results are not always dependable given the values obtained and the conclusions drawn from them. Against this backdrop, the authors carried out a comparative analysis of three learning algorithms (Naïve Bayes, Decision Tree and Logistic Regression) for verification of compromised, suspicious and fake URLs, to determine which performs best according to the evaluation metrics used (F-measure, precision and recall). Based on the confusion matrix measurements, the results show that the Decision Tree (ID3) algorithm achieves the highest values for recall, precision and F-measure, and it provides an efficient and credible means of maximizing the detection of compromised and malicious URLs. Finally, for future work, the authors are of the opinion that two or more supervised learning algorithms can be hybridized into a single, more effective and efficient algorithm for fake URL verification.
Keywords: Learning-algorithms, Forged-URL, Phoney-URL, performance-comparison
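As a compact illustration of the three-way comparison, the sketch below trains Naïve Bayes, a decision tree and logistic regression on a few hypothetical lexical URL features and reports precision, recall and F-measure; real feature extraction from URLs would be considerably richer.

```python
# Hedged sketch: Naive Bayes vs. decision tree vs. logistic regression on synthetic
# lexical URL features (length, digit count, "@" present, IP-based host).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score, recall_score, f1_score

rng = np.random.default_rng(5)
n = 2000
X = np.column_stack([
    rng.integers(10, 120, n),        # URL length
    rng.integers(0, 15, n),          # number of digits
    rng.binomial(1, 0.1, n),         # contains "@"
    rng.binomial(1, 0.2, n),         # uses an IP address instead of a domain
])
score = 0.03 * X[:, 0] + 0.1 * X[:, 1] + 1.5 * X[:, 2] + 1.2 * X[:, 3]
y = (score + rng.normal(0, 1, n) > np.median(score)).astype(int)   # 1 = fake/compromised URL

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
for name, clf in [("Naive Bayes", GaussianNB()),
                  ("Decision tree (ID3-like)", DecisionTreeClassifier(criterion="entropy", random_state=0)),
                  ("Logistic regression", LogisticRegression(max_iter=1000))]:
    clf.fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    print(f"{name}: precision = {precision_score(y_te, pred):.3f}, "
          f"recall = {recall_score(y_te, pred):.3f}, F-measure = {f1_score(y_te, pred):.3f}")
```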


2021 ◽  
Vol 931 (1) ◽  
pp. 012013
Author(s):  
Le Thi Nhut Suong ◽  
A V Bondarev ◽  
E V Kozlova

Abstract Geochemical studies of organic matter in source rocks play an important role in predicting the oil and gas accumulation of any territory, especially in oil and gas shale. For a deeper understanding, pyrolytic analyses are often carried out on samples before and after extraction of hydrocarbons with chloroform. However, extraction is a laborious and time-consuming process, and it doubles the workload of laboratory equipment and the analysis time. In this work, machine learning regression algorithms are applied to forecast S2ex from the pyrolytic results of non-extracted samples. The study uses more than 300 samples from 3 different wells in the Bazhenov Formation, Western Siberia. For developing a prediction model, 5 different machine learning regression algorithms, Multiple Linear Regression, Polynomial Regression, Support Vector Regression, Decision Tree and Random Forest, were tested and compared, with performance examined using the R-squared coefficient. The data of well X2 were used for building the model and were divided into two parts, 80% for training and 20% for testing. The model was also used to predict wells X1 and X3, and these predictions were compared with the real results obtained from standard experiments. Despite the limited amount of data, the results exceeded all expectations. The predictions also show that the relationship between the before- and after-extraction parameters is complex and non-linear: the R² values of Multiple Linear Regression and Polynomial Regression are negative, which means these models fail, whereas Random Forest and Decision Tree perform well. The same approach can be applied to predict other geochemical parameters by depth or extended to well-logging data.
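A schematic version of the model comparison, under the assumption that pyrolysis parameters of non-extracted samples (S1, S2, Tmax, TOC) are used to predict S2 after extraction; the data are synthetic placeholders, and only the R-squared scoring on an 80/20 split follows the setup described above.

```python
# Hedged sketch: five regressors predicting S2ex from non-extracted pyrolysis parameters,
# scored by R² on a held-out 20% split. Data are synthetic placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(6)
n = 300
S1, S2 = rng.uniform(0.5, 8, n), rng.uniform(5, 60, n)
Tmax, TOC = rng.uniform(420, 450, n), rng.uniform(2, 20, n)
# Hypothetical non-linear dependence of S2 after extraction on the raw parameters.
S2ex = 0.7 * S2 - 0.05 * S1 * S2 + 0.1 * TOC + rng.normal(0, 1.5, n)

X = np.column_stack([S1, S2, Tmax, TOC])
X_tr, X_te, y_tr, y_te = train_test_split(X, S2ex, test_size=0.2, random_state=0)

models = {
    "Multiple linear regression": LinearRegression(),
    "Polynomial regression (deg 2)": make_pipeline(PolynomialFeatures(2), LinearRegression()),
    "Support vector regression": make_pipeline(StandardScaler(), SVR(C=10.0)),
    "Decision tree": DecisionTreeRegressor(max_depth=6, random_state=0),
    "Random forest": RandomForestRegressor(n_estimators=300, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name}: R² = {r2_score(y_te, model.predict(X_te)):.3f}")
```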


Author(s):  
Marcos Ruiz-Álvarez ◽  
Francisco Alonso-Sarría ◽  
Francisco Gomariz-Castillo

Several methods have been tried to estimate air temperature using satellite imagery. In this paper, the results of two machine learning algorithms, Support Vector Machine and Random Forest, are compared with Multivariate Linear Regression, TVX and Ordinary Kriging. Several geographic, remote sensing and time variables are used as predictors. The validation is carried out using four different statistics on a daily basis, allowing the use of ANOVA to compare the results. The main conclusion is that Random Forest with residual kriging produces the best results (R² = 0.612 ± 0.019, NSE = 0.578 ± 0.025, RMSE = 1.068 ± 0.027, PBIAS = -0.172 ± 0.046), whereas TVX produces the least accurate results. The environmental conditions in the study area are not really suited to TVX; moreover, this method only takes satellite data into account. On the other hand, the regression methods (Support Vector Machine, Random Forest and Multivariate Linear Regression) use several parameters that are easily calculated from a Digital Elevation Model, adding very little difficulty to the use of satellite data alone. The most important variables in the Random Forest model were satellite temperature, potential irradiation and cdayt, a cosine transformation of the Julian day.
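A simplified sketch of the residual-correction idea behind "Random Forest with residual kriging": fit the forest on the predictors, model the spatial structure of its out-of-bag residuals, and add that correction back. A Gaussian process on station coordinates is used here only as a stand-in for ordinary kriging, and all data are synthetic.

```python
# Hedged sketch: random forest prediction plus a spatially interpolated residual correction.
# GaussianProcessRegressor stands in for ordinary kriging; a kriging library could replace it.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(7)
n = 400
coords = rng.uniform(0, 100, (n, 2))                  # station x, y coordinates, km
elev = rng.uniform(0, 1500, n)                        # elevation from a DEM, m
lst = rng.normal(300, 6, n)                           # satellite surface temperature, K
# Air temperature with a smooth spatial component the predictors do not capture.
spatial = 2.0 * np.sin(coords[:, 0] / 15) * np.cos(coords[:, 1] / 20)
t_air = 0.7 * (lst - 273.15) - 0.0065 * elev + spatial + rng.normal(0, 0.5, n)

X = np.column_stack([lst, elev])
X_tr, X_te, y_tr, y_te, c_tr, c_te = train_test_split(
    X, t_air, coords, test_size=0.3, random_state=0)

rf = RandomForestRegressor(n_estimators=300, oob_score=True, random_state=0).fit(X_tr, y_tr)
resid = y_tr - rf.oob_prediction_        # out-of-bag residuals avoid near-zero in-sample residuals

gp = GaussianProcessRegressor(kernel=RBF(20.0) + WhiteKernel(0.5), normalize_y=True)
gp.fit(c_tr, resid)                      # interpolate the residual field in space

pred_plain = rf.predict(X_te)
pred_resid = pred_plain + gp.predict(c_te)
print("RF RMSE:              ", round(np.sqrt(mean_squared_error(y_te, pred_plain)), 3))
print("RF + residual correction:", round(np.sqrt(mean_squared_error(y_te, pred_resid)), 3))
```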


2019 ◽  
Vol 11 (21) ◽  
pp. 2548
Author(s):  
Dong Luo ◽  
Douglas G. Goodin ◽  
Marcellus M. Caldas

Disasters change land use and land cover in unpredictable ways. Improving the accuracy of mapping a disaster area at different times is an essential step in analyzing the relationship between human activity and the environment. The goals of this study were to test the performance of different processing procedures and to examine the effect of adding the normalized difference vegetation index (NDVI) as an additional classification feature for mapping land cover changes due to a disaster. Using Landsat ETM+ and OLI images of the Bento Rodrigues mine tailing disaster area, we created two datasets, one with six bands and the other with six bands plus the NDVI. We used support vector machine (SVM) and decision tree (DT) algorithms to build classifier models and validated model performance using 10-fold cross-validation, resulting in accuracies higher than 90%. The processed results indicated that the accuracy could reach or exceed 80%, and that the support vector machine performed better than the decision tree. We also calculated each land cover type's sensitivity (true positive rate) and found that Agriculture, Forest and Mine sites had higher values while Bareland and Water had lower values. We then visualized the land cover maps for 2000 and 2017 and found that the Mine sites area roughly doubled in size, while Forest decreased by 12.43%. Our findings show that it is feasible to create a training data pool and use machine learning algorithms to classify Landsat products from different years, and that NDVI can improve the classification of vegetation-covered land. Furthermore, this approach provides a way to analyze land pattern change in a disaster area over time.
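A minimal sketch of the NDVI-augmented classification step: NDVI is computed from the red and near-infrared bands, stacked onto the six reflectance bands, and SVM and decision tree classifiers are scored with 10-fold cross-validation. The pixel samples and band ordering are synthetic placeholders for the Landsat ETM+/OLI data.

```python
# Hedged sketch: six-band vs. six-band-plus-NDVI classification with SVM and a decision tree.
import numpy as np
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(8)
n_pixels, n_classes = 1500, 5                     # e.g. Agriculture, Forest, Mine, Bareland, Water
labels = rng.integers(0, n_classes, n_pixels)
# Six-band reflectances whose means shift with class (synthetic stand-in for Landsat pixels).
bands = rng.normal(0.2, 0.05, (n_pixels, 6)) + 0.03 * labels[:, None]
red, nir = bands[:, 2], bands[:, 3]               # band order is illustrative

ndvi = (nir - red) / (nir + red + 1e-6)           # NDVI, with a small epsilon to avoid /0
X6 = bands                                        # six-band dataset
X7 = np.column_stack([bands, ndvi])               # six bands + NDVI

for set_name, X in [("6 bands", X6), ("6 bands + NDVI", X7)]:
    for clf_name, clf in [("SVM", SVC(kernel="rbf", C=10.0)),
                          ("Decision tree", DecisionTreeClassifier(max_depth=8, random_state=0))]:
        acc = cross_val_score(clf, X, labels, cv=10, scoring="accuracy")
        print(f"{clf_name}, {set_name}: accuracy = {acc.mean():.3f}")
```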

