How to Guarantee Food Safety via Grain Storage? An Approach to Improve Management Effectiveness by Machine Learning Algorithms

2021 ◽  
Vol 2 (8) ◽  
pp. 675-684
Author(s):  
Jin Wang ◽  
Youjun Jiang ◽  
Li Li ◽  
Chao Yang ◽  
Ke Li ◽  
...  

The purpose of grain storage management is to dynamically analyze the quality change of the reserved grains, adopt scientific and effective management methods to delay quality deterioration, and reduce the loss rate during storage. At present, supervision of grain quality in the reserve mainly depends on periodic measurements of the quality of the grains and the milled products. The data obtained by this approach are accurate and reliable, but the workload is heavy and the measurement frequency high. The conclusions obtained are also limited to the studied area and cannot be generalized to other scenarios. Therefore, there is an urgent need for a general method that can quickly predict the quality of grains across different species, regions and storage periods based on historical data. In this study, we introduced the Back-Propagation (BP) neural network algorithm and the support vector machine algorithm into the quality prediction of the reserved grains. We used quality index, temperature and humidity data to build both an intertemporal prediction model and a synchronous prediction model. The results show that the BP neural network based on the storage characters from the first three periods can accurately predict the key storage characters intertemporally. The support vector machine can provide precise predictions of the key storage characters synchronously. The average predictive error for each of wheat, rice and corn is less than 15%, while that for soybean is about 20%, all of which can meet practical demands. In conclusion, machine learning algorithms are helpful for improving the management effectiveness of grain storage.
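The intertemporal setup described above, predicting the next period's storage characters from the previous three, can be sketched with a minimal back-propagation network. Everything below is illustrative: the synthetic quality-index series, the network size and the learning rate are assumptions, not the study's actual data or architecture.

```python
import math, random

random.seed(0)

def train_bp(samples, n_hidden=5, lr=0.05, epochs=2000):
    """Train a one-hidden-layer BP network (sigmoid hidden units, linear output)."""
    n_in = len(samples[0][0])
    w1 = [[random.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(n_hidden)]
    b1 = [0.0] * n_hidden
    w2 = [random.uniform(-0.5, 0.5) for _ in range(n_hidden)]
    b2 = 0.0
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))
    for _ in range(epochs):
        for x, t in samples:
            # forward pass
            h = [sig(sum(w * xi for w, xi in zip(ws, x)) + b) for ws, b in zip(w1, b1)]
            y = sum(w * hi for w, hi in zip(w2, h)) + b2
            err = y - t
            # backward pass: output layer, then hidden layer
            for j in range(n_hidden):
                grad_h = err * w2[j] * h[j] * (1 - h[j])
                w2[j] -= lr * err * h[j]
                for i in range(n_in):
                    w1[j][i] -= lr * grad_h * x[i]
                b1[j] -= lr * grad_h
            b2 -= lr * err
    def predict(x):
        h = [sig(sum(w * xi for w, xi in zip(ws, x)) + b) for ws, b in zip(w1, b1)]
        return sum(w * hi for w, hi in zip(w2, h)) + b2
    return predict

# Synthetic quality-index series: each target is a noisy function of the
# three preceding periods (the intertemporal setup described above).
series = [50 + 0.8 * k + random.uniform(-0.5, 0.5) for k in range(40)]
samples = [((series[k] / 100, series[k + 1] / 100, series[k + 2] / 100),
            series[k + 3] / 100) for k in range(len(series) - 3)]
predict = train_bp(samples)
rel_errors = [abs(predict(x) - t) / t for x, t in samples]
mean_rel_error = sum(rel_errors) / len(rel_errors)
```

On this easy synthetic series the mean relative error comes out well under the 15% threshold the abstract reports for wheat, rice and corn.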

Author(s):  
Sheela Rani P ◽  
Dhivya S ◽  
Dharshini Priya M ◽  
Dharmila Chowdary A

Machine learning is a discipline of data analysis that uses data to improve learning, optimizing the training process and the environment in which learning happens. There are two types of machine learning approaches, supervised and unsupervised, which are used to extract the knowledge that helps decision-makers take the correct interventions in the future. This paper introduces a model for predicting the factors that influence students' academic performance, using supervised machine learning algorithms such as the support vector machine, KNN (k-nearest neighbors), Naïve Bayes and logistic regression. The results of the various algorithms are compared, and it is shown that the support vector machine and Naïve Bayes perform well, achieving improved accuracy compared to the other algorithms. The final prediction model in this paper has fairly high prediction accuracy. The objective is not just to predict the future performance of students but also to provide the best technique for finding the most impactful features that influence students while studying.
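As an illustration of one of the better-performing algorithms named above, here is a minimal Gaussian Naïve Bayes classifier. The student features (attendance, internal marks, weekly study hours) and the labels are hypothetical, chosen only to make the sketch runnable; they are not the paper's dataset.

```python
import math
from collections import defaultdict

def fit_gnb(X, y):
    """Fit Gaussian Naive Bayes: per-class feature means/variances plus log-priors."""
    by_class = defaultdict(list)
    for xi, yi in zip(X, y):
        by_class[yi].append(xi)
    model, n = {}, len(X)
    for c, rows in by_class.items():
        means = [sum(col) / len(rows) for col in zip(*rows)]
        variances = [max(sum((v - m) ** 2 for v in col) / len(rows), 1e-9)
                     for col, m in zip(zip(*rows), means)]
        model[c] = (math.log(len(rows) / n), means, variances)
    return model

def predict_gnb(model, x):
    """Pick the class with the highest log-posterior under the Gaussian model."""
    best_c, best_lp = None, float("-inf")
    for c, (log_prior, means, variances) in model.items():
        lp = log_prior + sum(
            -0.5 * math.log(2 * math.pi * var) - (v - m) ** 2 / (2 * var)
            for v, m, var in zip(x, means, variances))
        if lp > best_lp:
            best_c, best_lp = c, lp
    return best_c

# Hypothetical features: [attendance %, internal marks, hours studied/week]
X = [[95, 88, 12], [40, 35, 2], [85, 75, 9], [55, 40, 3],
     [92, 90, 14], [35, 30, 1], [78, 70, 8], [50, 45, 4]]
y = ["pass", "fail", "pass", "fail", "pass", "fail", "pass", "fail"]
model = fit_gnb(X, y)
train_acc = sum(predict_gnb(model, xi) == yi for xi, yi in zip(X, y)) / len(X)
```

With well-separated classes like these, the classifier reproduces every training label; real student data would of course be noisier.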


Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2119
Author(s):  
Victor Flores ◽  
Claudio Leiva

The copper mining industry is increasingly using artificial intelligence methods to improve copper production processes. Recent studies reveal the use of algorithms such as the Artificial Neural Network, Support Vector Machine, and Random Forest, among others, to develop models for predicting product quality. Other studies compare the predictive models developed with these machine learning algorithms in the mining industry as a whole. However, few published copper mining studies compare the results of machine learning techniques for copper recovery prediction. This study makes a detailed comparison between three models for predicting copper recovery by leaching, using four datasets resulting from mining operations in Northern Chile. The algorithms used for developing the models were Random Forest, Support Vector Machine, and Artificial Neural Network. To validate these models, four figures of merit were used: accuracy (acc), precision (p), recall (r), and the Matthews correlation coefficient (mcc). This paper describes the dataset preparation and the refinement of the threshold values used for the predictive variable most influential on the class (the copper recovery). Results show a precision above 98.50% and identify the model whose predictions best match the real values. Finally, the obtained models have the following mean values: acc = 0.943, p = 88.47, r = 0.995, and mcc = 0.232. These values are highly competitive when compared with those obtained in similar studies using other approaches in this context.
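The four figures of merit used for validation can be computed directly from the binary confusion-matrix counts. A short sketch with toy labels (not the study's data), where class 1 might stand for "recovery above the chosen threshold":

```python
import math

def binary_metrics(y_true, y_pred, positive=1):
    """Accuracy, precision, recall and Matthews correlation coefficient
    from the binary confusion-matrix counts."""
    tp = sum(t == positive and q == positive for t, q in zip(y_true, y_pred))
    tn = sum(t != positive and q != positive for t, q in zip(y_true, y_pred))
    fp = sum(t != positive and q == positive for t, q in zip(y_true, y_pred))
    fn = sum(t == positive and q != positive for t, q in zip(y_true, y_pred))
    acc = (tp + tn) / len(y_true)
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return acc, p, r, mcc

# Toy labels: 1 = "recovery above threshold", 0 = below (illustrative only).
y_true = [1, 1, 1, 1, 0, 0, 0, 1, 0, 1]
y_pred = [1, 1, 1, 0, 0, 0, 1, 1, 0, 1]
acc, p, r, mcc = binary_metrics(y_true, y_pred)
```

Unlike accuracy, the mcc stays informative on imbalanced classes, which is why studies such as this one report it alongside p and r.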


PLoS ONE ◽  
2021 ◽  
Vol 16 (11) ◽  
pp. e0258788
Author(s):  
Sarra Ayouni ◽  
Fahima Hajjej ◽  
Mohamed Maddeh ◽  
Shaha Al-Otaibi

Educational research increasingly emphasizes the potential of student engagement and its impact on performance, retention and persistence. This construct has been an important paradigm in the higher education field for many decades. However, evaluating and predicting a student's engagement level in an online environment remains a challenge. The purpose of this study is to suggest an intelligent predictive system that predicts the student's engagement level and then provides the students with feedback to enhance their motivation and dedication. Three categories of students are defined depending on their engagement level (Not Engaged, Passively Engaged, and Actively Engaged). We applied three different machine-learning algorithms, namely Decision Tree, Support Vector Machine and Artificial Neural Network, to students' activities recorded in Learning Management System reports. The results demonstrate that machine learning algorithms can predict the student's engagement level. In addition, according to the performance metrics of the different algorithms, the Artificial Neural Network has a greater accuracy rate (85%) compared to the Support Vector Machine (80%) and Decision Tree (75%) classification techniques. Based on these results, the intelligent predictive system sends feedback to the students and alerts the instructor once a student's engagement level decreases. The instructor can identify the students' difficulties during the course and motivate them through e-mail reminders, course messages, or scheduling an online meeting.


2019 ◽  
Vol 18 (3) ◽  
pp. 742-766 ◽  
Author(s):  
Anna Kurtukova ◽  
Alexander Romanov

The paper is devoted to the analysis of the problem of determining the source code author, which is of interest to researchers in the fields of information security, computer forensics, assessment of the quality of the educational process, and protection of intellectual property. The paper presents a detailed analysis of modern solutions to the problem. The authors suggest two new identification techniques: one based on machine learning algorithms (a support vector machine, a fast correlation filter and informative features) and one based on a hybrid convolutional recurrent neural network. The experimental database includes samples of source code written in Java, C++, Python, PHP, JavaScript, C, C# and Ruby. The data were obtained using the web service for hosting IT projects, GitHub. The total number of source code samples exceeds 150 thousand, the average length of each is 850 characters, and the corpus covers 542 authors. The experiments were conducted with source code written in the most popular programming languages. The accuracy of the developed techniques for different numbers of authors was assessed using 10-fold cross-validation. An additional series of experiments was conducted with the number of authors ranging from 2 to 50 for Java, the most popular programming language in the corpus. Graphs of the relationship between identification accuracy and corpus size are plotted. Analysis of the results showed that the method based on the hybrid neural network gives 97% accuracy, which is currently the best-known result, while the technique based on the support vector machine achieved 96% accuracy. Across the experiments, the results of the two techniques differed by up to approximately 5%.
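The 10-fold cross-validation protocol used to assess the techniques can be sketched generically. The nearest-centroid classifier and the two-feature "style statistics" below are hypothetical stand-ins for the paper's actual models and code features:

```python
import random

random.seed(1)

def k_fold_accuracy(X, y, train_fn, predict_fn, k=10):
    """k-fold cross-validation: average held-out accuracy over k folds."""
    idx = list(range(len(X)))
    random.shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    accs = []
    for fold in folds:
        held_out = set(fold)
        Xtr = [X[i] for i in idx if i not in held_out]
        ytr = [y[i] for i in idx if i not in held_out]
        model = train_fn(Xtr, ytr)
        hits = sum(predict_fn(model, X[i]) == y[i] for i in fold)
        accs.append(hits / len(fold))
    return sum(accs) / len(accs)

def train_centroid(X, y):
    """Per-class mean feature vector (a toy stand-in for a real classifier)."""
    cents = {}
    for c in set(y):
        rows = [x for x, yc in zip(X, y) if yc == c]
        cents[c] = [sum(col) / len(rows) for col in zip(*rows)]
    return cents

def predict_centroid(cents, x):
    return min(cents, key=lambda c: sum((a - b) ** 2 for a, b in zip(x, cents[c])))

# Two synthetic "authors" with distinct style statistics, e.g. average
# identifier length and comment density (hypothetical features).
X = [[random.gauss(5, 0.5), random.gauss(0.2, 0.05)] for _ in range(30)] + \
    [[random.gauss(9, 0.5), random.gauss(0.6, 0.05)] for _ in range(30)]
y = ["author_a"] * 30 + ["author_b"] * 30
cv_acc = k_fold_accuracy(X, y, train_centroid, predict_centroid, k=10)
```

Averaging over ten held-out folds, as here, gives a less optimistic and less variable accuracy estimate than a single train/test split.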


Sensor Review ◽  
2016 ◽  
Vol 36 (2) ◽  
pp. 207-216 ◽  
Author(s):  
Liyuan Xu ◽  
Jie He ◽  
Shihong Duan ◽  
Xibin Wu ◽  
Qin Wang

Purpose The sensor-array and pattern-recognition-based electronic nose (E-nose) is a typical detection and recognition instrument for indoor air quality (IAQ). The E-nose is able to monitor several pollutants in the air by mimicking the human olfactory system. Formaldehyde concentration prediction is one of the major functionalities of the E-nose, for which three typical machine learning (ML) algorithms are most frequently used: the back propagation (BP) neural network, the radial basis function (RBF) neural network and support vector regression (SVR). Design/methodology/approach This paper comparatively evaluates and analyzes those three ML algorithms under a controllable environment, built on a commercial sensor-array E-nose platform. Variable temperature (T), relative humidity (RH) and pollutant concentration (C) conditions were measured during the experiments to support the investigation. Findings Regression models have been built using the above-mentioned three typical algorithms, and in-depth analysis demonstrates that the BP neural network model gives better prediction performance than the others. Originality/value Finally, the empirical results prove that ML algorithms, combined with low-cost sensors, can achieve high-precision indoor contaminant concentration detection.
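As a hedged illustration of the RBF-based regression approach (one of the three compared algorithms), the sketch below fits an RBF network by regularized least squares to synthetic E-nose readings. The sensor-response model, the feature scaling and the hyperparameters are all assumptions, not the paper's setup.

```python
import math, random

random.seed(2)

def rbf_features(x, centers, gamma):
    """Gaussian RBF features for input x, plus a constant bias term."""
    return [math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, c)))
            for c in centers] + [1.0]

def solve(A, b):
    """Gaussian elimination with partial pivoting for the square system A w = b."""
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):
        w[r] = (M[r][n] - sum(M[r][c] * w[c] for c in range(r + 1, n))) / M[r][r]
    return w

def fit_rbf(X, y, gamma=2.0, ridge=1e-4):
    """RBF network fitted by ridge-regularized linear least squares."""
    centers = X[::3]                       # every third training point as a center
    Phi = [rbf_features(x, centers, gamma) for x in X]
    m = len(Phi[0])
    # normal equations (Phi^T Phi + ridge*I) w = Phi^T y
    A = [[sum(Phi[k][i] * Phi[k][j] for k in range(len(Phi)))
          + (ridge if i == j else 0.0) for j in range(m)] for i in range(m)]
    b = [sum(Phi[k][i] * y[k] for k in range(len(Phi))) for i in range(m)]
    w = solve(A, b)
    return lambda x: sum(wi * f for wi, f in zip(w, rbf_features(x, centers, gamma)))

# Synthetic E-nose samples: sensor response as a noisy function of the
# formaldehyde concentration C plus small temperature/humidity effects.
data = []
for _ in range(120):
    C = random.uniform(0.05, 1.0)                 # hypothetical concentration (ppm)
    Tn, RHn = random.random(), random.random()    # normalized T and RH
    resp = 2.0 * C + 0.1 * Tn + 0.05 * RHn + random.gauss(0, 0.02)
    data.append(([resp / 2.5, Tn, RHn], C))

train, test = data[:80], data[80:]
predict = fit_rbf([x for x, _ in train], [c for _, c in train])
mae = sum(abs(predict(x) - c) for x, c in test) / len(test)
```

Scaling all features to a comparable range before computing the Gaussian distances matters here; otherwise the temperature and humidity terms would dominate the kernel.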


Author(s):  
Zhenxing Wu ◽  
Minfeng Zhu ◽  
Yu Kang ◽  
Elaine Lai-Han Leung ◽  
Tailong Lei ◽  
...  

Abstract Although a wide variety of machine learning (ML) algorithms have been utilized to learn quantitative structure–activity relationships (QSARs), there is no agreed single best algorithm for QSAR learning. Therefore, a comprehensive understanding of the performance characteristics of popular ML algorithms used in QSAR learning is highly desirable. In this study, five linear algorithms [linear function Gaussian process regression (linear-GPR), linear function support vector machine (linear-SVM), partial least squares regression (PLSR), multiple linear regression (MLR) and principal component regression (PCR)], three analogizers [radial basis function support vector machine (rbf-SVM), K-nearest neighbor (KNN) and radial basis function Gaussian process regression (rbf-GPR)], six symbolists [extreme gradient boosting (XGBoost), Cubist, random forest (RF), multiple adaptive regression splines (MARS), gradient boosting machine (GBM), and classification and regression tree (CART)] and two connectionists [principal component analysis artificial neural network (pca-ANN) and deep neural network (DNN)] were employed to learn regression-based QSAR models for 14 public data sets comprising nine physicochemical properties and five toxicity endpoints. The results show that rbf-SVM, rbf-GPR, XGBoost and DNN generally perform better than the other algorithms. The overall performances of the different algorithms can be ranked from best to worst as follows: rbf-SVM > XGBoost > rbf-GPR > Cubist > GBM > DNN > RF > pca-ANN > MARS > linear-GPR ≈ KNN > linear-SVM ≈ PLSR > CART ≈ PCR ≈ MLR. In terms of prediction accuracy and computational efficiency, SVM and XGBoost are recommended for regression learning on small data sets, and XGBoost is an excellent choice for large data sets. We then investigated the performance of ensemble models built by integrating the predictions of multiple ML algorithms. The results illustrate that ensembles of two or three algorithms from different categories can indeed improve on the predictions of the best individual ML algorithms.
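The benefit of ensembling algorithms from different categories can be seen in a tiny numerical sketch. The two regressors here are hypothetical stand-ins (not the study's models) with roughly opposite systematic biases, so averaging their predictions cancels much of the error:

```python
import math, random

random.seed(3)

# Ground-truth targets and two hypothetical models whose errors are a
# systematic bias (opposite signs) plus independent noise.
y_true = [math.sin(0.3 * k) for k in range(50)]
pred_a = [v + 0.15 + random.gauss(0, 0.05) for v in y_true]
pred_b = [v - 0.15 + random.gauss(0, 0.05) for v in y_true]

def rmse(pred):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(pred, y_true)) / len(y_true))

# Simple averaging ensemble of the two predictors.
pred_ens = [(a + b) / 2 for a, b in zip(pred_a, pred_b)]
rmse_a, rmse_b, rmse_ens = rmse(pred_a), rmse(pred_b), rmse(pred_ens)
```

When model errors are anticorrelated or independent, as in this construction, the averaged ensemble has a lower RMSE than either member; ensembling identical models would give no such gain, which is why the study combines algorithms from different categories.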


Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-10
Author(s):  
Hang Chen ◽  
Sulaiman Khan ◽  
Bo Kou ◽  
Shah Nazir ◽  
Wei Liu ◽  
...  

The emergence of Internet of Things (IoT)-enabled applications has inspired the world during the last few years, providing state-of-the-art and novel solutions for different problems. This evolutionary field is mainly led by wireless sensor networks, radio frequency identification, and smart mobile technologies. Among others, the IoT plays a key role in the form of smart medical devices and wearables, with the ability to collect varied and longitudinal patient-generated health data while also offering preliminary diagnosis options. In terms of efforts made to help patients using IoT-based solutions, experts exploit the capabilities of machine learning algorithms to provide efficient solutions for hemorrhage diagnosis. To reduce death rates and propose accurate treatment, this paper presents a smart IoT-based application using machine learning algorithms for human brain hemorrhage diagnosis. Based on computerized tomography scan images of an intracranial dataset, a support vector machine and a feedforward neural network were applied for classification. Overall, classification accuracies of 80.67% and 86.7% were obtained for the support vector machine and the feedforward neural network, respectively. It is concluded from the resulting analysis that the feedforward neural network outperforms the support vector machine in classifying intracranial images. The output generated by the classification tool gives information about the type of brain hemorrhage, which ultimately helps in validating the expert's diagnosis and serves as a learning tool for trainee radiologists, minimizing the errors in the available systems.


2022 ◽  
Vol 9 (1) ◽  
pp. 1-12
Author(s):  
Sipu Hou ◽  
Zongzhen Cai ◽  
Jiming Wu ◽  
Hongwei Du ◽  
Peng Xie

It is not easy for banks to sell their term-deposit products to new clients, because many factors affect customers' purchasing decisions and because banks may have difficulty identifying their target customers. To address this issue, we use different supervised machine learning algorithms to predict whether a customer will subscribe to a bank term deposit and then compare the performance of these prediction models. Specifically, the current paper employs five algorithms: Naïve Bayes, Decision Tree, Random Forest, Support Vector Machine and Neural Network. This paper thus contributes to the artificial intelligence and Big Data field with important evidence of the best-performing model for predicting bank term deposit subscription.


2018 ◽  
Author(s):  
Nazmul Hossain ◽  
Fumihiko Yokota ◽  
Akira Fukuda ◽  
Ashir Ahmed

BACKGROUND Predictive analytics through machine learning has been used extensively across industries, including eHealth and mHealth, for analyzing patients' health data, predicting diseases, enhancing the productivity of the technology or devices used for providing healthcare services, and so on. However, not enough studies have been conducted to predict the usage of eHealth by rural patients in developing countries. OBJECTIVE The objective of this study is to predict rural patients' use of eHealth through supervised machine learning algorithms and to propose the best-fitting model after evaluating their performance in terms of predictive accuracy. METHODS Data were collected between June and July 2016 through a field survey with a structured questionnaire from 292 randomly selected rural patients in a remote north-western sub-district of Bangladesh. Four supervised machine learning algorithms, namely logistic regression, boosted decision tree, support vector machine, and artificial neural network, were chosen for this experiment. A 'correlation-based feature selection' technique was applied to include the most relevant but not redundant features in the model. A 10-fold cross-validation technique was applied to reduce bias and over-fitting of the data. RESULTS Logistic regression outperformed the other three algorithms with 85.9% predictive accuracy, 86.4% precision, 90.5% recall, 88.1% F-score, and an AUC of 91.5%, followed by the neural network, decision tree and support vector machine with accuracy rates of 84.2%, 82.9%, and 80.4%, respectively. CONCLUSIONS The findings of this study are expected to be helpful for eHealth practitioners in selecting appropriate areas to serve and in dealing with both under-capacity and over-capacity by predicting the patients' response in advance with a certain level of accuracy and precision.
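The relevance half of correlation-based feature selection can be sketched as a ranking by absolute Pearson correlation with the target (full CFS also penalizes inter-feature redundancy, which this sketch omits). The survey features and their values are hypothetical, not the study's data:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sx = math.sqrt(sum((a - mx) ** 2 for a in xs))
    sy = math.sqrt(sum((b - my) ** 2 for b in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def select_features(X, y, names, k=2):
    """Rank features by |corr(feature, target)| and keep the top k."""
    cols = list(zip(*X))
    scored = sorted(((abs(pearson(col, y)), nm) for col, nm in zip(cols, names)),
                    reverse=True)
    return [nm for _, nm in scored[:k]]

# Hypothetical survey features for eHealth-usage prediction.
names = ["distance_to_clinic", "mobile_ownership", "age", "random_noise"]
X = [[12, 1, 34, 7], [3, 1, 51, 2], [25, 0, 40, 9], [18, 0, 62, 4],
     [5, 1, 29, 6], [30, 0, 45, 1], [8, 1, 38, 8], [22, 0, 57, 3]]
y = [1, 1, 0, 0, 1, 0, 1, 0]   # 1 = used eHealth services
selected = select_features(X, y, names, k=2)
```

On this toy data the ranking keeps the two genuinely informative features and discards the noise column, which is the behavior the study relies on to shrink its survey feature set.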



