Automatic Determination of Agricultural Plant Diseases

2021 ◽  
Vol 4 ◽  
pp. 23-28
Author(s):  
Andrii Afonin ◽  
Kyrylo Kundik

Machine learning technologies have developed rapidly in recent years, and people are now able to use them in various spheres of life, making their lives easier and better. The agro-industry is not lagging behind, and every year more and more problems in this area are solved with the help of machine learning algorithms. However, among the problems that have not yet been solved is the identification of diseases of agricultural plants. According to UN research, about 40% of the world's harvest dies each year from various diseases, most of which could be avoided through timely intervention and treatment. To solve this problem, we offer an easy, accessible service for everyone, which allows one to predict, from an image of a plant's leaves, whether the plant is sick or healthy and whether it needs any intervention. This service will be indispensable for small farms engaged in growing crops, as it will allow employees of such enterprises to immediately detect diseases and receive recommendations for the care of the plants important to them. We therefore developed a neural network architecture that solves this problem: the prediction of a plant disease from an image of its leaves. This neural network model is lightweight, does not take much time to train, and has high accuracy on our dataset. We also investigated which popular deep neural network architectures (e.g. XceptionNet, DenseNet) achieve high accuracy on this problem. To make the model usable by end users, i.e. farmers, we developed a special web service in the form of a Telegram bot. With this bot, anyone can upload images of the leaves of agricultural plants and check whether the plant is healthy or affected by disease. The bot is also trained to give appropriate advice to gardeners on the treatment of diseases or the proper cultivation of healthy plants. This solution fully solves the problem and has every chance to become an indispensable helper in preserving the world's harvest.
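A minimal sketch of the kind of leaf-image classifier described above, using transfer learning from one of the architectures the abstract names (DenseNet). The class count, image size, and data paths are illustrative placeholders, not the authors' actual configuration:

```python
# Hedged sketch: a lightweight transfer-learning classifier for leaf images.
# NUM_CLASSES, IMG_SIZE, and the data directory are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 38          # hypothetical number of plant/disease classes
IMG_SIZE = (224, 224)

train_ds = tf.keras.utils.image_dataset_from_directory(
    "leaf_images/train", image_size=IMG_SIZE, batch_size=32)

# Pre-trained backbone; the paper compares several such architectures.
base = tf.keras.applications.DenseNet121(
    include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,))
base.trainable = False    # freeze features, train only the classifier head

model = models.Sequential([
    layers.Rescaling(1.0 / 255),          # simple normalisation for the sketch
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.2),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=10)
```

Freezing the backbone keeps the model lightweight and fast to train, matching the design goal stated in the abstract.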

The applications of a content-based image retrieval system in fields such as multimedia, security, medicine, and entertainment have been implemented on a huge real-time database by using a convolutional neural network architecture. Until now, content-based image retrieval systems have generally been implemented with classical machine learning algorithms. Such an algorithm is applicable only to a limited database because of the few feature-extraction hidden layers between the input and output layers. The proposed convolutional neural network architecture was successfully implemented using 128 convolutional layers, pooling layers, rectified linear units (ReLU), and fully connected layers. A convolutional neural network architecture yields better results because of its ability to extract features from an image. The Euclidean distance metric is used to calculate the similarity between the query image and the database images. The system is implemented on the COREL database and its performance is evaluated using precision, recall, and F-score.
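The retrieval step named in the abstract reduces to ranking database images by Euclidean distance between feature vectors. A minimal sketch, assuming features have already been extracted by the CNN; the evaluation helper reflects the precision/recall definitions standard in CBIR:

```python
# Hedged sketch of Euclidean-distance retrieval over CNN feature vectors.
import numpy as np

def euclidean_rank(query_feat: np.ndarray, db_feats: np.ndarray, top_k: int = 10):
    """Return indices of the top_k database images closest to the query."""
    dists = np.linalg.norm(db_feats - query_feat, axis=1)  # ||q - x_i||_2
    return np.argsort(dists)[:top_k]

def precision_recall(retrieved, relevant):
    """Precision and recall of a retrieved set against the relevant set."""
    hits = len(set(retrieved) & set(relevant))
    return hits / len(retrieved), hits / len(relevant)

# Toy usage with random stand-in features.
db = np.random.rand(100, 128)
q = np.random.rand(128)
print(euclidean_rank(q, db, top_k=5))
```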


Author(s):  
А.И. Паршин ◽  
М.Н. Аралов ◽  
В.Ф. Барабанов ◽  
Н.И. Гребенникова

The image recognition task is one of the most difficult in machine learning, requiring both deep knowledge and large time and computational resources from the researcher. For nonlinear and complex data, various deep neural network architectures are used, but choosing the right neural network remains a difficult problem. The most widely used architectures are convolutional neural networks (CNN), recurrent neural networks (RNN), and deep neural networks (DNN). On the basis of recurrent neural networks, Long Short-Term Memory networks (LSTM) and Gated Recurrent Unit networks (GRU) were developed. Each neural network architecture has its own structure, its own tunable and trainable parameters, and its own advantages and disadvantages. By combining different types of neural networks, one can significantly improve the quality of prediction in various machine learning problems. Since the choice of the optimal network architecture and its parameters is an extremely difficult task, we consider one method for constructing a neural network architecture based on a combination of convolutional, recurrent, and deep neural networks. We show that such architectures outperform classical machine learning algorithms.
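A minimal sketch of one such CNN + RNN combination: convolutional layers extract local features, which an LSTM then reads row by row before a dense classifier. The input shape, layer sizes, and class count are illustrative, not the paper's configuration:

```python
# Hedged sketch: a hybrid CNN -> LSTM image classifier.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(32, 32, 3)),
    layers.Conv2D(32, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),               # feature map is now (8, 8, 64)
    layers.Reshape((8, 8 * 64)),         # treat each row as one time step
    layers.LSTM(128),                    # recurrent aggregation over rows
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```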


2021 ◽  
Author(s):  
Max Schneckenburger ◽  
Sven Höfler ◽  
Luis Garcia ◽  
Rui Almeida ◽  
Rainer Börret

Abstract Robot polishing is increasingly being used in the production of high-end glass workpieces such as astronomy mirrors, lithography lenses, laser gyroscopes or high-precision coordinate measuring machines. The quality of optical components such as lenses or mirrors can be described by shape errors and surface roughness. Whilst the trend towards sub-nanometre-level surface finishes and features progresses, matching both form and finish coherently in complex parts remains a major challenge. With increasing optic sizes, the stability of the polishing process becomes more and more important. If not known empirically, the optical surface must be measured after each polishing step. One approach is to mount sensors on the polishing head in order to measure process-relevant quantities. On the basis of these data, machine learning algorithms can be applied for surface value prediction. Owing to the modification of the polishing head by the installation of sensors and the resulting process influences, the first machine learning model could only make removal predictions with insufficient accuracy. The aim of this work is to present a polishing head optimised for the sensors, coupled with a machine learning model that predicts the material removal and failure of the polishing head during robot polishing. The artificial neural network (ANN) is developed in the Python programming language using the Keras deep learning library. Starting from a simple network architecture and common training parameters, the model is then optimized step by step using different methods. The data collected by a design of experiments with the sensor-integrated glass polishing head are used to train the machine learning model and to validate the results. The neural network achieves a prediction accuracy of the material removal of 99.22 %.
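A minimal sketch of the kind of Keras regression network described, mapping polishing-head sensor readings to material removal. The number of sensor channels, layer sizes, and the placeholder data are assumptions, not the authors' final optimized configuration:

```python
# Hedged sketch: Keras ANN regressing material removal from sensor signals.
import numpy as np
from tensorflow.keras import layers, models

N_SENSORS = 12  # hypothetical number of process signals from the polishing head

model = models.Sequential([
    layers.Input(shape=(N_SENSORS,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(64, activation="relu"),
    layers.Dense(1),                       # predicted material removal
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])

# Placeholders standing in for the design-of-experiments data.
X = np.random.rand(1000, N_SENSORS)
y = np.random.rand(1000, 1)
model.fit(X, y, epochs=50, validation_split=0.2, verbose=0)
```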


2021 ◽  
Vol 186 (Supplement_1) ◽  
pp. 445-451
Author(s):  
Yifei Sun ◽  
Navid Rashedi ◽  
Vikrant Vaze ◽  
Parikshit Shah ◽  
Ryan Halter ◽  
...  

ABSTRACT Introduction Early prediction of the acute hypotensive episode (AHE) in critically ill patients has the potential to improve outcomes. In this study, we apply different machine learning algorithms to the MIMIC III Physionet dataset, containing more than 60,000 real-world intensive care unit records, to test commonly used machine learning technologies and compare their performances. Materials and Methods Five classification methods including K-nearest neighbor, logistic regression, support vector machine, random forest, and a deep learning method called long short-term memory are applied to predict an AHE 30 minutes in advance. An analysis comparing model performance when including versus excluding invasive features was conducted. To further study the pattern of the underlying mean arterial pressure (MAP), we apply linear regression to predict the continuous MAP values over the next 60 minutes. Results Support vector machine yields the best performance in terms of recall (84%). Including the invasive features in the classification improves the performance significantly, with both recall and precision increasing by more than 20 percentage points. We were able to predict the MAP 60 minutes into the future with a root mean square error (a frequently used measure of the differences between the predicted values and the observed values) of 10 mmHg. After converting continuous MAP predictions into AHE binary predictions, we achieve a 91% recall and 68% precision. In addition to predicting AHE, the MAP predictions provide clinically useful information regarding the timing and severity of the AHE occurrence. Conclusion We were able to predict AHE with precision and recall above 80% 30 minutes in advance with the large real-world dataset. The regression model's predictions provide a more fine-grained, interpretable signal to practitioners. Model performance is improved by the inclusion of invasive features in predicting AHE, when compared to predicting the AHE based only on the available, restricted set of noninvasive technologies. This demonstrates the importance of exploring more noninvasive technologies for AHE prediction.
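The conversion step the abstract describes (continuous MAP forecast to binary AHE alarm) amounts to thresholding. A minimal sketch, assuming the common MAP < 60 mmHg criterion; the paper's exact AHE definition is not restated here, so the threshold is an assumption:

```python
# Hedged sketch: thresholding a MAP forecast into binary AHE predictions.
import numpy as np
from sklearn.metrics import precision_score, recall_score

def map_to_ahe(map_forecast_mmHg, threshold=60.0):
    """Flag an AHE when predicted mean arterial pressure falls below threshold.
    The 60 mmHg cutoff is a commonly used value, assumed for illustration."""
    return (np.asarray(map_forecast_mmHg) < threshold).astype(int)

# Placeholder arrays standing in for model output and chart-derived labels.
predicted_map = np.array([72.0, 65.0, 58.0, 55.0, 80.0])
y_true        = np.array([0,    0,    1,    1,    0])

y_pred = map_to_ahe(predicted_map)
print(precision_score(y_true, y_pred), recall_score(y_true, y_pred))
```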


2021 ◽  
Vol 11 (4) ◽  
pp. 1829
Author(s):  
Davide Grande ◽  
Catherine A. Harris ◽  
Giles Thomas ◽  
Enrico Anderlini

Recurrent Neural Networks (RNNs) are increasingly being used for model identification, forecasting and control. When identifying physical systems whose mathematical models are unknown, Nonlinear AutoRegressive models with eXogenous inputs (NARX) or Nonlinear AutoRegressive Moving-Average models with eXogenous inputs (NARMAX) methods are typically used. In the context of data-driven control, machine learning algorithms have been shown to perform comparably to advanced control techniques, but they lack the guarantees of traditional stability theory. This paper illustrates a method to prove a posteriori the stability of a generic neural network, showing its application to a state-of-the-art RNN architecture. The presented method relies on identifying the poles associated with the network identified from the input/output data. A framework that guarantees the stability of any neural network architecture, combined with generalisability and applicability to different fields, can significantly broaden the use of neural networks in dynamic systems modelling and control.
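A minimal sketch of a pole-based a posteriori stability check in this spirit: for a discrete-time linear approximation x[k+1] = W_rec·x[k] + W_in·u[k], the dynamics are stable when every eigenvalue of W_rec lies inside the unit circle. This is a generic check under that linearisation assumption, not the authors' exact identification procedure:

```python
# Hedged sketch: check stability of linearised recurrent dynamics via poles.
import numpy as np

def poles_stable(W_rec: np.ndarray, tol: float = 1e-9) -> bool:
    """Return True if all poles (eigenvalues of W_rec) lie inside the unit circle."""
    poles = np.linalg.eigvals(W_rec)
    return bool(np.all(np.abs(poles) < 1.0 - tol))

W_rec = np.array([[0.5, 0.1],
                  [0.0, 0.8]])      # example recurrent weight matrix
print(poles_stable(W_rec))          # True: poles 0.5 and 0.8 are inside the circle
```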


2020 ◽  
pp. 1-12
Author(s):  
Cao Yanli

The research on the risk pricing of Internet finance online loans not only enriches the theory and methods of online loan pricing, but also helps to improve the level of online loan risk pricing. In order to improve the efficiency of Internet financial supervision, this article builds an Internet financial supervision system based on machine learning algorithms and improved neural network algorithms. On the basis of factor analysis and discretization of loan data, this paper selects the relatively mature logistic regression model to evaluate the credit risk of the borrower, considering the comprehensive management of credit risk and its matching with income. In addition, following the provisions of the New Basel Accord on expected losses and economic capital, this article combines the credit risk assessment results with regional research on the relevant factors to conduct an empirical analysis. The results show that the model constructed in this paper is reliable.
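A minimal sketch of the logistic-regression credit-risk step mentioned above. The feature columns and labels are illustrative placeholders; the paper's actual loan variables come from its factor analysis and discretization stage:

```python
# Hedged sketch: logistic regression as a borrower credit-risk scorer.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: discretized borrower features (e.g. income band, loan-term band).
X = np.array([[1, 2, 0], [0, 1, 1], [2, 0, 1], [1, 1, 0], [2, 2, 1]])
y = np.array([0, 1, 0, 0, 1])      # 1 = default, 0 = repaid

clf = LogisticRegression().fit(X, y)
# Probability of default, usable as an input to risk-based loan pricing.
print(clf.predict_proba(X)[:, 1])
```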


Author(s):  
E. Yu. Shchetinin

The recognition of human emotions is one of the most relevant and dynamically developing areas of modern speech technologies, and the recognition of emotions in speech (RER) is the most demanded part of them. In this paper, we propose a computer model of emotion recognition based on an ensemble of a bidirectional recurrent neural network with LSTM memory cells and the deep convolutional neural network ResNet18. Computer studies are carried out on the RAVDESS database, a dataset of 7,356 files containing emotional human speech. The recordings contain the following emotions: 0 – neutral, 1 – calm, 2 – happiness, 3 – sadness, 4 – anger, 5 – fear, 6 – disgust, 7 – surprise. In total, the database contains 16 classes (8 emotions, split by male and female speakers) for a total of 1,440 speech-only samples. To train machine learning algorithms and deep neural networks to recognize emotions, the audio recordings must be pre-processed so as to extract the main characteristic features of particular emotions. This was done using Mel-frequency cepstral coefficients, chroma coefficients, and characteristics of the frequency spectrum of the audio recordings. Computer studies of various neural network models for emotion recognition are carried out on these data, and machine learning algorithms are used for comparative analysis. The following models were trained during the experiments: logistic regression (LR), a support vector machine classifier (SVM), a decision tree (DT), a random forest (RF), gradient boosting over trees (XGBoost), a convolutional neural network (CNN, ResNet18), a recurrent neural network (RNN), and an ensemble of convolutional and recurrent networks (Stacked CNN-RNN). The results show that the neural networks achieved much higher accuracy in recognizing and classifying emotions than the machine learning algorithms. Of the three neural network models presented, the CNN + BLSTM ensemble showed the highest accuracy.
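A minimal sketch of the feature-extraction step described above, computing MFCC, chroma, and mel-spectrum features with librosa and averaging over time to obtain one fixed-length vector per recording. The file path is a placeholder:

```python
# Hedged sketch: per-file audio features for speech emotion recognition.
import librosa
import numpy as np

y, sr = librosa.load("ravdess_sample.wav", sr=None)   # placeholder path

mfcc   = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=40)   # cepstral features
chroma = librosa.feature.chroma_stft(y=y, sr=sr)       # pitch-class energies
mel    = librosa.feature.melspectrogram(y=y, sr=sr)    # frequency-spectrum features

# Average over time frames to get one fixed-length vector per recording.
features = np.concatenate([mfcc.mean(axis=1),
                           chroma.mean(axis=1),
                           mel.mean(axis=1)])
print(features.shape)
```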


2017 ◽  
Vol 71 (1) ◽  
pp. 169-188 ◽  
Author(s):  
E. Shafiee ◽  
M. R. Mosavi ◽  
M. Moazedi

The importance of the Global Positioning System (GPS) and related electronic systems continues to increase in a range of environmental, engineering and navigation applications. However, civilian GPS signals are vulnerable to Radio Frequency (RF) interference. Spoofing is an intentional intervention that aims to force a GPS receiver to acquire and track invalid navigation data. Analysis of spoofed and authentic signal patterns reveals differences in the phase, energy and imaginary components of the signal. In this paper, early-late phase, delta, and signal level are extracted as the three main features from the correlation output of the tracking loop. Using these features, spoofing detection can be performed with conventional machine learning algorithms such as K-Nearest Neighbour (KNN) and naive Bayes classifiers. A Neural Network (NN) is a modern computational method for capturing the required knowledge and predicting output values in complicated systems. This paper presents a new approach for GPS spoofing detection based on a multi-layer NN whose inputs are indices of these features. Simulation results on a software GPS receiver show that the NN achieved adequate detection accuracy with a short detection time.
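A minimal sketch of a multi-layer NN detector over the three correlator features named above (early-late phase, delta, signal level). The layer sizes and the synthetic stand-in data are assumptions for illustration only:

```python
# Hedged sketch: MLP classifier separating authentic from spoofed signals.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Placeholder feature vectors: [early-late phase, delta, signal level].
X_authentic = rng.normal(0.0, 1.0, size=(200, 3))
X_spoofed   = rng.normal(1.5, 1.0, size=(200, 3))
X = np.vstack([X_authentic, X_spoofed])
y = np.array([0] * 200 + [1] * 200)    # 0 = authentic, 1 = spoofed

clf = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=1000).fit(X, y)
print(clf.score(X, y))                  # training accuracy of the detector
```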


2021 ◽  
Vol 118 (40) ◽  
pp. e2026053118
Author(s):  
Miles Cranmer ◽  
Daniel Tamayo ◽  
Hanno Rein ◽  
Peter Battaglia ◽  
Samuel Hadden ◽  
...  

We introduce a Bayesian neural network model that can accurately predict not only if, but also when a compact planetary system with three or more planets will go unstable. Our model, trained directly from short N-body time series of raw orbital elements, is more than two orders of magnitude more accurate at predicting instability times than analytical estimators, while also reducing the bias of existing machine learning algorithms by nearly a factor of three. Despite being trained on compact resonant and near-resonant three-planet configurations, the model demonstrates robust generalization to both nonresonant and higher multiplicity configurations, in the latter case outperforming models fit to that specific set of integrations. The model computes instability estimates up to 10⁵ times faster than a numerical integrator, and unlike previous efforts provides confidence intervals on its predictions. Our inference model is publicly available in the SPOCK (https://github.com/dtamayo/spock) package, with training code open sourced (https://github.com/MilesCranmer/bnn_chaos_model).
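A usage sketch following the pattern documented in the SPOCK repository; the class and method names and the toy three-planet system are assumptions that may differ across package versions, so treat this as illustrative rather than a verified API reference:

```python
# Hedged sketch: querying SPOCK's deep model for an instability-time estimate.
import rebound
from spock import DeepRegressor   # name assumed from the repository README

sim = rebound.Simulation()
sim.add(m=1.0)                    # central star
sim.add(m=1e-5, P=1.0)            # toy compact three-planet configuration
sim.add(m=1e-5, P=1.3, l=0.5)
sim.add(m=1e-5, P=1.7, l=1.0)

model = DeepRegressor()
# Median instability time with a confidence interval, per the paper's claim
# that the model provides uncertainty on its predictions.
median, lower, upper = model.predict_instability_time(sim, samples=1000)
print(median, lower, upper)
```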


2020 ◽  
Author(s):  
Florian Dupuy ◽  
Olivier Mestre ◽  
Léo Pfitzner

Cloud cover is crucial information for many applications such as planning land observation missions from space. However, cloud cover remains a challenging variable to forecast, and Numerical Weather Prediction (NWP) models suffer from significant biases, hence justifying the use of statistical post-processing techniques. In our application, the ground truth is a gridded cloud cover product derived from satellite observations over Europe, and the predictors are spatial fields of various variables produced by ARPEGE (Météo-France's global NWP model) at the corresponding lead time.

In this study, ARPEGE cloud cover is post-processed using a convolutional neural network (CNN). CNNs are the most popular machine learning tool for images; in our case, a CNN allows the integration of the spatial information contained in the NWP outputs. We show that a simple U-Net architecture produces significant improvements over Europe. Compared to the raw ARPEGE forecasts, MAE drops from 25.1 % to 17.8 % and RMSE decreases from 37.0 % to 31.6 %. Considering the specific needs of earth observation, special interest was placed on forecasts with low cloud cover conditions (< 10 %). For this nebulosity class, the hit rate jumps from 40.6 to 70.7 (the order of magnitude of what can be achieved using classical machine learning algorithms such as random forests) while the false alarm rate decreases from 38.2 to 29.9. This is an excellent result, since improving hit rates by means of random forests usually also results in a slight increase in false alarms.
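A minimal U-Net sketch of the kind used for this post-processing: an encoder-decoder with skip connections mapping NWP predictor fields to a cloud-cover field. The grid size, predictor count, and depth are illustrative, not the study's configuration:

```python
# Hedged sketch: compact U-Net mapping NWP fields to cloud-cover fraction.
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

inputs = layers.Input(shape=(128, 128, 8))   # 8 hypothetical ARPEGE predictors
c1 = conv_block(inputs, 32)
p1 = layers.MaxPooling2D()(c1)
c2 = conv_block(p1, 64)
p2 = layers.MaxPooling2D()(c2)
b  = conv_block(p2, 128)                      # bottleneck
u2 = layers.Concatenate()([layers.UpSampling2D()(b), c2])   # skip connection
c3 = conv_block(u2, 64)
u1 = layers.Concatenate()([layers.UpSampling2D()(c3), c1])  # skip connection
c4 = conv_block(u1, 32)
outputs = layers.Conv2D(1, 1, activation="sigmoid")(c4)     # cloud-cover fraction

model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="mae")
```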

