Use of Images of Leaves and Fruits of Apple Trees for Automatic Identification of Symptoms of Diseases and Nutritional Disorders

Author(s):  
Lucas Garcia Nachtigall ◽  
Ricardo Matsumura Araujo ◽  
Gilmar Ribeiro Nachtigall

Rapid diagnosis of symptoms caused by pest attacks, diseases, and nutritional or physiological disorders in apple orchards is essential to avoid greater losses. This paper aimed to evaluate the efficiency of Convolutional Neural Networks (CNNs) in automatically detecting and classifying symptoms of diseases, nutritional deficiencies, and herbicide damage in apple trees from images of their leaves and fruits. A novel data set of approximately 10,000 labeled images of apple leaves and fruits, divided into 12 classes, was developed and classified with machine learning algorithms, with emphasis on deep learning models. The results showed that trained CNNs can outperform both experts and other machine learning algorithms in classifying symptoms from leaf images, reaching an accuracy of 97.3%, and obtain 91.1% accuracy on fruit images. The use of Convolutional Neural Networks may thus enable fast, precise, and practical diagnosis of symptoms in apple trees.
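The paper's architecture is not given in the abstract, but the 12-class image classifier it describes can be sketched in PyTorch; the class count comes from the abstract, while the layer sizes, input resolution, and the `SymptomCNN` name are illustrative assumptions, not the authors' model:

```python
import torch
import torch.nn as nn

class SymptomCNN(nn.Module):
    """Minimal CNN for 12-class symptom classification (layer sizes assumed)."""
    def __init__(self, num_classes: int = 12):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = SymptomCNN()
logits = model(torch.randn(4, 3, 64, 64))  # batch of 4 RGB leaf images
```

A real pipeline would train this with cross-entropy loss on the labeled leaf and fruit images; here the forward pass only demonstrates the input/output shapes.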


2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Patrick Beyersdorffer ◽  
Wolfgang Kunert ◽  
Kai Jansen ◽  
Johanna Miller ◽  
Peter Wilhelm ◽  
...  

Abstract Uncontrolled movements of laparoscopic instruments can lead to inadvertent injury of adjacent structures. The risk becomes evident when the dissecting instrument is located outside the field of view of the laparoscopic camera. Technical solutions to ensure patient safety are therefore desirable. The present work evaluated the feasibility of an automated binary classification of laparoscopic image data using Convolutional Neural Networks (CNNs) to determine whether the dissecting instrument is located within the laparoscopic image section. A unique image data set was generated from six laparoscopic cholecystectomies in a surgical training environment to configure and train the CNN. By using a temporary version of the neural network, the annotation of the training image files could be automated and accelerated. A combination of oversampling and selective data augmentation was used to enlarge the fully labeled image data set and prevent loss of accuracy due to imbalanced class volumes. Subsequently, the same approach was applied to the comprehensive, fully annotated Cholec80 database. The described process led to the generation of extensive and balanced training image data sets. The performance of the CNN-based binary classifiers was evaluated on separate test records from both databases. On our recorded data, an accuracy of 0.88 was achieved with regard to the safety-relevant classification. The subsequent evaluation on the Cholec80 data set yielded an accuracy of 0.84. The presented results demonstrate the feasibility of a binary classification of laparoscopic image data for the detection of adverse events in a surgical training environment using a specifically configured CNN architecture.
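The class-balancing step described above (oversampling of the minority class combined with selective augmentation) can be sketched as follows; the horizontal flip stands in for whichever augmentations the authors actually selected, and the images and labels are random stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)

def balance_by_oversampling(images, labels):
    """Oversample minority classes up to the majority-class count; each
    duplicated sample is horizontally flipped as a simple augmentation."""
    labels = np.asarray(labels)
    classes, counts = np.unique(labels, return_counts=True)
    target = counts.max()
    out_imgs, out_labels = list(images), list(labels)
    for cls, count in zip(classes, counts):
        idx = np.flatnonzero(labels == cls)
        extra = rng.choice(idx, size=target - count, replace=True)
        for i in extra:
            out_imgs.append(np.fliplr(images[i]))  # augmented duplicate
            out_labels.append(cls)
    return out_imgs, np.array(out_labels)

# toy imbalanced set: 6 "instrument in view" frames vs 2 "out of view" frames
imgs = [rng.random((8, 8)) for _ in range(8)]
labels = [0, 0, 0, 0, 0, 0, 1, 1]
bal_imgs, bal_labels = balance_by_oversampling(imgs, labels)
```

After balancing, both classes contribute equally many training frames, which is what prevents the accuracy loss from imbalanced class volumes noted in the abstract.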


2021 ◽  
Vol 12 ◽  
pp. 878-901
Author(s):  
Ido Azuri ◽  
Irit Rosenhek-Goldian ◽  
Neta Regev-Rudzki ◽  
Georg Fantner ◽  
Sidney R Cohen

Progress in computing capabilities has enhanced science in many ways. In recent years, various branches of machine learning have been the key facilitators in forging new paths, ranging from categorizing big data to instrumental control, and from materials design to image analysis. Deep learning has the ability to identify abstract characteristics embedded within a data set, subsequently using that association to categorize, identify, and isolate subsets of the data. Scanning probe microscopy measures multimodal surface properties, combining morphology with electronic, mechanical, and other characteristics. In this review, we focus on a subset of deep learning algorithms, namely convolutional neural networks, and how they are transforming the acquisition and analysis of scanning probe data.
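The core operation of a convolutional layer, applied here to a toy height map such as a scanning probe might produce, can be written out directly; the fixed Sobel kernel is only an illustrative example of the filters a CNN would learn from data:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Plain 2D cross-correlation ('valid' mode): the elementary operation
    a convolutional layer applies to, e.g., an AFM height map."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# toy "height map" with a sharp vertical step edge
height_map = np.zeros((6, 6))
height_map[:, 3:] = 1.0
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
response = conv2d_valid(height_map, sobel_x)  # strong response at the edge
```

The filter responds only where the surface height changes, which is the sense in which convolutional layers extract localized characteristics from morphology data.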


2017 ◽  
Vol 7 (1.1) ◽  
pp. 384 ◽  
Author(s):  
M V.D. Prasad ◽  
B JwalaLakshmamma ◽  
A Hari Chandana ◽  
K Komali ◽  
M V.N. Manoja ◽  
...  

Machine learning is penetrating most of the classification and recognition tasks performed by a computer. This paper proposes the classification of flower images using a powerful artificial intelligence tool, convolutional neural networks (CNNs). A flower image database with 9500 images is considered for the experimentation. The entire database is subdivided into four categories. The CNN training is initiated in five batches and the testing is carried out on all four data sets. Different CNN architectures were designed and tested with our flower image data to obtain better recognition accuracy. Various pooling schemes were implemented to improve the classification rates. We achieved a 97.78% recognition rate, compared with other classifier models reported on the same data set.
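The pooling schemes mentioned above reduce each window of a feature map to a single value between convolutional layers; a minimal NumPy sketch of 2x2 max and average pooling (the abstract does not specify which schemes were compared, so these two common ones are assumptions):

```python
import numpy as np

def pool2d(feature_map, size=2, mode="max"):
    """2x2 pooling: 'max' keeps the strongest activation per window,
    'avg' keeps the window mean."""
    h, w = feature_map.shape
    view = feature_map[:h - h % size, :w - w % size].reshape(
        h // size, size, w // size, size)
    reduce = np.max if mode == "max" else np.mean
    return reduce(view, axis=(1, 3))

fmap = np.array([[1., 3., 0., 2.],
                 [4., 2., 1., 1.],
                 [0., 0., 5., 6.],
                 [0., 0., 7., 8.]])
max_pooled = pool2d(fmap, mode="max")  # [[4, 2], [0, 8]]
avg_pooled = pool2d(fmap, mode="avg")  # [[2.5, 1.0], [0.0, 6.5]]
```

Max pooling preserves the sharpest responses (useful for distinctive petal edges), while average pooling smooths them; the choice measurably affects classification rates.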


2020 ◽  
Vol 10 (15) ◽  
pp. 5186
Author(s):  
Paweł Tarasiuk ◽  
Arkadiusz Tomczyk ◽  
Bartłomiej Stasiak

Image analysis has many practical applications and proper representation of image content is its crucial element. In this work, a novel type of representation is proposed where an image is reduced to a set of highly sparse matrices. Equivalently, it can be viewed as a set of local features of different types, as precise coordinates of detected keypoints are given. Additionally, every keypoint has a value expressing feature intensity at a given location. These features are extracted from a dedicated convolutional neural network autoencoder. This kind of representation has many advantages. First of all, local features are not manually designed but are automatically trained for a given class of images. Second, as they are trained in a network that restores its input on the output, they may be expected to minimize information loss. Consequently, they can be used to solve similar tasks replacing original images; such an ability was illustrated with an image classification task. Third, the generated features, although automatically synthesized, are relatively easy to interpret. Taking the decoder part of our network, one can easily generate a visual building block connected with a specific feature. As the proposed method is entirely new, a detailed analysis of its properties for a relatively simple data set was conducted and is described in this work. Moreover, to present the quality of the trained features, they are compared with results of convolutional neural networks with a similar working principle (sparse coding).
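The reduction of a dense feature map to a highly sparse set of keypoints, each with precise coordinates and an intensity value, can be illustrated with a simple top-k selection; this is only a sketch of the representation's shape, not the autoencoder that produces it:

```python
import numpy as np

def sparsify_top_k(feature_map, k=3):
    """Reduce a dense feature map to k keypoints, each a (row, col, value)
    triple: the kind of highly sparse matrix described in the abstract."""
    flat = feature_map.ravel()
    top = np.argsort(flat)[-k:][::-1]          # indices of k strongest responses
    rows, cols = np.unravel_index(top, feature_map.shape)
    return [(int(r), int(c), float(feature_map[r, c]))
            for r, c in zip(rows, cols)]

fmap = np.array([[0.1, 0.0, 0.9],
                 [0.0, 0.7, 0.0],
                 [0.4, 0.0, 0.2]])
keypoints = sparsify_top_k(fmap, k=2)  # [(0, 2, 0.9), (1, 1, 0.7)]
```

Storing only these triples instead of the dense map is what makes the representation sparse while keeping exact keypoint locations and intensities.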


Author(s):  
R. Niessner ◽  
H. Schilling ◽  
B. Jutzi

In recent years, there has been a significant improvement in the detection, identification, and classification of objects and images using Convolutional Neural Networks. To study the potential of Convolutional Neural Networks, three approaches to training CNN-based classifiers are investigated in this paper. These approaches allow Convolutional Neural Networks to be trained on data sets containing only a few hundred training samples while still achieving successful classification. Two of these approaches are based on the concept of transfer learning. In the first approach, features created by a pretrained Convolutional Neural Network are used for classification with a support vector machine. In the second approach, a pretrained Convolutional Neural Network is fine-tuned on a different data set. The third approach comprises the design and training of flat Convolutional Neural Networks from scratch. The evaluation of the proposed approaches is based on a data set provided by the IEEE Geoscience and Remote Sensing Society (GRSS), which contains RGB and LiDAR data of an urban area. It is shown that these Convolutional Neural Networks lead to classification results with high accuracy on both RGB and LiDAR data. Features derived from RGB data and transferred to LiDAR data via transfer learning lead to better classification results than RGB data alone. A neural network containing fewer layers than common neural networks yields the best classification results. Furthermore, it can be shown that LiDAR images provide a better data basis for the classification of vehicles than RGB images.
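The first transfer learning approach (features from a frozen, pretrained CNN classified by a support vector machine) can be sketched with scikit-learn; the synthetic Gaussian vectors below are a stand-in for exported CNN features, since the GRSS data and the pretrained network are not reproduced here:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stand-in for feature vectors exported by a frozen, pretrained CNN:
# two classes of 64-dimensional features with shifted means.
n_per_class = 40
feats = np.vstack([rng.normal(0.0, 1.0, (n_per_class, 64)),
                   rng.normal(1.5, 1.0, (n_per_class, 64))])
labels = np.array([0] * n_per_class + [1] * n_per_class)

# The SVM, not the CNN, is what gets trained on the few hundred samples.
clf = SVC(kernel="linear").fit(feats, labels)
train_acc = clf.score(feats, labels)
```

Because only the SVM is fitted, this approach needs far fewer labeled samples than training a deep network end to end, which is the point of the paper's small-data setting.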


Landslides pose a serious threat to human life and property, and increasing human settlement in mountainous regions has raised safety concerns. Landslides have caused economic losses of between 1% and 2% of GDP in many developing countries. In this study, we discuss a deep learning approach to detect landslides. Convolutional Neural Networks are used for feature extraction in our proposed model. As no exact and precise data set was available for feature extraction, a new data set was built for testing the model. We tested and compared our proposed model against other machine learning algorithms such as Logistic Regression, Random Forest, AdaBoost, K-Nearest Neighbors, and Support Vector Machine. Our proposed deep learning model produces a classification accuracy of 96.90%, outperforming the classical machine learning algorithms.
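The baseline comparison described above can be reproduced in outline with scikit-learn; the synthetic features below stand in for the study's landslide data set (which is not public), so the scores are illustrative only:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Synthetic stand-in for extracted features: class 1 = landslide, 0 = none.
X = np.vstack([rng.normal(0, 1, (100, 16)), rng.normal(2, 1, (100, 16))])
y = np.array([0] * 100 + [1] * 100)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Fit each classical baseline and score it on the held-out split.
scores = {name: clf.fit(X_tr, y_tr).score(X_te, y_te)
          for name, clf in [("logreg", LogisticRegression()),
                            ("rf", RandomForestClassifier(random_state=0)),
                            ("knn", KNeighborsClassifier())]}
```

In the study, the same held-out evaluation is what supports the claim that the CNN-based model's 96.90% accuracy exceeds these baselines.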


Author(s):  
Supun Nakandala ◽  
Marta M. Jankowska ◽  
Fatima Tuz-Zahra ◽  
John Bellettiere ◽  
Jordan A. Carlson ◽  
...  

Background: Machine learning has been used for classification of physical behavior bouts from hip-worn accelerometers; however, this research has been limited due to the challenges of directly observing and coding human behavior “in the wild.” Deep learning algorithms, such as convolutional neural networks (CNNs), may offer better representation of data than other machine learning algorithms without the need for engineered features and may be better suited to dealing with free-living data. The purpose of this study was to develop a modeling pipeline for evaluation of a CNN model on a free-living data set and compare CNN inputs and results with the commonly used machine learning random forest and logistic regression algorithms. Method: Twenty-eight free-living women wore an ActiGraph GT3X+ accelerometer on their right hip for 7 days. A concurrently worn thigh-mounted activPAL device captured ground truth activity labels. The authors evaluated logistic regression, random forest, and CNN models for classifying sitting, standing, and stepping bouts. The authors also assessed the benefit of performing feature engineering for this task. Results: The CNN classifier performed best (average balanced accuracy for bout classification of sitting, standing, and stepping was 84%) compared with the other methods (56% for logistic regression and 76% for random forest), even without performing any feature engineering. Conclusion: Using the recent advancements in deep neural networks, the authors showed that a CNN model can outperform other methods even without feature engineering. This has important implications for both the model’s ability to deal with the complexity of free-living data and its potential transferability to new populations.
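Balanced accuracy, the metric reported in the Results above, is the mean of per-class recalls, so a rare class counts as much as a common one; a small self-contained sketch (the class names and toy labels are illustrative):

```python
def balanced_accuracy(y_true, y_pred, classes=("sit", "stand", "step")):
    """Mean per-class recall: each class contributes equally regardless of
    how common it is in the free-living data."""
    recalls = []
    for cls in classes:
        idx = [i for i, t in enumerate(y_true) if t == cls]
        if not idx:
            continue  # skip classes absent from the ground truth
        hits = sum(1 for i in idx if y_pred[i] == cls)
        recalls.append(hits / len(idx))
    return sum(recalls) / len(recalls)

# Sitting dominates and is always predicted correctly, yet balanced
# accuracy still penalizes the missed standing bout.
y_true = ["sit"] * 8 + ["stand", "stand", "step", "step"]
y_pred = ["sit"] * 8 + ["sit", "stand", "step", "step"]
score = balanced_accuracy(y_true, y_pred)  # (1.0 + 0.5 + 1.0) / 3
```

Plain accuracy on the same toy data would be 11/12, which is why balanced accuracy is preferred when sitting bouts dominate the recording, as they do in hip-worn accelerometer data.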


2020 ◽  
Vol 496 (4) ◽  
pp. 4141-4153
Author(s):  
Matej Kosiba ◽  
Maggie Lieu ◽  
Bruno Altieri ◽  
Nicolas Clerc ◽  
Lorenzo Faccioli ◽  
...  

ABSTRACT Galaxy clusters appear as extended sources in XMM–Newton images, but not all extended sources are clusters. So, their proper classification requires visual inspection with optical images, which is a slow process with biases that are almost impossible to model. We tackle this problem with a novel approach, using convolutional neural networks (CNNs), a state-of-the-art image classification tool, for automatic classification of galaxy cluster candidates. We train the networks on combined XMM–Newton X-ray observations with their optical counterparts from the all-sky Digitized Sky Survey. Our data set originates from the XMM CLuster Archive Super Survey (X-CLASS) sample of galaxy cluster candidates, selected by a specially developed pipeline, the XAmin, tailored for extended source detection and characterization. Our data set contains 1707 galaxy cluster candidates classified by experts. Additionally, we created an official Zooniverse citizen science project, The Hunt for Galaxy Clusters, to probe whether citizen volunteers could help in the challenging task of visual galaxy cluster confirmation. The project contained 1600 galaxy cluster candidates in total, of which 404 overlap with the experts' sample. The networks were trained on the expert and Zooniverse data separately. The CNN test sample contains 85 spectroscopically confirmed clusters and 85 non-clusters that appear in both data sets. Our custom network achieved the best performance in the binary classification of clusters and non-clusters, reaching an accuracy of 90 per cent, averaged over 10 runs. The results of using CNNs on combined X-ray and optical data for galaxy cluster candidate classification are encouraging, and there is a lot of potential for future usage and improvements.
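The binary cluster/non-cluster classifier operates on combined X-ray and optical imagery, which is naturally modeled as a two-channel input image; the architecture below is an illustrative PyTorch assumption, not the authors' custom network:

```python
import torch
import torch.nn as nn

class ClusterNet(nn.Module):
    """Binary cluster/non-cluster classifier on 2-channel input
    (X-ray + optical); layer sizes are illustrative assumptions."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1),
        )

    def forward(self, x):
        return self.net(x)  # one logit per candidate

model = ClusterNet()
batch = torch.randn(5, 2, 32, 32)  # 5 candidates: X-ray + optical channel each
logits = model(batch)
```

Stacking the two surveys as channels lets the first convolutional layer learn joint X-ray/optical filters, which is the advantage of combined data over classifying either band alone.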

