Implementation of CNN for Plant Leaf Classification

2021, Vol 2 (2), pp. 1
Author(s):  
Mohammad Diqi ◽  
Sri Hasta Mulyani

Many deep learning-based approaches for plant leaf stress identification have been proposed in the literature, but only a few partial efforts have summarized the various contributions. This study aims to build a classification model that enables people or traditional medicine experts to detect medicinal plants using a scanning camera. The Android-based application is implemented in the Java programming language, while labelling and the deep learning model use the Python programming language. The goal is a deep learning image classification model for plant leaves that helps people identify types of medicinal plants on Android. The application can help the public recognize five types of medicinal plants, including spinach Duri, Javanese ginseng, Dadap Serep, and Moringa. In this study, the model achieved an accuracy of 0.86, a precision of 0.22, an F1-score of 0.23, and a recall of 0.2375.
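The gap between the reported accuracy (0.86) and the macro-averaged precision/recall (~0.22) is typical when per-class averaging is used on imbalanced predictions. A minimal sketch of how such metrics are computed with scikit-learn; the labels and predictions below are invented placeholders, not the paper's data:

```python
# Illustrative only: computing accuracy, macro precision/recall/F1 for a
# multi-class leaf classifier. y_true/y_pred are made-up toy values.
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

y_true = [0, 1, 2, 3, 0, 1, 2, 3]
y_pred = [0, 1, 2, 0, 0, 1, 1, 3]

acc = accuracy_score(y_true, y_pred)
# Macro averaging weights every class equally, which is why precision and
# recall can sit far below accuracy when some classes are rarely predicted.
prec = precision_score(y_true, y_pred, average="macro", zero_division=0)
rec = recall_score(y_true, y_pred, average="macro", zero_division=0)
f1 = f1_score(y_true, y_pred, average="macro", zero_division=0)
print(acc, prec, rec, f1)
```

With `average="micro"` instead, precision and recall would collapse to the accuracy value, which is one reason papers should state the averaging mode.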

2021, Vol 2021, pp. 1-11
Author(s):  
Dapeng Lang ◽  
Deyun Chen ◽  
Ran Shi ◽  
Yongjun He

Deep learning has been widely used in image classification and image recognition and has achieved positive practical results. However, in recent years, a number of studies have found that the accuracy of classification-based deep learning models drops sharply when only subtle changes are made to the original examples, thus realizing attacks on the deep learning model. The main methods are as follows: adjusting the pixels of attack examples so that the changes are invisible to human eyes, inducing the deep learning model to make a wrong classification; and adding an adversarial patch to the detection target to guide and deceive the classification model into misclassification. These methods have strong randomness and are of very limited use in practical applications. Different from previous perturbations of traffic signs, our paper proposes a method that can successfully hide and misclassify vehicles in complex contexts. This method takes complex real scenarios into account and can perturb pictures taken by a camera or mobile phone so that a detector based on a deep learning model either cannot detect the vehicle or misclassifies it. To improve robustness, the position and size of the adversarial patch are adjusted according to different detection models by introducing an attachment mechanism. Tests on different detectors show that a patch generated on a single target detection algorithm can also attack other detectors and transfers well. In the experimental part of this paper, the proposed algorithm significantly lowers the accuracy of the detector. Under real-world conditions such as distance, light, angle, and resolution, the false classification of the target is realized by reducing the confidence level of the target and its background, which greatly perturbs the detection results of the target detector. On the COCO 2017 dataset, the success rate of this algorithm reaches 88.7%.


2021
Author(s):  
Gaurav Chachra ◽  
Qingkai Kong ◽  
Jim Huang ◽  
Srujay Korlakunta ◽  
Jennifer Grannen ◽  
...  

Abstract After significant earthquakes, images are posted on social media platforms by individuals and media agencies owing to the mass usage of smartphones. These images can provide information about shaking damage in the earthquake region to both the public and the research community, and potentially guide rescue work. This paper presents an automated way to extract images of damaged buildings after earthquakes from social media platforms such as Twitter, and thus to identify the particular user posts containing such images. Using transfer learning and ~6500 manually labelled images, we trained a deep learning model to recognize images with damaged buildings in the scene. The trained model achieved good performance when tested on newly acquired images of earthquakes at different locations, and ran in near real time on the Twitter feed after the 2020 M7.0 earthquake in Turkey. Furthermore, to better understand how the model makes decisions, we implemented the Grad-CAM method to visualize the important locations in the images that drive the decision.


Author(s):  
Malusi Sibiya ◽  
Mbuyu Sumbwanyambe

Machine learning systems use different algorithms to detect diseases affecting plant leaves. Nevertheless, selecting a suitable machine learning framework differs from study to study, depending on the features and complexity of the software packages. This paper introduces a taxonomic inspection of the literature on deep learning frameworks for the detection of plant leaf diseases. The objective of this study is to identify the dominant software frameworks in the literature for modelling machine learning systems that detect plant leaf diseases.


2020, Vol 3 (2)
Author(s):  
Muhammad Darwis ◽  
Gatot Tri Pranoto ◽  
Yusuf Eka Wicaksana ◽  
Yaddarabullah Yaddarabullah

The social media timeline, especially on Twitter, remains interesting to follow. The various tweets posted by the public are informative and varied, and this information can be put to further use by exploiting trending topics of conversation at a given time. In this paper, the authors cluster tweet data with the TF-IDF algorithm and the K-Means method using the Python programming language. The clustering results reveal predicted or likely topics of conversation that are being widely discussed by netizens. Finally, the data can be used to make decisions that draw on community sentiment towards an event through social media such as Twitter.
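The TF-IDF plus K-Means pipeline described above can be sketched in a few lines of Python with scikit-learn. The tweets below are invented examples, not data from the study:

```python
# Minimal sketch of the abstract's pipeline: vectorize tweets with TF-IDF,
# then group them into topic clusters with K-Means. Hypothetical tweets.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

tweets = [
    "traffic jam on the highway again",
    "heavy traffic downtown this morning",
    "new cafe opened with great coffee",
    "best coffee shop in town right now",
]

# TF-IDF turns each tweet into a weighted term vector.
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(tweets)

# K-Means assigns each tweet vector to one of k topic clusters.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = kmeans.fit_predict(X)
print(labels)  # tweets about the same topic should share a cluster id
```

In practice the cluster count k would be chosen by inspecting trends (e.g., silhouette scores), and the top TF-IDF terms per cluster centroid serve as the "topic" label.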


2018
Author(s):  
Yu Li ◽  
Zhongxiao Li ◽  
Lizhong Ding ◽  
Yuhui Hu ◽  
Wei Chen ◽  
...  

ABSTRACT Motivation: In most biological data sets, the amount of data is regularly growing and the number of classes is continuously increasing. To deal with the new data from the new classes, one approach is to train a classification model, e.g., a deep learning model, from scratch based on both old and new data. This approach is highly computationally costly and the extracted features are likely very different from the ones extracted by the model trained on the old data alone, which leads to poor model robustness. Another approach is to fine-tune the trained model from the old data on the new data. However, this approach often does not have the ability to learn new knowledge without forgetting the previously learned knowledge, which is known as the catastrophic forgetting problem. To our knowledge, this problem has not been studied in the field of bioinformatics despite its existence in many bioinformatic problems. Results: Here we propose a novel method, SupportNet, to solve the catastrophic forgetting problem efficiently and effectively. SupportNet combines the strength of deep learning and support vector machine (SVM), where SVM is used to identify the support data from the old data, which are fed to the deep learning model together with the new data for further training so that the model can review the essential information of the old data when learning the new information. Two powerful consolidation regularizers are applied to ensure the robustness of the learned model. Comprehensive experiments on various tasks, including enzyme function prediction, subcellular structure classification and breast tumor classification, show that SupportNet drastically outperforms the state-of-the-art incremental learning methods and reaches similar performance as the deep learning model trained from scratch on both old and new data. Availability: Our program is accessible at: https://github.com/lykaust15/SupportNet.


2021, Vol 61, pp. 101182
Author(s):  
Ümit Atila ◽  
Murat Uçar ◽  
Kemal Akyol ◽  
Emine Uçar

2021, Vol 11 (1)
Author(s):  
Jeong-Hun Yoo ◽  
Han-Gyeol Yeom ◽  
WooSang Shin ◽  
Jong Pil Yun ◽  
Jong Hyun Lee ◽  
...  

Abstract This paper proposes a convolutional neural network (CNN)-based deep learning model for predicting the difficulty of extracting a mandibular third molar using a panoramic radiographic image. The applied dataset includes a total of 1053 mandibular third molars from 600 preoperative panoramic radiographic images. The extraction difficulty was evaluated based on the consensus of three human observers using the Pederson difficulty score (PDS). The classification model used a ResNet-34 pretrained on the ImageNet dataset. The correlation between the PDS values determined by the proposed model and those measured by the experts was calculated. The prediction accuracies for C1 (depth), C2 (ramal relationship), and C3 (angulation) were 78.91%, 82.03%, and 90.23%, respectively. The results confirm that the proposed CNN-based deep learning model could be used to predict the difficulty of extracting a mandibular third molar using a panoramic radiographic image.


2021, Vol 3 (3), pp. 478-493
Author(s):  
Ahmed Abdelmoamen Ahmed ◽  
Gopireddy Harshavardhan Reddy

Plant diseases are one of the grand challenges facing the agriculture sector worldwide. In the United States, crop diseases cause losses of one-third of crop production annually. Despite its importance, crop disease diagnosis is challenging for resource-limited farmers when performed through optical observation of symptoms on plant leaves. Therefore, there is an urgent need for markedly improved detection, monitoring, and prediction of crop diseases to reduce agricultural losses. Computer vision empowered with Machine Learning (ML) holds tremendous promise for improving crop monitoring at scale in this context. This paper presents an ML-powered mobile-based system to automate the plant leaf disease diagnosis process. The developed system uses Convolutional Neural Networks (CNNs) as the underlying deep learning engine for classifying 38 disease categories. We collected an imagery dataset containing 96,206 images of leaves from healthy and infected plants for training, validating, and testing the CNN model. The user interface is developed as an Android mobile app, allowing farmers to capture a photo of infected plant leaves; it then displays the disease category along with a confidence percentage. This system is expected to give farmers a better opportunity to keep their crops healthy and to eliminate the use of wrong fertilizers that could stress the plants. Finally, we evaluated our system using performance metrics such as classification accuracy and processing time. Our model achieves an overall classification accuracy of 94% in recognizing the 38 most common disease classes across 14 crop species.
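The app's final step, turning the CNN's raw output scores into a disease label plus a confidence percentage, is conventionally done with a softmax over the logits. A hypothetical sketch (the scores and class names below are invented, not the system's actual outputs):

```python
# Hypothetical sketch: convert CNN logits into (label, confidence %)
# the way a mobile classifier app typically does, via softmax + argmax.
import numpy as np

def classify(logits, class_names):
    # Subtracting the max keeps the exponentials numerically stable.
    probs = np.exp(logits - np.max(logits))
    probs /= probs.sum()
    idx = int(np.argmax(probs))
    return class_names[idx], round(100 * float(probs[idx]), 1)

# Made-up scores for three of the 38 categories.
label, confidence = classify(
    np.array([2.0, 0.5, -1.0]),
    ["Tomato late blight", "Healthy", "Apple scab"],
)
print(label, confidence)
```

Reporting the confidence alongside the label lets the app flag low-confidence predictions for a second opinion rather than presenting every output as certain.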


2021, Vol 11 (1)
Author(s):  
C. Carlomagno ◽  
D. Bertazioli ◽  
A. Gualerzi ◽  
S. Picciolini ◽  
P. I. Banfi ◽  
...  

Abstract The COVID-19 pandemic continues to spread, becoming a worldwide emergency. Early and fast identification of subjects with a current or past infection must be achieved to slow the epidemiological spread. Here we report a Raman-based approach for the analysis of saliva, able to significantly discriminate the signal of patients with a current COVID-19 infection from that of healthy subjects and/or subjects with a past infection. Our results demonstrate differences in the saliva biochemical composition of the three experimental groups, with modifications grouped in specific attributable spectral regions. The Raman-based classification model discriminated the signal collected from COVID-19 patients with accuracy, precision, sensitivity, and specificity of more than 95%. To translate this discrimination from the signal level to the patient level, we developed a deep learning model, obtaining accuracy in the range of 89-92%. These findings have implications for the creation of a potential Raman-based diagnostic tool that uses saliva as a minimally invasive and highly informative biofluid, and demonstrate the efficacy of the classification model.
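The four metrics reported above are all derived from a binary confusion matrix. An illustrative computation with invented counts (not the study's data):

```python
# Illustrative only: accuracy, precision, sensitivity, specificity from a
# binary confusion matrix. The labels below are made-up toy counts.
from sklearn.metrics import confusion_matrix

y_true = [1] * 40 + [0] * 40                    # 1 = current infection
y_pred = [1] * 39 + [0] * 1 + [0] * 38 + [1] * 2

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy = (tp + tn) / (tp + tn + fp + fn)
precision = tp / (tp + fp)
sensitivity = tp / (tp + fn)   # true-positive rate, a.k.a. recall
specificity = tn / (tn + fp)   # true-negative rate
print(accuracy, precision, sensitivity, specificity)
```

Reporting sensitivity and specificity separately matters in diagnostics: a screening tool can look accurate overall while still missing too many true infections.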


2020
Author(s):  
Ho Heon Kim ◽  
Jae Il An ◽  
Yu Rang Park

BACKGROUND Early detection of developmental disabilities in children is essential because early intervention, during the period of rapid growth and neuroplasticity, can improve a child's prognosis. Given the phenotypical nature of developmental disabilities, high variability may come from the assessment process. Because a growing body of evidence indicates a relationship between developmental disability and motor function, motor skill is considered a factor that can facilitate early diagnosis. However, capturing children's motor skill in the diagnosis of developmental disorders, which is usually done through informal questions or surveys of parents, faces problems such as a lack of specialists and time constraints. OBJECTIVE This study aimed to 1) assess whether drag-and-drop data can serve as a digital biomarker and 2) develop a classification model based on drag-and-drop data to identify children with developmental disabilities. METHODS We collected drag-and-drop data from children with normal and abnormal development in a mobile application (DoBrain) from May 1, 2018, to May 1, 2020. The study involved 223 children with normal development and 147 children with developmental disabilities. We used touch coordinates and extracted kinetic variables from these coordinates. A deep learning algorithm was developed to classify children by developmental status. For interpretability of the model results, we identified which coordinates contributed to the classification using Grad-CAM. RESULTS Of the 370 children in the study, 223 had normal development and 147 had developmental disabilities. In all games, the number of changes in the sign of the acceleration along the direction of progress, on both the x- and y-axes, differed significantly between the two groups (p<0.001 and effect size >0.5, respectively).
The deep learning convolutional neural network model showed that drag-and-drop data can help diagnose developmental disabilities, with a sensitivity of 0.71 and a specificity of 0.78. Grad-CAM, which can interpret the results of the deep learning model, was visualized for the game results of specific children. CONCLUSIONS The results of the deep learning model confirm that drag-and-drop data can serve as a new digital biomarker for the diagnosis of developmental disabilities.
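The discriminating kinetic feature, the number of sign changes in acceleration along one axis of a drag trace, can be computed directly from touch coordinates via second differences. A sketch with invented coordinate traces (the actual app and sampling details are the study's, not reproduced here):

```python
# Sketch of one kinetic variable from the abstract: sign flips in the
# acceleration of a drag-and-drop trace. Coordinate traces are invented.
import numpy as np

def acceleration_sign_changes(coords):
    """Count sign flips of the second difference of positions."""
    acc = np.diff(coords, n=2)        # discrete acceleration along one axis
    signs = np.sign(acc)
    signs = signs[signs != 0]         # ignore zero-acceleration samples
    return int(np.sum(signs[1:] != signs[:-1]))

smooth = [0, 1, 3, 6, 8, 9, 9.5, 10, 11, 13]   # steady drag: few flips
jittery = [0, 2, 1, 3, 2, 4, 3, 5, 4, 6]       # shaky drag: many flips
print(acceleration_sign_changes(smooth), acceleration_sign_changes(jittery))
```

A jittery trace flips acceleration sign far more often than a smooth one, which is the kind of group difference the study reports between the two cohorts.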

