Apple Leaf Disease Identification with a Small and Imbalanced Dataset Based on Lightweight Convolutional Networks

Sensors ◽  
2021 ◽  
Vol 22 (1) ◽  
pp. 173
Author(s):  
Lili Li ◽  
Shujuan Zhang ◽  
Bin Wang

The intelligent identification and classification of plant diseases is an important research objective in agriculture. In this study, to realize the rapid and accurate identification of apple leaf disease, a new lightweight convolutional neural network, RegNet, was proposed. Experiments were conducted on 2141 field images covering five apple leaf classes (rust, scab, ring rot, Panonychus ulmi, and healthy leaves). To assess the effectiveness of the RegNet model, a series of comparison experiments were conducted against state-of-the-art convolutional neural networks (CNNs) such as ShuffleNet, EfficientNet-B0, MobileNetV3, and the Vision Transformer. The results show that RegNet-Adam with a learning rate of 0.0001 obtained an average accuracy of 99.8% on the validation set and an overall accuracy of 99.23% on the test set, outperforming all other pre-trained models. In other words, the proposed transfer-learning-based method established in this research can realize the rapid and accurate identification of apple leaf disease.
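As a rough illustration of the transfer-learning setup described above, the sketch below fine-tunes an ImageNet-pretrained lightweight RegNet with the Adam optimizer at a learning rate of 0.0001. The specific RegNet variant (regnet_y_400mf), the five-class head, and the training-step details are assumptions for illustration, not the authors' released code.

```python
# Minimal sketch of transfer learning with a lightweight RegNet in PyTorch.
# Requires torchvision >= 0.13 for the weights enum used below.
import torch
import torch.nn as nn
from torchvision import models

def build_regnet(num_classes: int = 5) -> nn.Module:
    # Load an ImageNet-pretrained RegNet and replace the classification head.
    model = models.regnet_y_400mf(weights=models.RegNet_Y_400MF_Weights.IMAGENET1K_V1)
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

model = build_regnet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # "RegNet-Adam", lr = 0.0001
criterion = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    # One optimization step on a mini-batch of leaf images.
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```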

2019 ◽  
Vol 8 (4) ◽  
pp. 11485-11488

India is a developing country, and agriculture has always played a major role in bolstering the country’s economic growth. Due to factors such as industrialization, mechanization, and globalization, the green fields are facing complications. Identifying a plant disease incorrectly leads to a huge loss in both quantity and quality of the product, as well as in time and money. Hence, identifying the condition of the plant plays a major role in successful cultivation. Nowadays, image processing is employed as a focal technique for diagnosing various features of the crop, and it can be used to identify and classify plant diseases. Generally, the symptoms of a disease are observed on leaves, stems, flowers, etc. Here, the leaves of the affected plant are used for identification and classification of the disease. A leaf image is captured using a smartphone as the first step and then processed to determine the condition of the plant. Identification of plant disease follows steps such as loading the image of the plant leaf, histogram equalization for enhancing the contrast of the image, segmentation using the Lab color space model, extraction of features from the segmented image using the GLCM (Grey Level Co-occurrence Matrix), and finally classification of the leaf disease using an MCSVM (Multi-Class Support Vector Machine). This procedure obtained an accuracy of 83.6%, but it requires long training times for large datasets. To improve the accuracy of detection and classification, a Convolutional Neural Network (CNN) is used. The main advantage of a CNN is that it automatically detects the important features of the input without any human supervision. In the CNN approach, identification of disease follows steps such as loading the input image, convolution to produce feature maps, and finally max pooling to compute the image features in detail. The plant diseases are classified with an accuracy of 93.8%.
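A hedged sketch of the classical part of the pipeline described above (histogram equalization, GLCM texture features, and a multi-class SVM) is given below. The Lab-space segmentation step is omitted for brevity, and the GLCM settings and SVM kernel are illustrative assumptions rather than the paper's exact configuration.

```python
# Sketch: contrast enhancement, GLCM texture features, and a multi-class SVM.
import numpy as np
from skimage import color, exposure, img_as_ubyte
from skimage.feature import graycomatrix, graycoprops  # "greycomatrix" in skimage < 0.19
from sklearn.svm import SVC

def glcm_features(rgb_image: np.ndarray) -> np.ndarray:
    """Extract simple GLCM texture features from one leaf image."""
    gray = img_as_ubyte(exposure.equalize_hist(color.rgb2gray(rgb_image)))
    glcm = graycomatrix(gray, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "correlation", "energy", "homogeneity"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

def train_mcsvm(images: list[np.ndarray], y: np.ndarray) -> SVC:
    # SVC is one-vs-one by default, giving the multi-class SVM (MCSVM) behaviour.
    X = np.vstack([glcm_features(im) for im in images])
    clf = SVC(kernel="rbf", C=10.0, gamma="scale")
    return clf.fit(X, y)
```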


Blood ◽  
2009 ◽  
Vol 114 (24) ◽  
pp. 4957-4959 ◽  
Author(s):  
Julie A. Vrana ◽  
Jeffrey D. Gamez ◽  
Benjamin J. Madden ◽  
Jason D. Theis ◽  
H. Robert Bergen ◽  
...  

Abstract The clinical management of amyloidosis is based on the treatment of the underlying etiology, and accurate identification of the protein causing the amyloidosis is of paramount importance. Current methods used for typing of amyloidosis, such as immunohistochemistry, have low specificity and sensitivity. In this study, we report the development of a highly specific and sensitive novel test for the typing of amyloidosis in routine clinical biopsy specimens. Our approach combines specific sampling by laser microdissection (LMD) with the analytical power of tandem mass spectrometry (MS)–based proteomic analysis. We studied 50 cases of amyloidosis that were well characterized by gold-standard clinicopathologic criteria (training set) and an independent validation set comprising 41 cases of cardiac amyloidosis. By use of LMD/MS, we identified the amyloid type with 100% specificity and sensitivity in the training set and with 98% in the validation set. Use of the LMD/MS method will enhance our ability to type amyloidosis accurately in clinical biopsy specimens.


Agronomy ◽  
2021 ◽  
Vol 11 (6) ◽  
pp. 1202
Author(s):  
Baohua Yang ◽  
Zhiwei Gao ◽  
Yuan Gao ◽  
Yue Zhu

The detection and counting of wheat ears are very important for crop field management, yield estimation, and phenotypic analysis. Previous studies have shown that most methods for detecting wheat ears were based on shallow features such as color and texture extracted by machine learning methods, and these obtained good results. However, due to the lack of robustness of such features, it was difficult for these methods to handle the detection and counting of wheat ears in natural scenes. Other studies have shown that convolutional neural network (CNN) methods can be used for wheat ear detection and counting, but the adhesion and occlusion of wheat ears limit detection accuracy. Therefore, to improve the accuracy of wheat ear detection and counting in the field, an improved YOLOv4 (you only look once v4) with a CBAM (convolutional block attention module) comprising spatial and channel attention was proposed; it enhances the feature extraction capability of the network by adding receptive field modules. In addition, to improve the generalization ability of the model, not only local wheat data (WD) but also two public data sets (WEDD and GWHDD) were used to construct the training, validation, and test sets. The results showed that the model could effectively overcome noise in the field environment and realize accurate detection and counting of wheat ears with different density distributions. The average accuracy of wheat ear detection on the three data sets was 94%, 96.04%, and 93.11%, respectively. Moreover, the wheat ears were counted on 60 wheat images, giving R2 = 0.8968 for WD, 0.955 for WEDD, and 0.9884 for GWHDD. In short, the CBAM-YOLOv4 model can meet the practical requirements of wheat ear detection and counting, providing technical support for the high-throughput extraction of other crop parameters.
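For readers unfamiliar with CBAM, the sketch below shows a compact PyTorch implementation of the channel-plus-spatial attention module that the improved YOLOv4 is described as using. It follows the commonly published CBAM design; the reduction ratio and kernel size are conventional defaults, not values reported by the authors.

```python
# Compact CBAM sketch: channel attention followed by spatial attention.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg = self.mlp(x.mean(dim=(2, 3)))   # global average pooling
        mx = self.mlp(x.amax(dim=(2, 3)))    # global max pooling
        w = torch.sigmoid(avg + mx).unsqueeze(-1).unsqueeze(-1)
        return x * w

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg = x.mean(dim=1, keepdim=True)    # channel-wise average
        mx, _ = x.max(dim=1, keepdim=True)   # channel-wise max
        w = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * w

class CBAM(nn.Module):
    """Applies channel attention, then spatial attention, to a feature map."""
    def __init__(self, channels: int):
        super().__init__()
        self.ca, self.sa = ChannelAttention(channels), SpatialAttention()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.sa(self.ca(x))
```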


Molecules ◽  
2021 ◽  
Vol 26 (11) ◽  
pp. 3124
Author(s):  
Charles Farber ◽  
A. S. M. Faridul Islam ◽  
Endang M. Septiningsih ◽  
Michael J. Thomson ◽  
Dmitry Kurouski

Digital farming is a modern agricultural concept that aims to maximize crop yield while simultaneously minimizing the environmental impact of farming. Successful implementation of digital farming requires the development of sensors to detect and identify diseases and abiotic stresses in plants, as well as to probe the nutrient content of seeds and identify plant varieties. Experimental evidence of the suitability of Raman spectroscopy (RS) for confirmatory diagnostics of plant diseases was previously provided by our team and other research groups. In this study, we investigate the potential use of RS as a label-free, non-invasive and non-destructive analytical technique for the fast and accurate identification of nutrient components in the grains of 15 different rice genotypes. We demonstrate that spectroscopic analysis of intact rice seeds provides accurate rice variety identification for ~86% of samples. These results suggest that RS can be used for fully automated, fast and accurate identification of seed nutrient components.
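The abstract does not detail the chemometric model, so the sketch below shows a generic workflow often used for variety identification from Raman spectra: spectrum standardization, PCA for dimensionality reduction, and a linear discriminant classifier with cross-validation. All array shapes and data in the example are synthetic placeholders, not the study's measurements.

```python
# Generic chemometric pipeline for classifying seeds from Raman spectra.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def variety_classifier(n_components: int = 20):
    # Input spectra: array of shape (n_seeds, n_wavenumbers); labels: genotype IDs.
    return make_pipeline(StandardScaler(), PCA(n_components=n_components),
                         LinearDiscriminantAnalysis())

# Usage with synthetic placeholder spectra and genotype labels.
rng = np.random.default_rng(0)
spectra = rng.normal(size=(150, 1024))   # 150 seeds, 1024 Raman shifts (placeholder)
labels = rng.integers(0, 15, size=150)   # 15 rice genotypes
scores = cross_val_score(variety_classifier(), spectra, labels, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2%}")
```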


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Rajit Nair ◽  
Santosh Vishwakarma ◽  
Mukesh Soni ◽  
Tejas Patel ◽  
Shubham Joshi

Purpose The novel coronavirus (COVID-19), which first appeared in December 2019 in the city of Wuhan, China, rapidly spread around the world and became a pandemic. It has had a devastating impact on daily lives, public health and the global economy. Positive cases must be identified as soon as possible to avoid further dissemination of this disease and to ensure swift care of affected patients. The need for supportive diagnostic instruments has increased, as no specific automated toolkits are available. The latest results from radiology imaging techniques indicate that such images provide valuable information about COVID-19. Advanced artificial intelligence (AI) technologies coupled with radiological imagery can help diagnose this condition accurately and help compensate for the lack of specialist doctors in isolated areas. In this research, a model for the automatic detection of COVID-19 from raw chest X-ray images is presented. The proposed model, DarkCovidNet, is designed to provide accurate diagnostics for binary classification (COVID vs no findings) and multi-class classification (COVID vs no findings vs pneumonia). The implemented model achieved an average precision of 98.46% and 91.352% for binary and multi-class classification, respectively, and an average accuracy of 98.97% and 87.868%. The DarkNet model, which serves as the classifier for the YOLO (you only look once) real-time object detection method, was used as the basis of this work. A total of 17 convolutional layers were implemented, with different filters on each layer. This platform can be used by radiologists to verify their initial screening and can also be used to screen patients through the cloud. Design/methodology/approach This study uses the CNN-based Darknet-19 model, which acts as the backbone for a real-time object detection system; its architecture is designed for real-time object detection. The DarkCovidNet model was developed on the basis of the Darknet architecture with fewer layers and filters. The DarkNet architecture typically consists of 19 convolution layers and 5 max-pooling layers. Findings The work discussed in this paper is used to diagnose radiology images and to develop a model that can accurately predict or classify the disease. The data set used in this work consists of COVID-19 and non-COVID-19 images collected from various sources. The deep learning model DarkCovidNet applied to this data set showed significant performance in both binary and multi-class classification. The model achieved an average accuracy of 98.97% for the binary detection of COVID-19, whereas in multi-class classification it achieved an average accuracy of 87.868% when classifying COVID-19, no findings and pneumonia. Research limitations/implications One significant limitation of this work is that a limited number of chest X-ray images were used, even though the number of COVID-19 patients is increasing rapidly. In the future, the model will be trained and evaluated on larger data sets generated from local hospitals.
Originality/value Deep learning technology has made significant changes in the field of AI by generating good results, especially in pattern recognition. A typical CNN structure has a convolution layer that extracts features from the input with the filters it applies, a pooling layer that reduces the feature-map size for computational efficiency, and a fully connected layer, which is a neural network. A CNN model is created by combining one or more such layers, and its internal parameters are adjusted to accomplish a particular task, such as classification or object recognition.
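To make the Darknet-19 building block concrete, the sketch below shows an illustrative PyTorch version of the convolution, batch normalization and LeakyReLU blocks interleaved with max pooling that the architecture is described as using. The layer counts and channel widths are deliberately reduced; this is not the released DarkCovidNet.

```python
# Illustrative DarkNet-style classifier: conv + BN + LeakyReLU blocks and max pooling.
import torch
import torch.nn as nn

def dark_block(in_ch: int, out_ch: int) -> nn.Sequential:
    # Each DarkNet-style block: 3x3 convolution, batch normalization, LeakyReLU.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.LeakyReLU(0.1, inplace=True))

class TinyDarkNet(nn.Module):
    """Small DarkNet-flavoured classifier with a pooling layer after each block."""
    def __init__(self, num_classes: int = 3):   # COVID / no findings / pneumonia
        super().__init__()
        self.features = nn.Sequential(
            dark_block(1, 8),   nn.MaxPool2d(2),
            dark_block(8, 16),  nn.MaxPool2d(2),
            dark_block(16, 32), nn.MaxPool2d(2),
            dark_block(32, 64), nn.MaxPool2d(2))
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(64, num_classes))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))

# Usage: a batch of four single-channel 256x256 chest X-rays.
logits = TinyDarkNet()(torch.randn(4, 1, 256, 256))
```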


2017 ◽  
Vol 27 (08) ◽  
pp. 1750033 ◽  
Author(s):  
Alborz Rezazadeh Sereshkeh ◽  
Robert Trott ◽  
Aurélien Bricout ◽  
Tom Chau

Brain–computer interfaces (BCIs) for communication can be nonintuitive, often requiring the performance of hand motor imagery or some other conversation-irrelevant task. In this paper, electroencephalography (EEG) was used to develop two intuitive online BCIs based solely on covert speech. The goal of the first BCI was to differentiate between 10 s of mental repetitions of the word “no” and an equivalent duration of unconstrained rest. The second BCI was designed to discern between 10 s each of covert repetition of the words “yes” and “no”. Twelve participants used these two BCIs to answer yes or no questions. Each participant completed four sessions, comprising two offline training sessions and two online sessions, one for testing each of the BCIs. With a support vector machine and a combination of spectral and time-frequency features, an average accuracy of [Formula: see text] was reached across participants in the online classification of no versus rest, with 10 out of 12 participants surpassing the chance level (60.0% for [Formula: see text]). The online classification of yes versus no yielded an average accuracy of [Formula: see text], with eight participants exceeding the chance level. Task-specific changes in EEG beta and gamma power in language-related brain areas tended to provide discriminatory information. To our knowledge, this is the first report of online EEG classification of covert speech. Our findings support further study of covert speech as a BCI activation task, potentially leading to the development of more intuitive BCIs for communication.
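As an illustration of the spectral-feature pipeline described (band power in the beta and gamma ranges feeding a support vector machine), the sketch below computes Welch band-power features per EEG channel and trains an SVM. The sampling rate, band edges, and classifier settings are assumptions for illustration, not the study's exact configuration, and the time-frequency features are omitted.

```python
# Band-power features per EEG channel, classified with an SVM.
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC

FS = 256                     # assumed sampling rate (Hz)
BANDS = {"beta": (13, 30), "gamma": (30, 45)}

def band_power_features(trial: np.ndarray) -> np.ndarray:
    """trial: (n_channels, n_samples) EEG segment -> band-power feature vector."""
    freqs, psd = welch(trial, fs=FS, nperseg=FS * 2, axis=-1)
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(psd[:, mask].mean(axis=-1))   # mean power per channel
    return np.log(np.hstack(feats))                # log power is a common choice

def train_bci(trials: list[np.ndarray], labels: np.ndarray) -> SVC:
    # trials: list of (n_channels, n_samples) covert-speech or rest segments
    X = np.vstack([band_power_features(t) for t in trials])
    return SVC(kernel="rbf", C=1.0).fit(X, labels)
```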


Agronomy ◽  
2021 ◽  
Vol 11 (12) ◽  
pp. 2388
Author(s):  
Sk Mahmudul Hassan ◽  
Michal Jasinski ◽  
Zbigniew Leonowicz ◽  
Elzbieta Jasinska ◽  
Arnab Kumar Maji

Various plant diseases are major threats to agriculture. For the timely and effective control of plant diseases, automated identification is highly beneficial. So far, different techniques have been used to identify diseases in plants, and deep learning is among the most widely used in recent times due to its impressive results. In this work, we propose two methods, namely shallow VGG with RF and shallow VGG with Xgboost, to identify the diseases. The proposed models are compared with other hand-crafted and deep learning-based approaches. The experiments are carried out on three different plants, namely corn, potato, and tomato. The diseases considered in corn are blight, common rust, and gray leaf spot; in potato, early blight and late blight; and in tomato, bacterial spot, early blight, and late blight. The results show that our shallow VGG with Xgboost model outperforms different deep learning models in terms of accuracy, precision, recall, f1-score, and specificity. Shallow Visual Geometric Group (VGG) with Xgboost gives the highest accuracy rate of 94.47% on corn, 98.74% on potato, and 93.91% on the tomato dataset. The models are also tested with field images of potato, corn, and tomato; even on field images, the average accuracies obtained using shallow VGG with Xgboost are 94.22%, 97.36%, and 93.14%, respectively.
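A hedged sketch of the "shallow VGG with Xgboost" idea follows: a truncated, ImageNet-pretrained VGG16 serves as a fixed feature extractor and an XGBoost classifier is trained on the extracted features. The truncation depth and XGBoost hyperparameters are assumptions, not the authors' reported settings.

```python
# Truncated VGG16 as a fixed feature extractor, XGBoost as the classifier.
# Requires torchvision >= 0.13 for the weights enum used below.
import torch
import torch.nn as nn
from torchvision import models
from xgboost import XGBClassifier

# Keep only the first two convolutional blocks of VGG16 ("shallow" VGG).
vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
shallow_vgg = nn.Sequential(*list(vgg.features.children())[:10],
                            nn.AdaptiveAvgPool2d(1), nn.Flatten()).eval()

@torch.no_grad()
def extract_features(images: torch.Tensor) -> torch.Tensor:
    # images: (N, 3, 224, 224) normalized leaf images -> (N, 128) feature vectors
    return shallow_vgg(images)

def train_classifier(images: torch.Tensor, labels) -> XGBClassifier:
    X = extract_features(images).numpy()
    clf = XGBClassifier(n_estimators=300, max_depth=6, learning_rate=0.1)
    return clf.fit(X, labels)
```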


Plants are prone to various diseases caused by multiple factors such as environmental conditions, light, bacteria, and fungi. These diseases typically leave physical characteristics on the leaves, stems, and fruit, such as changes in natural appearance, spots, and size. Because many diseases produce similar patterns, distinguishing and identifying the category of a plant disease is a challenging task. Efficient and reliable mechanisms are therefore needed so that accurate identification and prevention can be performed early enough to avoid losses to the entire plant. An automated identification system can thus be a key factor in preventing losses in cultivation and maintaining the high quality of agricultural products. This paper introduces a rose plant leaf disease classification technique based on a feature extraction process and a supervised learning mechanism. The outcomes of the proposed study justify the scope of the proposed system in terms of accuracy in classifying different kinds of rose plant disease.


Much of the Indian economy relies on agriculture, so identifying crop diseases at an early stage is crucial: plant diseases cause a large drop in production and in farmers' income, and they degrade the crop, which emphasizes the need for early detection. Detection of plant diseases has become an active research topic. Traditionally, farmers identified and detected plant diseases with the naked eye, which did not help much because the disease may already have caused considerable damage by the time it was noticed. The tomato crop makes up a large part of Indian cuisine and is prone to various air-borne and soil-borne diseases. In this paper, we automate tomato plant leaf disease detection by studying various features of diseased and healthy leaves. The technique used is pattern recognition with a back-propagation neural network (BPNN), and the results of this network are compared on different feature sets. The steps involved are image acquisition, image pre-processing, feature extraction, subset creation, and BPNN classification.
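The BPNN classification step could be sketched as follows with a multilayer perceptron trained by back-propagation on the extracted leaf features; the feature dimensionality, hidden-layer size, and training hyperparameters are illustrative assumptions, not the paper's reported configuration.

```python
# Back-propagation neural network (MLP) on hand-crafted leaf features.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def train_bpnn(features: np.ndarray, labels: np.ndarray):
    """features: (n_leaves, n_features) from the extraction step; labels: disease IDs."""
    bpnn = make_pipeline(
        StandardScaler(),
        MLPClassifier(hidden_layer_sizes=(32,), activation="logistic",
                      solver="sgd", learning_rate_init=0.01, max_iter=2000))
    return bpnn.fit(features, labels)

# Example with placeholder data: 200 leaves, 12 features, 4 disease classes.
rng = np.random.default_rng(1)
model = train_bpnn(rng.normal(size=(200, 12)), rng.integers(0, 4, size=200))
```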

