Real-time plant health assessment via implementing cloud-based scalable transfer learning on AWS DeepLens

PLoS ONE ◽  
2020 ◽  
Vol 15 (12) ◽  
pp. e0243243
Author(s):  
Asim Khan ◽  
Umair Nawaz ◽  
Anwaar Ulhaq ◽  
Randall W. Robinson

The control of plant leaf diseases is crucial as it affects the quality and production of plant species, with an effect on the economy of any country. Automated identification and classification of plant leaf diseases is, therefore, essential for the reduction of economic losses and the conservation of specific species. Various Machine Learning (ML) models have previously been proposed to detect and identify plant leaf disease; however, they lack usability due to hardware sophistication, limited scalability and inefficiency in realistic use. By implementing automatic detection and classification of leaf diseases in fruit trees (apple, grape, peach and strawberry) and vegetable plants (potato and tomato) through scalable transfer learning on Amazon Web Services (AWS) SageMaker and importing the model into AWS DeepLens for real-time functional usability, our proposed DeepLens Classification and Detection Model (DCDM) addresses such limitations. Scalability and ubiquitous access to our approach are provided by cloud integration. Our experiments on an extensive image data set of healthy and unhealthy fruit tree and vegetable plant leaves showed 98.78% accuracy with real-time diagnosis of plant leaf diseases. To train the DCDM deep learning model, we used forty thousand images and then evaluated it on ten thousand images. It takes an average of 0.349 s to test an image for disease diagnosis and classification using AWS DeepLens, providing the consumer with disease information in less than a second.
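
The abstract does not spell out the base network or training code; as a rough illustration of the transfer-learning step described above (fine-tuning a pretrained backbone on leaf images before deploying to DeepLens), here is a minimal sketch in PyTorch. The backbone choice, class count and dataset path are assumptions, not details from the paper.

```python
# Minimal transfer-learning sketch for leaf-disease classification.
# The paper trains on AWS SageMaker and deploys to AWS DeepLens; the base
# network, framework and dataset layout below are assumptions for illustration.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

NUM_CLASSES = 24                     # hypothetical: healthy/diseased classes across 6 species
DATA_DIR = "leaf_dataset/train"      # hypothetical ImageFolder layout

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_ds = datasets.ImageFolder(DATA_DIR, transform=transform)
train_dl = torch.utils.data.DataLoader(train_ds, batch_size=32, shuffle=True)

# Start from ImageNet weights and replace only the classification head.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
for p in model.parameters():
    p.requires_grad = False          # freeze the pretrained feature extractor
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in train_dl:      # one illustrative epoch
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```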


Author(s):  
Jianping Ju ◽  
Hong Zheng ◽  
Xiaohang Xu ◽  
Zhongyuan Guo ◽  
Zhaohui Zheng ◽  
...  

Abstract: Although convolutional neural networks have achieved success in the field of image classification, there are still challenges in agricultural product quality sorting, such as machine-vision-based jujube defect detection. The performance of jujube defect detection mainly depends on the feature extraction and the classifier used. Due to the diversity of jujube materials and the variability of the testing environment, the traditional method of manually extracting features often fails to meet the requirements of practical application. In this paper, a jujube sorting model for small data sets, based on a convolutional neural network and transfer learning, is proposed to meet the practical demands of jujube defect detection. First, the original images collected from the actual jujube sorting production line were pre-processed and augmented to establish a data set of five categories of jujube defects. The original CNN model is then improved by embedding the SE module and by using the triplet loss function and the center loss function in place of the softmax loss function. Finally, a model pre-trained on the ImageNet image data set was trained on the jujube defects data set, so that the parameters of the pre-trained model could fit the distribution of the jujube defect images, completing the transfer of the model and realizing the detection and classification of jujube defects. The classification results are visualized by heatmaps, and classification accuracy and confusion matrices are analyzed against the comparison models. The experimental results show that the SE-ResNet50-CL model optimizes the fine-grained classification problem of jujube defect recognition, with a test accuracy of 94.15%. The model has good stability and high recognition accuracy in complex environments.
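
To make the model changes concrete, the sketch below shows the two ingredients the abstract names: a squeeze-and-excitation (SE) block and a metric-learning objective combining triplet loss and center loss. The block placement, feature dimension and loss weighting are assumptions for illustration, not the authors' exact SE-ResNet50-CL implementation.

```python
# Sketch of the squeeze-and-excitation (SE) idea and a center-loss term as
# described for the SE-ResNet50-CL model; exact placement of SE blocks and the
# loss weighting are assumptions, not the authors' implementation.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Channel re-weighting: squeeze (global pool) then excite (two FC layers)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )
    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(x.mean(dim=(2, 3))).view(b, c, 1, 1)
        return x * w

class CenterLoss(nn.Module):
    """Pulls each embedding toward a learnable per-class center."""
    def __init__(self, num_classes, feat_dim):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))
    def forward(self, features, labels):
        return ((features - self.centers[labels]) ** 2).sum(dim=1).mean()

# Combined objective (weight lam is illustrative): the paper replaces the
# softmax loss with triplet loss plus center loss on the extracted features.
num_classes, feat_dim = 5, 2048
center_loss = CenterLoss(num_classes, feat_dim)
triplet_loss = nn.TripletMarginLoss(margin=0.3)

def total_loss(anchor, positive, negative, anchor_labels, lam=0.01):
    return triplet_loss(anchor, positive, negative) + lam * center_loss(anchor, anchor_labels)
```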


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Rajit Nair ◽  
Santosh Vishwakarma ◽  
Mukesh Soni ◽  
Tejas Patel ◽  
Shubham Joshi

Purpose The novel 2019 coronavirus (COVID-19), which first appeared in December 2019 in the city of Wuhan, China, rapidly spread around the world and became a pandemic. It has had a devastating impact on daily lives, public health and the global economy. Positive cases must be identified as soon as possible to avoid further dissemination of this disease and to ensure swift care of affected patients. The need for supportive diagnostic instruments has increased, as no specific automated toolkits are available. The latest results from radiology imaging techniques indicate that these images provide valuable details on the COVID-19 virus. Advanced artificial intelligence (AI) technologies applied to radiological imagery can help diagnose this condition accurately and help compensate for the lack of specialist doctors in isolated areas. In this research, a new paradigm for automatic detection of COVID-19 from bare chest X-ray images is presented. The proposed model, DarkCovidNet, is designed to provide accurate diagnostics for binary classification (COVID vs no findings) and multi-class classification (COVID vs no findings vs pneumonia). The implemented model achieved an average precision of 98.46% and 91.352% for the binary and multi-class classification, respectively, and an average accuracy of 98.97% and 87.868%. The DarkNet model, which serves as the classifier in the "you only look once" (YOLO) real-time object detection system, was used in this research. A total of 17 convolutional layers, with different filters on each layer, have been implemented. This platform can be used by radiologists to verify their initial screening and can also be used to screen patients through the cloud. Design/methodology/approach This study uses the CNN-based Darknet-19 model, which acts as the platform for a real-time object detection system. The architecture of this system is designed so that it can detect objects in real time. This study developed the DarkCovidNet model based on the Darknet architecture, with fewer layers and filters. Before discussing the DarkCovidNet model, the concept of the Darknet architecture and its functionality is outlined: typically, the DarkNet architecture consists of 19 convolution layers and 5 max-pooling layers. Findings The work discussed in this paper is used to diagnose various radiology images and to develop a model that can accurately predict or classify the disease. The data set used in this work consists of COVID-19 and non-COVID-19 images taken from various sources. The deep learning model named DarkCovidNet was applied to the data set and showed significant performance in both binary and multi-class classification. In binary classification, the model achieved an average accuracy of 98.97% for the detection of COVID-19, whereas in multi-class classification it achieved an average accuracy of 87.868% when classifying COVID-19, no findings and pneumonia. Research limitations/implications One significant limitation of this work is that a limited number of chest X-ray images were used. It is observed that the number of patients related to COVID-19 is increasing rapidly. In the future, the model will be trained on a larger data set generated from local hospitals, and its performance on that data will be evaluated.
Originality/value Deep learning technology has made significant changes in the field of AI by generating good results, especially in pattern recognition. A typical CNN structure includes a convolution layer that extracts features from the input with the filters it applies, a pooling layer that reduces the size for computational performance, and a fully connected layer, which is a neural network. A CNN model is created by combining one or more of these layers, and its internal parameters are adjusted to accomplish a particular task, such as classification or object recognition.
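
As a rough illustration of the DarkNet-style stacking described in the methodology (convolution + batch normalization + LeakyReLU blocks interleaved with max pooling), here is a minimal sketch; the channel sizes and depth are illustrative only and do not reproduce the exact 17-layer DarkCovidNet.

```python
# Sketch of a DarkNet-style building block (conv + batch norm + LeakyReLU)
# of the kind DarkCovidNet is reported to stack; layer counts and channel
# sizes here are illustrative, not the published architecture.
import torch.nn as nn

def dark_block(in_ch, out_ch, kernel_size=3):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size, padding=kernel_size // 2, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.LeakyReLU(0.1, inplace=True),
    )

# Illustrative stem: a few conv blocks interleaved with max pooling, ending in
# a classification head for 3 classes (COVID-19 / no findings / pneumonia).
model = nn.Sequential(
    dark_block(1, 8), nn.MaxPool2d(2),
    dark_block(8, 16), nn.MaxPool2d(2),
    dark_block(16, 32), dark_block(32, 16, 1), dark_block(16, 32), nn.MaxPool2d(2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 3),
)
```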


Author(s):  
Aditya Rajbongshi ◽  
Thaharim Khan ◽  
Md. Mahbubur Rahman ◽  
Anik Pramanik ◽  
Shah Md Tanvir Siddiquee ◽  
...  

The recognition of plant diseases plays a vital role in taking disease-prevention measures to improve the quality and quantity of crop yield. Automation of plant disease detection is highly advantageous, as it greatly reduces monitoring work over the large cultivated areas where mango is planted. Since leaves are the food source for plants, early and precise recognition of leaf diseases is important. This work focuses on classifying and identifying diseases of mango leaves using CNNs. The CNN models DenseNet201, InceptionResNetV2, InceptionV3, ResNet50, ResNet152V2 and Xception, combined with transfer learning techniques, are used here to obtain better accuracy on the target data set. Image acquisition, image segmentation and feature extraction are the steps involved in disease detection. The data set consists of 1500 images of diseased and healthy mango leaves; the disease classes are anthracnose, gall machi, powdery mildew and red rust, and an additional healthy class is also included. We also evaluated the overall performance metrics and found that DenseNet201 outperforms the other models, obtaining the highest accuracy of 98.00%.
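
A minimal sketch of the transfer-learning setup implied by the abstract, using DenseNet201 (the best-performing model reported) with a frozen ImageNet backbone and a new classification head; the input size, directory layout and hyperparameters are assumptions, not the authors' configuration.

```python
# Hedged sketch: DenseNet201 with frozen ImageNet weights and a new 5-class head.
import tensorflow as tf

NUM_CLASSES = 5  # anthracnose, gall machi, powdery mildew, red rust, healthy

base = tf.keras.applications.DenseNet201(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))
base.trainable = False  # keep ImageNet features, train only the new head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# Hypothetical directory of mango-leaf images, one subfolder per class.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "mango_leaves/train", image_size=(224, 224), label_mode="categorical")
model.fit(train_ds, epochs=10)
```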


2020 ◽  
Vol 10 (7) ◽  
pp. 901-914
Author(s):  
D. Indumathy ◽  
S. Sudha

Cardiac arrest in humans arises from blood vessel diseases or heart defects. Blood vessel diseases result from the blockage of blood in the heart vessels, which leads to pain in the heart. Heart defects occur because of damage to the cardiac muscles, indicated by abnormal heart rhythms. Cardiovascular diseases cause mortality that could be avoided through earlier detection. The major cause of cardiovascular disease is cholesterol deposition inside the artery walls, which later forms plaques that block the blood flow. Until now, plaques have been detected through medical imaging only after a heart attack. The plaques are blasted through angioplasty or reduced with medicine. Classifying the plaques before treatment leads to effective medication based on the type of plaque. The plaque sub-types, namely rupture-prone plaque, ruptured plaque with sub-occlusive thrombus, erosion-prone plaque, calcified nodule and non-plaque, are segmented and identified. In this paper, we propose a novel Spatial Fuzzy Propensity Score Matching (SFPSM) method to classify the plaques. The SFPSM method consists of clustering, ranking the clusters and region-based pixel-wise analysis. The pixel analysis inspects specific regions of sub-pixel points and calibrates the plaque. From the experimental results, the classification of plaques based on the 50-image data set exhibited an accuracy of 85% after validation. The plaque classification accuracy provides the standard digital number values for the sub-classification of plaques.
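
The abstract describes SFPSM only at a high level (clustering, cluster ranking, region-based pixel-wise analysis). As a generic stand-in for the clustering step, the sketch below implements plain fuzzy c-means on pixel intensities; it is not the authors' SFPSM algorithm.

```python
# Generic fuzzy c-means clustering of pixel intensities, a common first step
# in spatial-fuzzy segmentation pipelines (illustrative stand-in only).
import numpy as np

def fuzzy_cmeans(pixels, n_clusters=3, m=2.0, iters=50, seed=0):
    """pixels: (N,) or (N, d) array; returns membership matrix U (N, c) and centers."""
    rng = np.random.default_rng(seed)
    x = np.atleast_2d(pixels.astype(float).reshape(len(pixels), -1))
    U = rng.random((len(x), n_clusters))
    U /= U.sum(axis=1, keepdims=True)                 # fuzzy memberships sum to 1
    for _ in range(iters):
        w = U ** m
        centers = (w.T @ x) / w.sum(axis=0)[:, None]  # membership-weighted cluster centers
        d = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))                # standard FCM membership update
        U /= U.sum(axis=1, keepdims=True)
    return U, centers

# Example: cluster the gray levels of one image slice into candidate
# plaque vs. background regions (placeholder data).
image = np.random.rand(64, 64)
U, centers = fuzzy_cmeans(image.ravel(), n_clusters=3)
labels = U.argmax(axis=1).reshape(image.shape)
```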


Author(s):  
GOZDE UNAL ◽  
GAURAV SHARMA ◽  
REINER ESCHBACH

Photography, lithography, xerography, and inkjet printing are the dominant technologies for color printing. Images produced on these different media are often scanned either for the purpose of copying or for creating an electronic representation. For improved color calibration during scanning, identification of the media from the scanned image data is desirable. In this paper, we propose an efficient algorithm for automated classification of input media into four major classes: photographic, lithographic, xerographic and inkjet. Our technique exploits the strong correlation between the type of input media and the spatial statistics of the corresponding images, which are observed in the scanned images. We adopt ideas from the spatial statistics literature and design two spatial statistical measures, of dispersion and periodicity, which are computed over spatial point patterns generated from blocks of the scanned image and whose distributions provide the features for making a decision. We utilized extensive training data and determined well-separated decision regions to classify the input media. We validated and tested our classification technique on an independent extensive data set. The results demonstrate that the proposed method is able to distinguish between the different media with high reliability.
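
The two statistics are not defined in the abstract; as an example of a dispersion measure that can be computed over a spatial point pattern extracted from an image block, the sketch below uses the Clark-Evans nearest-neighbour index. The point pattern and block size are placeholders, not the paper's actual features.

```python
# Clark-Evans nearest-neighbour index as one possible dispersion statistic
# for a point pattern extracted from a scanned-image block (illustrative).
import numpy as np
from scipy.spatial import cKDTree

def clark_evans_index(points, area):
    """R < 1: clustered, R ~ 1: random, R > 1: regularly dispersed."""
    tree = cKDTree(points)
    # distance to the nearest *other* point (k=2: the first neighbour is the point itself)
    d, _ = tree.query(points, k=2)
    observed = d[:, 1].mean()
    expected = 0.5 / np.sqrt(len(points) / area)   # expectation under complete spatial randomness
    return observed / expected

# Hypothetical point pattern: coordinates of halftone dots detected in a
# 128x128 block of the scanned image (random placeholders here).
block_size = 128
points = np.random.rand(200, 2) * block_size
print(clark_evans_index(points, area=block_size ** 2))
```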


2017 ◽  
Vol 37 (6) ◽  
pp. 549-554
Author(s):  
Fabiana Q. Mayer ◽  
Emily M. dos Reis ◽  
André Vinícius A. Bezerra ◽  
Rogério O. Rodrigues ◽  
Thais Michel ◽  
...  

ABSTRACT: Bovine tuberculosis (bTB) is a zoonosis causing economic losses and public health risks in many countries. Diagnosis of the disease in live animals is performed by the intradermal tuberculin test, which is based on delayed hypersensitivity reactions. As tuberculosis elicits a complex immune response, this test has limitations in sensitivity and specificity. This study sought to test an alternative approach for in vivo diagnosis of bovine tuberculosis, based on real-time polymerase chain reaction (PCR). DNA samples extracted from nasal swabs of live cows were used for SYBR® Green real-time PCR, which is able to differentiate between the Mycobacterium tuberculosis and Mycobacterium avium complexes. Statistical analysis was performed to compare the results of the tuberculin test, the in vivo gold standard for bTB diagnosis, with real-time PCR, thereby determining the specificity and sensitivity of the molecular method. The cervical comparative test (CCT) was performed in 238 animals, of which 193 had nasal-swab DNA suitable for molecular analysis, as indicated by amplification of the glyceraldehyde-3-phosphate dehydrogenase (GAPDH) gene, and were included in the study. In total, 25 (10.5%) of the animals were CCT reactive, of which none was positive in the molecular test. Of the 168 CCT-negative animals, four were positive for the M. tuberculosis complex by real-time PCR from nasal swabs. The comparison of these results yielded sensitivity and specificity values of 0% and 97.6%, respectively; moreover, low coefficients of agreement and correlation (-0.029 and -0.049, respectively) between the results of the two tests were also observed. This study showed that real-time PCR from nasal swabs is not suitable for in vivo diagnosis of bovine tuberculosis; thus, the tuberculin skin test is still the best option for this purpose.
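
The reported sensitivity and specificity follow directly from the counts in the text, taking the CCT result as the reference standard; the short check below reproduces the 0% and 97.6% figures.

```python
# Quick check of the reported sensitivity and specificity of nasal-swab
# real-time PCR against the CCT reference (counts taken from the text).
tp = 0        # CCT-reactive animals that were PCR-positive
fn = 25       # CCT-reactive animals that were PCR-negative
fp = 4        # CCT-negative animals that were PCR-positive
tn = 168 - 4  # remaining CCT-negative animals

sensitivity = tp / (tp + fn)   # 0 / 25    = 0.0
specificity = tn / (tn + fp)   # 164 / 168 ≈ 0.976
print(f"sensitivity = {sensitivity:.1%}, specificity = {specificity:.1%}")
# -> sensitivity = 0.0%, specificity = 97.6%
```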


2020 ◽  
Author(s):  
Erdi Acar ◽  
İhsan Yilmaz

Abstract: Diagnosing infected patients as soon as possible during the coronavirus disease 2019 (COVID-19) outbreak, which has been declared a pandemic by the World Health Organization (WHO), is extremely important. Experts recommend CT imaging as a diagnostic tool because of the weak points of the nucleic acid amplification test (NAAT). In this study, the detection of COVID-19 from CT images, which give the most accurate response in a short time, was investigated on classical computers and, for the first time, on quantum computers. Using the quantum transfer learning method, we experimentally perform COVID-19 detection on different real quantum processors from IBM (IBMQx2, IBMQ-London and IBMQ-Rome), as well as on different simulators (Pennylane, Qiskit-Aer and Cirq). Using a small data set of 126 COVID-19 and 100 normal CT images, we obtained a positive-or-negative classification of COVID-19 with 90% success on classical computers, while we achieved a high success rate of 94-100% on quantum computers. Also, according to the results obtained, a machine learning process that requires more processors and time on classical computers can be realized in a very short time with a very small quantum processor of only 4 qubits. When the data set is small, quantum computers appear to outperform classical computers at classifying COVID-19 versus normal, owing to the superior properties of quantum computation.
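
The abstract does not detail the circuit or backbone; the sketch below illustrates the general "dressed quantum circuit" style of quantum transfer learning (a frozen pretrained CNN feeding a small 4-qubit variational layer) in PennyLane with PyTorch. The embedding, entangler, backbone and layer sizes are assumptions, not the study's exact setup.

```python
# Hedged sketch of quantum transfer learning: classical pretrained features
# projected to 4 values, processed by a 4-qubit variational circuit, then
# classified into COVID-19 vs normal. Everything here is illustrative.
import pennylane as qml
import torch
import torch.nn as nn
from torchvision import models

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)   # swap for an IBMQ backend to run on hardware

@qml.qnode(dev, interface="torch")
def circuit(inputs, weights):
    qml.AngleEmbedding(inputs, wires=range(n_qubits))          # encode 4 classical features
    qml.BasicEntanglerLayers(weights, wires=range(n_qubits))   # trainable entangling layers
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

qlayer = qml.qnn.TorchLayer(circuit, weight_shapes={"weights": (6, n_qubits)})

# Frozen ResNet18 feature extractor -> 4-dim projection -> quantum layer -> 2 classes.
backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
for p in backbone.parameters():
    p.requires_grad = False            # only the new head (below) stays trainable
backbone.fc = nn.Sequential(nn.Linear(backbone.fc.in_features, n_qubits),
                            qlayer,
                            nn.Linear(n_qubits, 2))

logits = backbone(torch.rand(1, 3, 224, 224))   # smoke test on a dummy CT-sized input
```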

