An Improved Deep Learning Model for Plant Disease Detection

2020 ◽  
Vol 8 (6) ◽  
pp. 5389-5392

In the current era, Deep Convolutional Neural Networks (DCNNs) have drastically improved the localization, identification, and detection of objects. In recent years, big data has evolved, with modern tools such as surveillance video cameras generating huge volumes of data. In this paper, we focus on plant image data in the agricultural field. Agriculture is one of the major sources of livelihood in India, and increasing yield by detecting and preventing diseases plays a major role in this domain. Using an improved and customized DCNN model (improved-detect), we trained on the PlantDoc and PlantVillage datasets, mainly using tomato, corn, and potato plants for model training and testing, and experimented on a plant image data set of both healthy and diseased tomato leaves. Experimental results are compared with state-of-the-art architectures such as MobileNet, DarkNet-19, and ResNet-101; the proposed model outperforms them in the localization and detection of plant diseases and obtains the best results in computation and accuracy. In the results sections below, we present the results with the corresponding models.
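As a rough illustration of the training pipeline described above, the following sketch builds and trains a small custom CNN classifier on leaf images arranged in per-class folders (e.g. a PlantVillage-style export). The directory path, input size, and layer configuration are illustrative assumptions, not the authors' improved-detect architecture.

```python
# Hedged sketch: a small custom CNN leaf-disease classifier trained on images
# arranged in class sub-folders. Path, image size, and layers are assumptions.
import tensorflow as tf

IMG_SIZE = (224, 224)
DATA_DIR = "data/plantvillage"  # assumed layout: DATA_DIR/<class_name>/*.jpg

train_ds = tf.keras.utils.image_dataset_from_directory(
    DATA_DIR, validation_split=0.2, subset="training", seed=42,
    image_size=IMG_SIZE, batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    DATA_DIR, validation_split=0.2, subset="validation", seed=42,
    image_size=IMG_SIZE, batch_size=32)
num_classes = len(train_ds.class_names)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=IMG_SIZE + (3,)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(128, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)
```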

Author(s):  
Aditya Rajbongshi ◽  
Thaharim Khan ◽  
Md. Mahbubur Rahman ◽  
Anik Pramanik ◽  
Shah Md Tanvir Siddiquee ◽  
...  

The recognition of plant diseases plays an indispensable part in taking infection-prevention measures to improve the quality and quantity of harvest yield. Automating the detection of plant diseases is highly advantageous, as it greatly reduces the monitoring work in the large cultivated areas where mango is planted. As leaves are the food source for plants, early and precise recognition of leaf diseases is significant. This work focuses on classifying and distinguishing mango leaf diseases using CNNs. The CNN models DenseNet201, InceptionResNetV2, InceptionV3, ResNet50, ResNet152V2, and Xception are used here with transfer learning techniques to obtain better accuracy on the targeted data set. Image acquisition, image segmentation, and feature extraction are the steps involved in disease detection. The data set of 1,500 images covers four leaf disease classes, anthracnose, gall machi, powdery mildew, and red rust, together with an additional class of healthy mango leaf images. We also evaluated the overall performance metrics and found that DenseNet201 outperforms the other models, obtaining the highest accuracy of 98.00%.
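The following hedged sketch shows transfer learning with a frozen DenseNet201 backbone and a new five-class head (four diseases plus healthy), in the spirit of the approach above; the input size, dropout rate, and learning rate are illustrative assumptions rather than the authors' exact configuration.

```python
# Hedged sketch: DenseNet201 transfer learning for five mango-leaf classes.
# Hyperparameters are assumptions, not the authors' reported settings.
import tensorflow as tf

NUM_CLASSES = 5
base = tf.keras.applications.DenseNet201(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the ImageNet backbone, train only the new head

inputs = tf.keras.Input(shape=(224, 224, 3))
x = tf.keras.applications.densenet.preprocess_input(inputs)
x = base(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
x = tf.keras.layers.Dropout(0.3)(x)
outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=15)
# train_ds / val_ds would be built from the mango-leaf image folders.
```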


2019 ◽  
Vol 109 (6) ◽  
pp. 1083-1087 ◽  
Author(s):  
Dor Oppenheim ◽  
Guy Shani ◽  
Orly Erlich ◽  
Leah Tsror

Many plant diseases have distinct visual symptoms, which can be used to identify and classify them correctly. This article presents a potato disease classification algorithm that leverages these distinct appearances and the advances in computer vision made possible by deep learning. The algorithm uses a deep convolutional neural network, training it to classify tubers into five classes: four disease classes and a healthy potato class. The database of images used in this study, containing potato tubers of different cultivars, sizes, and diseases, was acquired, classified, and labeled manually by experts. The models were trained over different train-test splits to better understand the amount of image data needed to apply deep learning to such classification tasks. The models were tested over a data set of images taken with standard low-cost RGB (red, green, and blue) sensors and tagged by experts, demonstrating high classification accuracy. This is the first article to report the successful implementation of deep convolutional networks, popular in object identification, for the task of disease identification in potato tubers, showing the potential of deep learning techniques in agricultural tasks.
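A minimal sketch of the train-test-split experiment described above, assuming placeholder features and a simple classifier standing in for the authors' deep network; the point is only to show how accuracy can be tracked as the training fraction grows.

```python
# Hedged sketch: re-training the same model over several train-test splits to see
# how much labeled data is needed. Features, labels, and classifier are placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X = np.random.rand(500, 64)          # placeholder image features
y = np.random.randint(0, 5, 500)     # 5 classes: 4 diseases + healthy

for train_frac in (0.1, 0.25, 0.5, 0.75, 0.9):
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, train_size=train_frac, stratify=y, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    acc = accuracy_score(y_te, clf.predict(X_te))
    print(f"train fraction {train_frac:.2f}: test accuracy {acc:.3f}")
```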


2020 ◽  
Vol 5 (2) ◽  
pp. 105-118
Author(s):  
Saluky Saluky

Computer vision is an important and challenging area of research in image processing applied to video analytics. Image data come from CCTV surveillance cameras spread across public places owned by the government, the private sector, and the public. Surveillance is carried out to monitor anomalies in the surrounding environment such as abandoned objects, crowds, theft, and others. An abandoned object is one of the anomalies that is important to monitor because it can be categorized as a danger, and monitoring it can also prevent theft of the object left behind; therefore, automatic monitoring is needed to prevent adverse events from occurring. In the last decade, a number of publications have been presented in the field of intelligent visual surveillance for abandoned object detection (AOD). In this paper, we present a state-of-the-art review showing the overall progress in recent years on the detection of objects that are abandoned or removed from surveillance video. We include a brief introduction to abandoned object detection with its problems and challenges. The aim of this paper is to provide a review of the literature on the recognition of abandoned objects in visual surveillance systems, with a general framework for researchers in this field.
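By way of illustration, the sketch below implements a classical background-subtraction baseline of the kind many AOD pipelines reviewed in such surveys build on: foreground blobs that persist in roughly the same place over many frames are flagged as candidate abandoned objects. The video path, blob-size threshold, and persistence threshold are assumptions.

```python
# Hedged sketch: background subtraction + persistence counting as a crude AOD baseline.
import cv2

cap = cv2.VideoCapture("cctv_clip.mp4")          # assumed input clip
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=False)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
static_counts = {}                               # rough per-location persistence counter

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame, learningRate=0.001)  # slow update keeps static objects in the foreground
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel, iterations=2)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h < 400:                          # ignore small noise blobs
            continue
        key = (x // 20, y // 20)                 # coarse grid cell as a crude track id
        static_counts[key] = static_counts.get(key, 0) + 1
        if static_counts[key] > 150:             # stationary for ~150 frames -> candidate abandoned object
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
    cv2.imshow("abandoned-object candidates", frame)
    if cv2.waitKey(1) == 27:                     # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```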


2019 ◽  
Vol 2019 (1) ◽  
pp. 360-368
Author(s):  
Mekides Assefa Abebe ◽  
Jon Yngve Hardeberg

Different whiteboard image degradations strongly reduce the legibility of pen-stroke content as well as the overall quality of the images. Consequently, different researchers have addressed the problem through different image enhancement techniques. Most of the state-of-the-art approaches apply common image processing techniques such as background-foreground segmentation, text extraction, contrast and color enhancement, and white balancing. However, such conventional enhancement methods are incapable of recovering severely degraded pen-stroke content and produce artifacts in the presence of complex pen-stroke illustrations. To surmount these problems, the authors have proposed a deep learning based solution. They have contributed a new whiteboard image data set and adopted two deep convolutional neural network architectures for whiteboard image quality enhancement applications. Their evaluations of the trained models demonstrated superior performance over the conventional methods.
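For context, here is a minimal sketch of one conventional enhancement approach of the kind the abstract refers to: estimating the bright board background with a large morphological closing and normalising the photo by it. The file name and kernel size are assumptions; this is the sort of baseline the learned models are reported to surpass.

```python
# Hedged sketch: conventional whiteboard enhancement via background estimation.
import cv2
import numpy as np

img = cv2.imread("whiteboard.jpg")                           # assumed input photo
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (35, 35))
background = cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel)  # closing removes dark pen strokes
normalised = img.astype(np.float32) / (background.astype(np.float32) + 1e-6)
enhanced = np.clip(normalised * 255.0, 0, 255).astype(np.uint8)
cv2.imwrite("whiteboard_enhanced.jpg", enhanced)
```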


2020 ◽  
Vol 33 (6) ◽  
pp. 838-844
Author(s):  
Jan-Helge Klingler ◽  
Ulrich Hubbe ◽  
Christoph Scholz ◽  
Florian Volz ◽  
Marc Hohenhaus ◽  
...  

OBJECTIVE Intraoperative 3D imaging and navigation is increasingly used for minimally invasive spine surgery. A novel, noninvasive patient tracker that is adhered as a mask on the skin for 3D navigation necessitates a larger intraoperative 3D image set for appropriate referencing. This enlarged 3D image data set can be acquired by a state-of-the-art 3D C-arm device that is equipped with a large flat-panel detector. However, the presumably associated higher radiation exposure to the patient has essentially not yet been investigated and is therefore the objective of this study.

METHODS Patients were retrospectively included if a thoracolumbar 3D scan was performed intraoperatively between 2016 and 2019 using a 3D C-arm with a large 30 × 30-cm flat-panel detector (3D scan volume 4096 cm³) or a 3D C-arm with a smaller 20 × 20-cm flat-panel detector (3D scan volume 2097 cm³), and the dose area product was available for the 3D scan. Additionally, the fluoroscopy time and the number of fluoroscopic images per 3D scan, as well as the BMI of the patients, were recorded.

RESULTS The authors compared 62 intraoperative thoracolumbar 3D scans using the 3D C-arm with a large flat-panel detector and 12 3D scans using the 3D C-arm with a small flat-panel detector. Overall, the 3D C-arm with a large flat-panel detector required more fluoroscopic images per scan (mean 389.0 ± 8.4 vs 117.0 ± 4.6, p < 0.0001), leading to a significantly higher dose area product (mean 1028.6 ± 767.9 vs 457.1 ± 118.9 cGy × cm², p = 0.0044).

CONCLUSIONS The novel, noninvasive patient tracker mask facilitates intraoperative 3D navigation while eliminating the need for an additional skin incision with detachment of the autochthonous muscles. However, the use of this patient tracker mask requires a larger intraoperative 3D image data set for accurate registration, resulting in a 2.25 times higher radiation exposure to the patient. The use of the patient tracker mask should thus be based on an individual decision, especially taking into consideration the radiation exposure and the extent of instrumentation.
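A small check of the headline figure, using only the mean dose area products quoted in the results (the per-scan raw values are not available here):

```python
# Hedged sketch: recomputing the exposure ratio from the reported mean dose area products.
large_panel_mean_dap = 1028.6   # cGy*cm^2, large 30x30 cm flat-panel detector
small_panel_mean_dap = 457.1    # cGy*cm^2, small 20x20 cm flat-panel detector

ratio = large_panel_mean_dap / small_panel_mean_dap
print(f"Radiation exposure ratio: {ratio:.2f}x")  # ~2.25x, as reported
```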


2019 ◽  
Vol 11 (10) ◽  
pp. 1157 ◽  
Author(s):  
Jorge Fuentes-Pacheco ◽  
Juan Torres-Olivares ◽  
Edgar Roman-Rangel ◽  
Salvador Cervantes ◽  
Porfirio Juarez-Lopez ◽  
...  

Crop segmentation is an important task in Precision Agriculture, where the use of aerial robots with an on-board camera has contributed to the development of new solution alternatives. We address the problem of fig plant segmentation in top-view RGB (Red-Green-Blue) images of a crop grown under difficult open-field circumstances of complex lighting conditions and non-ideal crop maintenance practices defined by local farmers. We present a Convolutional Neural Network (CNN) with an encoder-decoder architecture that classifies each pixel as crop or non-crop using only raw colour images as input. Our approach achieves a mean accuracy of 93.85% despite the complexity of the background and the highly variable visual appearance of the leaves. We make our CNN code available to the research community, as well as the aerial image data set and a hand-made ground truth segmentation with pixel precision, to facilitate the comparison among different algorithms.
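A minimal sketch of an encoder-decoder CNN for per-pixel crop/non-crop classification, assuming an illustrative input size and filter counts rather than the authors' exact architecture:

```python
# Hedged sketch: a small encoder-decoder CNN producing a per-pixel crop probability map.
import tensorflow as tf

inputs = tf.keras.Input(shape=(256, 256, 3))
# encoder: two downsampling stages
x = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
x = tf.keras.layers.MaxPooling2D()(x)
x = tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu")(x)
x = tf.keras.layers.MaxPooling2D()(x)
# bottleneck
x = tf.keras.layers.Conv2D(128, 3, padding="same", activation="relu")(x)
# decoder: two upsampling stages back to input resolution
x = tf.keras.layers.UpSampling2D()(x)
x = tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu")(x)
x = tf.keras.layers.UpSampling2D()(x)
x = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(x)
outputs = tf.keras.layers.Conv2D(1, 1, activation="sigmoid")(x)  # crop vs non-crop per pixel

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```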


2021 ◽  
Vol 11 (13) ◽  
pp. 5931
Author(s):  
Ji’an You ◽  
Zhaozheng Hu ◽  
Chao Peng ◽  
Zhiqiang Wang

Large amounts of high-quality image data are the basis and premise of high-accuracy object detection with convolutional neural networks (CNNs). It is challenging to collect varied, high-quality ship image data in the marine environment. To address this, a novel CNN-based method is proposed to generate a large number of high-quality ship images. We obtained ship images with different perspectives and different sizes by adjusting the ships’ postures and sizes in three-dimensional (3D) simulation software, and then transformed the 3D ship data into 2D ship images according to the principle of pinhole imaging. We selected specific experimental scenes as background images, and the target ships of the 2D ship images were superimposed onto the background images to generate “Simulation–Real” ship images (named SRS images hereafter). Additionally, an image annotation method based on SRS images was designed. Finally, a CNN-based target detection algorithm was used to train and test on the generated SRS images. The proposed method is suitable for quickly generating a large number of high-quality ship image samples and the corresponding annotation data, significantly improving the accuracy of ship detection. The proposed annotation method is superior, in terms of labeling the SRS images, to annotation methods that label images manually with the image annotation software LabelMe and LabelImg.
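A hedged sketch of the compositing step: pasting a rendered, transparent-background ship cut-out onto a real background frame and emitting a bounding-box label derived directly from the paste position and render size. File names, coordinates, and the plain-text label format are assumptions, not the paper's exact pipeline.

```python
# Hedged sketch: building a "Simulation-Real" (SRS-style) image plus its annotation.
from PIL import Image

background = Image.open("harbour_scene.jpg").convert("RGBA")   # assumed real scene
ship = Image.open("rendered_ship.png").convert("RGBA")         # assumed 3D render with alpha channel

x, y = 320, 180                                                # chosen paste position
background.paste(ship, (x, y), mask=ship)                      # alpha-composite the ship onto the scene
background.convert("RGB").save("srs_image.jpg")

# the annotation comes "for free" from the paste position and render size
w, h = ship.size
with open("srs_image.txt", "w") as f:
    f.write(f"ship {x} {y} {x + w} {y + h}\n")                 # class x_min y_min x_max y_max
```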


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
◽  
Elmar Kotter ◽  
Luis Marti-Bonmati ◽  
Adrian P. Brady ◽  
Nandita M. Desouza

Blockchain can be thought of as a distributed database that allows tracing the origin of data and who has manipulated a given data set in the past. Medical applications of blockchain technology are emerging. Blockchain has many potential applications in medical imaging, typically making use of the tracking of radiological or clinical data. Clinical applications of blockchain technology include the documentation of the contributions of different “authors”, including AI algorithms, to multipart reports; the documentation of the use of AI algorithms towards the diagnosis; the possibility to enhance the accessibility of relevant information in electronic medical records; and better control of users over their personal health records. Applications of blockchain in research include better traceability of image data within clinical trials, better traceability of the contributions of image and annotation data to the training of AI algorithms, thus enhancing privacy and fairness, and the potential to make imaging data for AI available in larger quantities. Blockchain also allows for dynamic consenting and has the potential to empower patients by giving them better control over who has accessed their health data. There are also many potential applications of blockchain technology for administrative purposes, such as keeping track of learning achievements or the surveillance of medical devices. This article gives a brief introduction to the basic technology and terminology of blockchain and concentrates on the potential applications of blockchain in medical imaging.
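As a toy illustration of the traceability idea, the sketch below builds a minimal hash chain in which each record commits to the previous record's hash, so later manipulation of any entry is detectable; it is a didactic example only, not a production medical-imaging ledger.

```python
# Hedged sketch: a toy hash chain tracking the provenance of an imaging data set.
import hashlib
import json
import time

def make_block(previous_hash: str, payload: dict) -> dict:
    block = {"timestamp": time.time(), "payload": payload, "prev": previous_hash}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

chain = [make_block("0" * 64, {"event": "dataset created", "images": 1200})]
chain.append(make_block(chain[-1]["hash"],
                        {"event": "annotations added", "annotator": "radiologist_A"}))
chain.append(make_block(chain[-1]["hash"],
                        {"event": "AI model trained", "algorithm": "segmentation_v1"}))

# verification: recompute each hash and check the chaining
for i, block in enumerate(chain):
    body = {k: v for k, v in block.items() if k != "hash"}
    assert block["hash"] == hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    if i > 0:
        assert block["prev"] == chain[i - 1]["hash"]
print("chain verified:", len(chain), "blocks")
```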


2005 ◽  
Author(s):  
D. Strobl ◽  
J. Raggam
