A Deep-Learning-Based Real-Time Detector for Grape Leaf Diseases Using Improved Convolutional Neural Networks

2020 ◽  
Vol 11 ◽  
Author(s):  
Xiaoyue Xie ◽  
Yuan Ma ◽  
Bin Liu ◽  
Jinrong He ◽  
Shuqin Li ◽  
...  


2021 ◽ 
Vol 13 (3) ◽  
pp. 809-820
Author(s):  
V. Sowmya ◽  
R. Radha

Vehicle detection and recognition in a real-time traffic surveillance system demand substantial computational intelligence and resources if traffic is to be managed effectively under all possible contingencies. One focus area of deep intelligent systems is vehicle detection and recognition for robust management of heavy vehicles, using techniques such as the Support Vector Machine (SVM), Convolutional Neural Networks (CNN), Region-based Convolutional Neural Networks (R-CNN), and the You Only Look Once (YOLO) family of models. Choosing the right algorithm, one that also meets real-time constraints, is therefore pivotal. This study compares deep learning detectors, namely Faster R-CNN, YOLOv2, YOLOv3, and YOLOv4, across diverse aspects of their features. Two classes of heavy transport vehicles, buses and trucks, form the detection and recognition targets of the proposed work. Data augmentation and transfer learning are used to build, train, and test the models, reducing over-fitting while improving speed and accuracy. Extensive empirical evaluation is conducted on two standard datasets, COCO and PASCAL VOC 2007, and comparative results and analyses are presented for the real-time setting.
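
As a rough illustration of the transfer-learning setup this abstract describes, the sketch below loads a COCO-pretrained Faster R-CNN from torchvision, replaces its box-predictor head for the two target classes (bus and truck), and runs one dummy training step. The helper name build_bus_truck_detector, the label mapping, and the training hyperparameters are illustrative assumptions, not the authors' code.

```python
# A minimal sketch of a transfer-learning setup for bus/truck detection, assuming
# a COCO-pretrained Faster R-CNN whose box head is replaced for the target classes.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

def build_bus_truck_detector(num_classes: int = 3) -> torch.nn.Module:
    """Return a Faster R-CNN with a new head for background + bus + truck."""
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model

model = build_bus_truck_detector()
model.train()
optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)

# One dummy training step on a random image; a real run would iterate over an
# augmented COCO / PASCAL VOC data loader instead.
images = [torch.rand(3, 480, 640)]
targets = [{"boxes": torch.tensor([[50.0, 60.0, 300.0, 400.0]]),
            "labels": torch.tensor([1])}]      # 1 = bus, 2 = truck (assumed mapping)
loss_dict = model(images, targets)             # RPN + detection losses
loss = sum(loss_dict.values())
optimizer.zero_grad()
loss.backward()
optimizer.step()
```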


IEEE Access ◽  
2019 ◽  
Vol 7 ◽  
pp. 59069-59080 ◽  
Author(s):  
Peng Jiang ◽  
Yuehan Chen ◽  
Bin Liu ◽  
Dongjian He ◽  
Chunquan Liang

Author(s):  
Robinson Jiménez-Moreno ◽  
Javier Orlando Pinzón-Arenas ◽  
César Giovany Pachón-Suescún

This article presents work oriented to assistive robotics, in which a scenario is established for a robot to place a tool in the hand of a user after the user has verbally requested it by name. For this, three convolutional neural networks are trained: one for recognition of a group of tools (scalpel, screwdriver, and scissors), which reached an accuracy of 98% in identifying the tools established for the application; one for speech recognition, trained on the Spanish names of the tools, whose validation accuracy reached 97.5% in recognizing the words; and one for recognition of the user's hand, classifying two gestures, open and closed hand, with an accuracy of 96.25%. With these networks, real-time tests were performed, and each tool was delivered with 100% accuracy, i.e., the robot correctly identified what the user requested, correctly recognized each tool, and delivered the requested one when the user opened their hand, taking an average of 45 seconds to execute the application.
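
The abstract describes three independently trained networks that cooperate at run time. The sketch below shows one way such a request, locate, and deliver loop could be wired together; the Spanish vocabulary, the helper names, and the callable placeholders for the three models and the robot are assumptions, not the authors' implementation.

```python
# A minimal sketch of chaining the three networks (speech, tool recognition, hand
# gesture) into a delivery loop. All callables are placeholders for the real models.
from typing import Callable

TOOLS = {"bisturi": "scalpel", "destornillador": "screwdriver", "tijeras": "scissors"}

def delivery_cycle(listen: Callable[[], str],
                   locate_tool: Callable[[str], tuple],
                   hand_state: Callable[[], str],
                   robot_deliver: Callable[[tuple], None]) -> None:
    """Run one request -> grasp -> hand-over cycle."""
    word = listen()                      # speech CNN: predicted Spanish tool name
    if word not in TOOLS:
        return                           # unknown request, wait for the next one
    pose = locate_tool(TOOLS[word])      # tool-recognition CNN: tool class + pose
    while hand_state() != "open":        # hand CNN: 'open' vs 'closed' gesture
        pass                             # keep waiting until the user opens the hand
    robot_deliver(pose)                  # hand the tool over
```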


2020 ◽  
Vol 8 (6) ◽  
pp. 4781-4784

Dermatological diseases have a serious impact on the health of millions of people, with skin disorders of many types occurring every year. Because manual analysis of such diseases takes considerable time and effort, and because current methods typically target only a single type of skin disease, there is a need for higher-level computer-aided expertise in the analysis and diagnosis of multiple skin diseases. This paper proposes an approach that uses deep learning networks, namely Convolutional Neural Networks (CNN) and Residual Neural Networks (ResNet), to predict skin diseases in real time with higher accuracy than other neural networks.
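
A minimal sketch of the ResNet-based classification described here: an ImageNet-pretrained ResNet-50 whose final layer is replaced for a set of skin-disease classes. The class count, preprocessing choices, and function names are assumptions for illustration only.

```python
# A minimal sketch, assuming an ImageNet-pretrained ResNet-50 fine-tuned for
# NUM_CLASSES skin-disease categories; not the paper's exact model.
import torch
import torch.nn as nn
from torchvision import models, transforms

NUM_CLASSES = 7  # assumed number of skin-disease categories

model = models.resnet50(weights="IMAGENET1K_V2")
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def predict(image) -> int:
    """Classify a single lesion image (PIL.Image) and return the class index."""
    model.eval()
    with torch.no_grad():
        logits = model(preprocess(image).unsqueeze(0))
    return int(logits.argmax(dim=1))
```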


Information ◽  
2019 ◽  
Vol 10 (5) ◽  
pp. 157 ◽  
Author(s):  
Daniel S. Berman

Domain generation algorithms (DGAs) are a class of algorithms used by malware to generate large numbers of new domain names, establishing command-and-control (C2) communication between the malware program and its C2 server while evading detection by cybersecurity measures. Deep learning has proven successful as a mechanism for real-time DGA detection, specifically through the use of recurrent neural networks (RNNs) and convolutional neural networks (CNNs). This paper compares several state-of-the-art deep-learning implementations of DGA detection from the literature with two novel models: a deeper CNN model and a one-dimensional (1D) Capsule Network (CapsNet) model. The comparison shows that the 1D CapsNet model performs as well as the best-performing model from the literature.
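
For orientation, the sketch below shows a character-level 1D CNN of the kind compared in the paper, scoring a domain name as benign or DGA-generated. The vocabulary, embedding width, filter settings, and example domain are assumptions; this is not the paper's exact architecture.

```python
# A minimal sketch of a character-level 1D CNN for DGA detection; hyperparameters
# and vocabulary are assumptions for illustration.
import torch
import torch.nn as nn

class DomainCNN(nn.Module):
    def __init__(self, vocab_size: int = 40):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, 32, padding_idx=0)
        self.conv = nn.Conv1d(32, 128, kernel_size=3, padding=1)
        self.pool = nn.AdaptiveMaxPool1d(1)
        self.fc = nn.Linear(128, 1)           # benign vs. DGA-generated

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.embed(x).transpose(1, 2)     # (batch, channels, length) for Conv1d
        h = torch.relu(self.conv(h))
        h = self.pool(h).squeeze(-1)
        return self.fc(h)                     # raw logit; apply sigmoid for probability

# Example: encode a domain as character indices (0 is reserved for padding).
chars = "abcdefghijklmnopqrstuvwxyz0123456789-._"
to_idx = {c: i + 1 for i, c in enumerate(chars)}
domain = "xjwqkrtpzl.com"
encoded = torch.tensor([[to_idx.get(c, 0) for c in domain]])
score = torch.sigmoid(DomainCNN()(encoded))   # probability the domain is DGA-generated
```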


Author(s):  
A. Milioto ◽  
P. Lottes ◽  
C. Stachniss

UAVs are becoming an important tool for field monitoring and precision farming. A prerequisite for observing and analyzing fields is the ability to identify crops and weeds from image data. In this paper, we address the problem of detecting sugar beet plants and weeds in the field based solely on image data. We propose a system that combines vegetation detection and deep learning to obtain a high-quality classification of the vegetation in the field into value crops and weeds. We implemented and thoroughly evaluated our system on image data collected from different sugar beet fields and show that our approach accurately identifies the weeds in the field.
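
A minimal sketch of the vegetation-detection stage that could precede the crop/weed classifier: thresholding an NDVI map computed from co-registered red and near-infrared channels. The threshold value and array shapes are assumptions; the authors' pipeline may differ in detail.

```python
# A minimal sketch of NDVI-based vegetation masking; the threshold is an assumption.
import numpy as np

def vegetation_mask(red: np.ndarray, nir: np.ndarray, threshold: float = 0.3) -> np.ndarray:
    """Return a boolean mask of vegetated pixels from red and NIR reflectance maps."""
    red = red.astype(np.float32)
    nir = nir.astype(np.float32)
    ndvi = (nir - red) / (nir + red + 1e-6)   # avoid division by zero on dark pixels
    return ndvi > threshold

# Pixels inside the mask would then be cropped into patches and passed to the
# crop/weed classifier; everything else is treated as soil/background.
red = np.random.rand(480, 640)
nir = np.random.rand(480, 640)
mask = vegetation_mask(red, nir)
```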


Author(s):  
Juanjuan Hu ◽  
Jiawei Luo ◽  
Jia Ren ◽  
Lan Lan ◽  
Ying Zhang ◽  
...  

Objectives The aim of this study was to apply deep learning (DL) with convolutional neural networks (CNNs) to laryngoscopic imaging to assist in real-time automated segmentation and classification of vocal cord leukoplakia. Methods This single-center retrospective diagnostic study included 216 patients who underwent laryngoscopy and pathological examination from October 1, 2018 through October 1, 2019. Lesions were classified into a nonsurgical group (NSG) and a surgical group (SG) according to pathology. All selected images of vocal cord leukoplakia were annotated independently by 2 expert endoscopists and divided into a training set, a validation set, and a test set in a ratio of 6:2:2 for training the model. Results Among the 260 lesions identified in the 216 patients, 2220 images from narrow band imaging (NBI) and 2144 images from white light imaging (WLI) were selected. For segmentation, the average intersection-over-union (IoU) value exceeded 70%. For classification, the model identified the surgical group (SG) on laryngoscopy with a sensitivity of 0.93 and a specificity of 0.94 in WLI, and a sensitivity of 0.99 and a specificity of 0.97 in NBI. Moreover, the model achieved a mean average precision (mAP) of 0.81 in WLI and 0.92 in NBI at an IoU threshold of 0.5. Conclusions A model developed by applying DL with CNNs to laryngoscopic imaging achieved high sensitivity, specificity, and mAP for the automated segmentation and classification of vocal cord leukoplakia. This finding shows promise for applying DL with CNNs to assist in the accurate diagnosis of vocal cord leukoplakia from WLI and NBI.
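
The evaluation metrics reported here are standard; the sketch below computes mask IoU for the segmentation output and sensitivity/specificity for the SG-versus-NSG classification. It reflects the usual definitions rather than code from the study.

```python
# A minimal sketch of the reported metrics: segmentation IoU and classification
# sensitivity/specificity (1 = surgical group SG, 0 = nonsurgical group NSG).
import numpy as np

def mask_iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """Intersection-over-union between two boolean lesion masks."""
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return float(inter / union) if union else 1.0

def sensitivity_specificity(pred_labels: np.ndarray, true_labels: np.ndarray):
    """Return (sensitivity, specificity) for binary SG-vs-NSG predictions."""
    tp = np.sum((pred_labels == 1) & (true_labels == 1))
    tn = np.sum((pred_labels == 0) & (true_labels == 0))
    fp = np.sum((pred_labels == 1) & (true_labels == 0))
    fn = np.sum((pred_labels == 0) & (true_labels == 1))
    return tp / (tp + fn), tn / (tn + fp)
```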


2020 ◽  
Vol 2020 ◽  
pp. 1-10
Author(s):  
Yange Li ◽  
Han Wei ◽  
Zheng Han ◽  
Jianling Huang ◽  
Weidong Wang

Visual inspection of the workplace and timely reminders about failures to wear a safety helmet are of particular importance for avoiding worker injuries at the construction site. Video monitoring systems provide a large amount of unstructured on-site image data for this purpose, but they require a computer vision-based automatic solution for real-time detection. Although a growing body of literature has developed deep learning-based helmet detectors for traffic surveillance, an appropriate solution for this industrial application has received less attention, given the complex scenes at construction sites. In this regard, we develop a deep learning-based method for the real-time detection of safety helmets at the construction site. The presented method uses the SSD-MobileNet algorithm, which is based on convolutional neural networks. A dataset containing 3261 images of safety helmets, collected from two sources, i.e., manual capture from the workplace video monitoring system and open images obtained with web crawler technology, is established and released to the public. The image set is divided into a training set, a validation set, and a test set with a sampling ratio of nearly 8:1:1. The experimental results demonstrate that the presented deep learning-based model using the SSD-MobileNet algorithm is capable of detecting the unsafe practice of failing to wear a helmet at the construction site with satisfactory accuracy and efficiency.
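
As a rough sketch of the detection step, the code below runs an off-the-shelf SSD + MobileNet variant from torchvision (SSDLite with a MobileNetV3 backbone) on a single frame and also illustrates the approximate 8:1:1 split of the 3261-image dataset. The model stands in for the paper's SSD-MobileNet detector; a real deployment would first fine-tune it on the helmet dataset, and the score threshold is an assumption.

```python
# A minimal sketch using torchvision's SSDLite + MobileNetV3 as a stand-in for the
# paper's SSD-MobileNet; the pretrained weights are COCO weights, so fine-tuning on
# the helmet dataset would be required before helmet classes can be detected.
import torch
import torchvision

model = torchvision.models.detection.ssdlite320_mobilenet_v3_large(weights="DEFAULT")
model.eval()

def detect(frame: torch.Tensor, score_threshold: float = 0.5):
    """Run detection on a (3, H, W) float image in [0, 1] and keep confident boxes."""
    with torch.no_grad():
        output = model([frame])[0]
    keep = output["scores"] > score_threshold
    return output["boxes"][keep], output["labels"][keep], output["scores"][keep]

# The nearly 8:1:1 split of the 3261-image dataset could be drawn as follows.
indices = torch.randperm(3261)
train_idx, val_idx, test_idx = indices[:2609], indices[2609:2935], indices[2935:]
```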

