Research Progress of Crop Disease Image Recognition Based on Wireless Network Communication and Deep Learning

2021 ◽  
Vol 2021 ◽  
pp. 1-15
Author(s):  
Yuanhui Yu

Traditional digital image processing has inherent limitations: it relies on manually designed features, which consumes labor and material resources, handles only a single crop type, and yields poor results. Finding an efficient, fast, real-time disease image recognition method is therefore highly valuable. Deep learning is a machine learning approach that automatically learns representative features and achieves strong results in image recognition. The purpose of this paper is to apply deep learning to the identification of crop pests and diseases and to develop an efficient, fast, real-time disease image recognition method. Deep learning is a recently developed discipline whose goal is to learn diverse feature representations directly from data samples: through a data-driven series of nonlinear transformations, raw data are mapped from specific to abstract, from general to specialized semantics, and from low-level to high-level feature forms. This paper analyzes classical and state-of-the-art neural network architectures based on deep learning theory. Because networks designed for natural image classification are not well suited to crop pest and disease identification, we improve the network structure to balance recognition speed and recognition accuracy. We examine the influence of the pest and disease feature extraction layer on recognition performance and, after comparing the advantages and disadvantages of the inner-product layer and the global average pooling layer, adopt the inner-product layer as the main structure of the feature extraction layer. We also analyze loss functions such as Softmax Loss, Center Loss, and Angular Softmax Loss for pest identification.
To address the difficulties these loss functions face in training, convergence, and computation, we improve the loss function so that intra-class distances become smaller and inter-class distances larger, introducing techniques such as feature normalization and weight normalization. Experimental results show that the method effectively enhances the feature representation of pests and diseases and thus improves the recognition rate. Moreover, it makes training the pest identification network simpler while further improving the pest and disease recognition rate.
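The feature-normalization and weight-normalization trick mentioned above can be sketched in a few lines of NumPy. This is a minimal illustration of cosine-similarity logits feeding a softmax cross-entropy loss, not the authors' implementation; the scale factor of 30 is an assumption commonly used with such losses:

```python
import numpy as np

def normalized_softmax_logits(features, weights, scale=30.0):
    """Cosine logits: L2-normalize features and per-class weights,
    then scale. Each logit equals scale * cos(theta), so classes are
    separated by angle rather than by magnitude."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=0, keepdims=True)
    return scale * f @ w

def softmax_cross_entropy(logits, labels):
    """Numerically stable softmax cross-entropy over a batch."""
    shifted = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(shifted) / np.exp(shifted).sum(axis=1, keepdims=True)
    return -np.mean(np.log(probs[np.arange(len(labels)), labels]))
```

Because both features and weights are unit-normalized, the logits are bounded by the scale, which is what makes training on angular margins tractable.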

2020 ◽  
Vol 39 (4) ◽  
pp. 5699-5711
Author(s):  
Shirong Long ◽  
Xuekong Zhao

The smart teaching mode overcomes the shortcomings of traditional online and offline teaching, but it still falls short in the real-time extraction of teacher and student features. In view of this, this study uses particle swarm image recognition and deep learning to process intelligent classroom video, extracting classroom task features in real time and sending them to the teacher. To overcome the premature convergence of the standard particle swarm optimization (PSO) algorithm, an improved multi-swarm PSO strategy is proposed: the algorithm is combined with useful attributes of other algorithms to increase particle diversity, enhance the global search ability of the particles, and achieve effective feature extraction. The research indicates that the proposed method is practically effective and can serve as a theoretical reference for subsequent related research.
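For reference, the standard PSO update that the paper's multi-swarm variant builds on can be sketched as follows. This is the textbook algorithm, not the authors' improved version; the search bounds, inertia, and acceleration coefficients are conventional defaults, chosen here as assumptions:

```python
import numpy as np

def pso_minimize(f, dim, n_particles=20, iters=100,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Standard PSO: each particle is pulled toward its personal best
    and the swarm's global best. Premature convergence happens when
    diversity collapses around the global best, which is what the
    paper's multi-swarm strategy targets."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))   # positions
    v = np.zeros_like(x)                         # velocities
    pbest = x.copy()
    pbest_val = np.array([f(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()         # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        vals = np.array([f(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()
```

On a smooth test function such as the sphere, this converges quickly; on multimodal feature-extraction objectives it is prone to the premature convergence the paper addresses.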


2021 ◽  
Vol 3 (3) ◽  
pp. 276-290
Author(s):  
I Jeena Jacob ◽  
P Ebby Darney

The Internet of Things (IoT) is an ecosystem comprising many devices and connections, a large number of users, and a massive amount of data. Deep learning is especially well suited to these scenarios because of its fit for "big data" problems and future concerns. Nonetheless, guaranteeing security and privacy has emerged as a critical challenge for IoT administration. In many recent cases, deep learning algorithms have proven increasingly effective at performing security assessments for IoT devices without resorting to handcrafted rules. This work integrates principal component analysis (PCA) for feature extraction with superior performance. One primary objective is to gather comprehensive survey data on the types of IoT deployments and their security and privacy challenges; the other is to achieve a high recognition rate for IoT-based image recognition. Deep learning is applied with PCA feature extraction to improve accuracy. A CNN was trained and evaluated on an IoT image dataset using multiple methodologies. The first step investigates the application of deep learning to IoT image acquisition; for IoT image registration, the usefulness of deep learning is then evaluated for improving image recognition with good testing accuracy. The findings on applying deep learning in IoT systems are summarized in an image-based identification method that introduces a variety of appropriate criteria.
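The PCA front end described above reduces raw image features to their principal components before classification. A minimal SVD-based sketch (an illustration, not the paper's pipeline) looks like this:

```python
import numpy as np

def pca_fit_transform(X, n_components):
    """PCA via SVD on mean-centered data. Returns the projected
    features, the component matrix (rows are principal directions),
    and the mean needed to project new samples consistently."""
    mu = X.mean(axis=0)
    Xc = X - mu
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:n_components]
    return Xc @ components.T, components, mu
```

The reduced features `Xc @ components.T` would then be fed to the downstream classifier; keeping `components` and `mu` lets test samples be projected with the same transform.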


Sensors ◽  
2021 ◽  
Vol 21 (2) ◽  
pp. 555
Author(s):  
Jui-Sheng Chou ◽  
Chia-Hsuan Liu

Sand theft and illegal mining in river dredging areas have been a problem for decades. Increasing the use of artificial intelligence in dredging areas, building automated monitoring systems, and reducing human involvement can therefore effectively deter crime and lighten the workload of security guards. In this investigation, a smart dredging construction site system was developed using automated techniques suitable for various areas. The aim of the initial phase was to automate the audit work at the control point that manages trucks in river dredging areas. Images of dump trucks entering the control point were captured with monitoring equipment in the construction area. These images and the deep learning technique YOLOv3 were used to detect the positions of vehicle license plates. The framed license-plate images were then fed into an image classification model, C-CNN-L3, to identify the number of characters on the plate. Based on the classification result, each plate image was passed to the corresponding text recognition model, R-CNN-L3, to read the characters. Finally, the models of each stage were integrated into a real-time truck license plate recognition (TLPR) system; the single-character recognition rate was 97.59%, the overall recognition rate was 93.73%, and the speed was 0.3271 s/image. The TLPR system reduces the labor and time spent identifying license plates, effectively reducing the probability of crime and increasing the transparency, automation, and efficiency of frontline personnel's work. It is the first step toward automated truck management at the control point; subsequent development of system functions can advance dredging operations toward the goal of a smart construction site. By providing a vehicle LPR system intended to enable intelligent, highly efficient management for dredging-related departments, this paper contributes an objective approach to TLPR.
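The three-stage flow described above (plate detection, character-count classification, character recognition) amounts to a routing pipeline. The sketch below uses hypothetical stub callables in place of the paper's YOLOv3, C-CNN-L3, and R-CNN-L3 models, purely to show how the stages hand off to one another:

```python
def tlpr_pipeline(image, detect_plate, count_chars, recognize_chars):
    """Route an input image through the three TLPR stages.

    detect_plate    -- crops the license plate (YOLOv3 stage, stubbed)
    count_chars     -- predicts character count (C-CNN-L3 stage, stubbed)
    recognize_chars -- reads the text, given the count (R-CNN-L3 stage, stubbed)
    """
    plate_crop = detect_plate(image)
    n_chars = count_chars(plate_crop)
    return recognize_chars(plate_crop, n_chars)
```

Selecting the recognition model by predicted character count is what lets the system handle plates of different lengths without a single monolithic recognizer.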


Symmetry ◽  
2021 ◽  
Vol 13 (11) ◽  
pp. 2107
Author(s):  
Xin Wei ◽  
Huan Wan ◽  
Fanghua Ye ◽  
Weidong Min

In recent years, medical image segmentation (MIS) has made huge breakthroughs thanks to the success of deep learning. However, existing MIS algorithms still suffer from two types of uncertainty: (1) uncertainty over the plausible segmentation hypotheses and (2) uncertainty over segmentation performance. Both affect the effectiveness of an MIS algorithm and, in turn, the reliability of medical diagnosis. Many studies have addressed the former but ignore the latter. We therefore propose the hierarchical predictable segmentation network (HPS-Net), which consists of a new network structure, a new loss function, and a cooperative training mode. To the best of our knowledge, HPS-Net is the first network in the MIS area that generates both diverse segmentation hypotheses, addressing the first type of uncertainty, and performance predictions for those hypotheses, addressing the second. Extensive experiments were conducted on the LIDC-IDRI and ISIC2018 datasets. The results show that HPS-Net achieves the highest Dice score among the benchmark methods, i.e., the best segmentation performance, and confirm that the proposed HPS-Net can effectively predict TNR and TPR.
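The evaluation quantities named above, the Dice score used for benchmarking and the TPR/TNR values HPS-Net learns to predict, have standard definitions on binary masks; a minimal NumPy sketch (not the paper's code):

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice coefficient: 2|A & B| / (|A| + |B|) on binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def tpr_tnr(pred, target):
    """True-positive rate (sensitivity) and true-negative rate
    (specificity) of a predicted mask against the ground truth."""
    pred, target = pred.astype(bool), target.astype(bool)
    tp = np.logical_and(pred, target).sum()
    tn = np.logical_and(~pred, ~target).sum()
    tpr = tp / max(target.sum(), 1)
    tnr = tn / max((~target).sum(), 1)
    return tpr, tnr
```

Predicting TPR and TNR per hypothesis is what lets a downstream user judge how much to trust each segmentation.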


2021 ◽  
Vol 2083 (4) ◽  
pp. 042007
Author(s):  
Xiaowen Liu ◽  
Juncheng Lei

Abstract Image recognition technology mainly comprises image feature extraction and classification. Feature extraction is the key link, as it largely determines recognition performance. Deep learning builds a hierarchical model structure, analogous to the human brain, that extracts features from the data layer by layer; applying it to image recognition can further improve accuracy. Based on the idea of clustering, this article establishes a Gaussian mixture model for engineering image information in RGB color space through offline learning with the expectation-maximization algorithm, obtaining a mixture-cluster representation of the image information. A sparse Gaussian machine learning model in the YCrCb color space then learns the distribution of engineering images quickly online, and an engineering image recognizer based on multi-color-space information is designed.
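The two color spaces involved are related by a fixed linear transform. A sketch of the full-range BT.601 (JPEG-convention) RGB-to-YCrCb conversion, which is one common convention and assumed here since the abstract does not specify which variant the authors use:

```python
def rgb_to_ycrcb(r, g, b):
    """Full-range BT.601 RGB -> YCrCb (JPEG convention).
    Y is luma; Cr and Cb are red- and blue-difference chroma,
    offset by 128 so neutral gray maps to (Y, 128, 128)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    return y, cr, cb
```

Working in YCrCb separates brightness from chroma, which is why a color-distribution model in that space can be both sparser and more robust to illumination than one fit directly in RGB.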


2020 ◽  
Vol 79 (41-42) ◽  
pp. 31027-31047
Author(s):  
Raj Silwal ◽  
Abeer Alsadoon ◽  
P. W. C. Prasad ◽  
Omar Hisham Alsadoon ◽  
Ammar Al-Qaraghuli

2020 ◽  
Vol 79 (37-38) ◽  
pp. 27867-27890 ◽  
Author(s):  
Bishal Bhandari ◽  
Abeer Alsadoon ◽  
P. W. C. Prasad ◽  
Salma Abdullah ◽  
Sami Haddad

Energies ◽  
2020 ◽  
Vol 13 (22) ◽  
pp. 6104
Author(s):  
Bernardo Calabrese ◽  
Ramiro Velázquez ◽  
Carolina Del-Valle-Soto ◽  
Roberto de Fazio ◽  
Nicola Ivan Giannoccaro ◽  
...  

This paper introduces a novel low-cost solar-powered wearable assistive technology (AT) device that provides continuous, real-time object recognition to help visually impaired (VI) people find objects in daily life. The system consists of three major components: a miniature low-cost camera, a system-on-module (SoM) computing unit, and an ultrasonic sensor. The first is worn on the user's eyeglasses and acquires real-time video of the nearby space. The second is worn as a belt and runs deep learning methods and spatial algorithms that process the camera video to detect and recognize objects. The third assists in positioning the objects found in the surrounding space. The device provides audible descriptive sentences as feedback, covering the objects recognized and their position relative to the user's gaze. After a power consumption analysis, a wearable solar harvesting system integrated with the AT device was designed and tested to extend energy autonomy across the different operating modes and scenarios. Experimental results with the low-cost AT device demonstrate accurate, reliable real-time object identification, with an 86% correct recognition rate and a 215 ms average image-processing time in the high-speed SoM operating mode. The system recognizes the 91 object classes of the Microsoft Common Objects in Context (COCO) dataset plus several custom objects and human faces. In addition, a simple and scalable methodology for assembling image datasets and training convolutional neural networks (CNNs) is introduced to add objects to the system and expand its repertoire. Comprehensive training with 100 images per target object achieves an 89% recognition rate, while fast training with only 12 images still achieves an acceptable 55%.
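The last stage of the pipeline above, turning a recognized object and its position relative to the user's gaze into an audible sentence, can be sketched as a small formatter. The thresholds and phrasing here are hypothetical, not taken from the paper:

```python
def describe_object(label, bearing_deg, distance_m):
    """Format a spoken description from a recognized object label,
    its bearing relative to the user's gaze (degrees, negative = left),
    and its ultrasonic-sensor distance in meters."""
    if bearing_deg < -15:
        side = "to your left"
    elif bearing_deg > 15:
        side = "to your right"
    else:
        side = "ahead of you"
    return f"{label} {side}, about {distance_m:.1f} meters away"
```

The resulting string would be handed to a text-to-speech engine; keeping the sentence template fixed makes the feedback predictable for the user.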


Sensors ◽  
2019 ◽  
Vol 19 (5) ◽  
pp. 982 ◽  
Author(s):  
Hyo Lee ◽  
Ihsan Ullah ◽  
Weiguo Wan ◽  
Yongbin Gao ◽  
Zhijun Fang

Make and model recognition (MMR) of vehicles plays an important role in automatic vision-based systems. This paper proposes a novel deep learning approach to MMR using the SqueezeNet architecture. The frontal views of vehicle images are first extracted and fed into a deep network for training and testing. The SqueezeNet variant with bypass connections between the Fire modules is employed, which makes our MMR system more efficient than the vanilla architecture. Experimental results on our collected large-scale vehicle dataset indicate that the proposed model achieves a 96.3% rank-1 recognition rate with an economical inference time of 108.8 ms. The deployed model requires less than 5 MB of storage and is therefore highly viable for real-time applications.
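The sub-5 MB footprint follows from the Fire module design: a narrow 1x1 "squeeze" layer feeds parallel 1x1 and 3x3 "expand" layers, replacing most 3x3 filters with far cheaper 1x1 ones. A parameter-count sketch (illustrative arithmetic; the filter sizes below follow the published SqueezeNet fire2 configuration, not necessarily this paper's exact variant):

```python
def fire_module_params(c_in, s1x1, e1x1, e3x3):
    """Parameter count (weights + biases) of a SqueezeNet Fire module:
    a 1x1 squeeze conv followed by parallel 1x1 and 3x3 expand convs."""
    squeeze = c_in * s1x1 + s1x1        # 1x1 squeeze
    expand1 = s1x1 * e1x1 + e1x1        # 1x1 expand
    expand3 = s1x1 * e3x3 * 9 + e3x3    # 3x3 expand
    return squeeze + expand1 + expand3

def plain_conv_params(c_in, c_out, k=3):
    """Equivalent plain kxk convolution, for comparison."""
    return c_in * c_out * k * k + c_out
```

With fire2's sizes (96 input channels, 16 squeeze filters, 64 + 64 expand filters) the module needs roughly an order of magnitude fewer parameters than a plain 3x3 convolution producing the same 128 output channels, which is what keeps the deployed model small.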

