Early real-time detection algorithm of tomato diseases and pests in the natural environment

Plant Methods ◽  
2021 ◽  
Vol 17 (1) ◽  
Author(s):  
Xuewei Wang ◽  
Jun Liu ◽  
Xiaoning Zhu

Abstract Background: Early object detection of crop diseases and pests in the natural environment is an important research direction in the fields of computer vision, complex image processing and machine learning. Because early images of tomato diseases and pests in the natural environment are complex, traditional methods cannot achieve real-time, accurate detection. Results: To handle the complex backgrounds of early-stage tomato disease and pest images in the natural environment, an improved object detection algorithm based on YOLOv3 is proposed for early real-time detection. First, dilated convolution layers replace convolution layers in the backbone network to maintain high resolution and a large receptive field and to improve small-object detection. Second, in the detection network, occluded tomato disease and pest objects are retained according to the intersection over union (IoU) of candidate boxes and a linearly decaying confidence score predicted by multiple grids, solving the problem of mutually occluded objects. Third, the network is made lightweight through convolution factorization, reducing model size and parameter count. Finally, a balance factor is introduced to optimize the weight of small objects in the loss function. Test results for nine common tomato diseases and pests under six different background conditions are statistically analyzed. The proposed method achieves an F1 value of 94.77%, an AP value of 91.81%, a false detection rate of only 2.1%, and a detection time of only 55 ms. The test results show that the method is suitable for early detection of tomato diseases and pests using large-scale video images collected by the agricultural Internet of Things.
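The paper does not publish its implementation, but the "linear attenuation confidence score" step it describes follows the soft-NMS idea: instead of discarding a candidate box that overlaps a higher-scoring box, its confidence is decayed in proportion to the overlap, so occluded objects can survive. A minimal sketch, assuming boxes as (x1, y1, x2, y2) tuples (the thresholds here are illustrative, not the paper's values):

```python
def iou(a, b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def linear_decay_nms(boxes, scores, iou_thresh=0.5, score_thresh=0.1):
    """Keep overlapping boxes with linearly decayed confidence instead of
    suppressing them outright, so mutually occluded objects are retained."""
    scores = list(scores)
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        remaining = []
        for i in order:
            o = iou(boxes[best], boxes[i])
            if o > iou_thresh:
                scores[i] *= (1.0 - o)  # linear attenuation by overlap
            if scores[i] >= score_thresh:
                remaining.append(i)
        order = sorted(remaining, key=lambda i: scores[i], reverse=True)
    return keep, scores
```

With two heavily overlapping detections, classic NMS would drop the lower-scoring one; here it is kept with a reduced score as long as it stays above `score_thresh`.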
Conclusions: At present, most computer-vision-based object detection of diseases and pests must be carried out in a controlled environment (for example, picking diseased leaves and placing them under supplementary lighting to obtain ideal imaging conditions). For images taken by Internet of Things monitoring cameras in the field, factors such as light intensity and weather changes make the images highly variable, and existing methods cannot work reliably on them. The proposed method has been applied to actual tomato production scenarios and shows good detection performance. The experimental results show that the method improves the detection of small objects and of occluded leaves, and that its recognition under different background conditions is better than that of existing object detection algorithms. The results show that the method is feasible for detecting tomato diseases and pests in the natural environment.
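The receptive-field gain from the dilated-convolution substitution described in the abstract can be checked with standard arithmetic (these formulas are generic, not taken from the paper): a k × k convolution with dilation rate d behaves like a kernel of effective size k + (k − 1)(d − 1), without adding parameters.

```python
def effective_kernel(k, d):
    """Effective kernel size of a k x k convolution with dilation rate d."""
    return k + (k - 1) * (d - 1)

def receptive_field(layers):
    """Receptive field of stacked convolutions, each given as
    (kernel, dilation, stride)."""
    rf, jump = 1, 1
    for k, d, s in layers:
        rf += (effective_kernel(k, d) - 1) * jump
        jump *= s
    return rf
```

For example, three stride-1 3 × 3 layers see a 7 × 7 region; with dilation 2 the same three layers see 13 × 13, which is why the substitution helps small-object context without lowering resolution.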

Author(s):  
Wei Sun ◽  
Liang Dai ◽  
Xiaorui Zhang ◽  
Pengshuai Chang ◽  
Xiaozheng He

2021 ◽  
pp. 1-26
Author(s):  
E. Çetin ◽  
C. Barrado ◽  
E. Pastor

Abstract The number of unmanned aerial vehicles (UAVs, also known as drones) in the airspace worldwide has increased dramatically for tasks such as surveillance, reconnaissance, shipping and delivery. However, even a small number of them, acting maliciously, can raise many security risks. Recent Artificial Intelligence (AI) capabilities for object detection can be very useful for identifying and classifying drones flying in the airspace and, in particular, are a good solution against malicious drones. A number of counter-drone solutions are being developed, but the cost of ground-based drone detection systems can be very high, depending on the number of sensors deployed and the fusion algorithms required. We propose a low-cost counter-drone solution composed solely of a guard-drone that should be able to detect, locate and eliminate any malicious drone. In this paper, a state-of-the-art object detection algorithm is used to train the system to detect drones. Three existing object detection models are improved by transfer learning and tested for real-time drone detection. Training is done with a new dataset of drone images, constructed automatically from a very realistic flight simulator. While flying, the guard-drone captures random images of the area while a malicious drone is flying at the same time. The drone images are auto-labelled using the location and attitude information available in the simulator for both drones; the world coordinates of the malicious drone's position are then projected into image pixel coordinates. The training and test results show a minimum accuracy improvement of 22% with respect to state-of-the-art object detection models, a promising result that enables a step towards the construction of a fully autonomous counter-drone system.
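The auto-labelling step projects the malicious drone's world position into the guard-drone camera's pixel coordinates. A minimal pinhole-camera sketch of that projection; the axis convention (camera looking along +z, rotated only by yaw) and the intrinsics are assumptions for illustration, not the simulator's actual camera model:

```python
import math

def world_to_pixel(point_w, cam_pos, yaw, fx, fy, cx, cy):
    """Project a 3-D world point into pixel coordinates for a camera at
    cam_pos, rotated by `yaw` about the vertical axis (pinhole model,
    no lens distortion). Returns None if the point is behind the camera."""
    # Translate the point into the camera frame.
    x = point_w[0] - cam_pos[0]
    y = point_w[1] - cam_pos[1]
    z = point_w[2] - cam_pos[2]
    # Rotate by -yaw so the camera's optical axis is +z.
    c, s = math.cos(-yaw), math.sin(-yaw)
    xc = c * x + s * z
    zc = -s * x + c * z
    yc = y
    if zc <= 0:
        return None  # behind the camera, not visible in the image
    # Perspective divide, then apply focal lengths and principal point.
    u = fx * xc / zc + cx
    v = fy * yc / zc + cy
    return u, v
```

A point on the optical axis lands on the principal point (cx, cy); the resulting pixel coordinates become the centre of the auto-generated bounding-box label.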


2020 ◽  
Vol 226 ◽  
pp. 02020
Author(s):  
Alexey V. Stadnik ◽  
Pavel S. Sazhin ◽  
Slavomir Hnatic

The performance of neural networks is one of the most important topics in the field of computer vision. In this work, we analyze the speed of object detection using the well-known YOLOv3 neural network architecture in different frameworks under different hardware configurations. The results allow us to formulate preliminary qualitative conclusions about the feasibility of various hardware scenarios for solving tasks in real-time environments.
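Speed comparisons of this kind hinge on measuring per-image latency consistently across frameworks. A framework-agnostic timing harness sketch; the `infer` callable is a stand-in for an actual YOLOv3 forward pass, and the warm-up/repeat counts are illustrative:

```python
import time

def benchmark(infer, inputs, warmup=2, repeats=10):
    """Average per-call latency in milliseconds. Warm-up calls are
    discarded so one-time initialisation (JIT, weight loading, GPU
    context) does not skew the measurement."""
    for x in inputs[:warmup]:
        infer(x)
    start = time.perf_counter()
    for _ in range(repeats):
        for x in inputs:
            infer(x)
    elapsed = time.perf_counter() - start
    return 1000.0 * elapsed / (repeats * len(inputs))
```

Averaging over many repeats, and using a monotonic clock such as `time.perf_counter`, keeps scheduler jitter from dominating millisecond-scale measurements.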


2020 ◽  
Vol 2020 ◽  
pp. 1-12
Author(s):  
Zuopeng Zhao ◽  
Zhongxin Zhang ◽  
Xinzheng Xu ◽  
Yi Xu ◽  
Hualin Yan ◽  
...  

It is necessary to improve the performance of object detection algorithms on resource-constrained embedded devices through lightweight design. To further improve recognition accuracy for small objects, this paper integrates a 5 × 5 depthwise separable convolution kernel into the MobileNetV2-SSDLite model, extracts features from two additional convolutional layers for detection, and designs a new lightweight object detection network, the Lightweight Microscopic Detection Network (LMS-DN). The network can be implemented on embedded devices such as the NVIDIA Jetson TX2. The experimental results show that LMS-DN requires fewer parameters and less computation than other popular object detection models while achieving higher recognition accuracy and stronger robustness to interference.
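Why depthwise separable convolution suits embedded devices can be seen from a parameter-count comparison (generic arithmetic, not the paper's exact layer sizes): a depthwise k × k filter per input channel followed by a 1 × 1 pointwise convolution replaces one dense k × k convolution.

```python
def standard_conv_params(k, c_in, c_out):
    """Weights in a standard k x k convolution (biases omitted)."""
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    """Depthwise k x k per input channel, then 1 x 1 pointwise mixing."""
    return k * k * c_in + c_in * c_out
```

For a 5 × 5 kernel with 128 input and 128 output channels, the standard form needs 409,600 weights versus 19,584 for the separable form, roughly a 21× reduction, which is the kind of saving that makes a 5 × 5 kernel affordable on a Jetson-class device.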


Author(s):  
Anwaar Ulhaq

Invasive species are significant threats to global agriculture and food security, being among the major causes of crop loss. An effective biosecurity policy requires full automation of the detection and habitat identification of potential pests and pathogens. Thermal imaging cameras mounted on Unmanned Aerial Vehicles (UAVs) can observe and detect pest animals and their habitats, and estimate their population size, around the clock. However, their effectiveness is limited by the manual detection of cryptic species in hours of captured flight video, failure to disclose habitats, and the requirement for expensive high-resolution cameras. The cost-efficiency trade-off therefore often restricts the use of these systems. In this paper, we present an invasive animal species detection system that uses the cost-effectiveness of consumer-level cameras while harnessing the power of transfer learning and an optimised small-object detection algorithm. Our proposed optimised object detection algorithm, named Optimised YOLO (OYOLO), enhances YOLO (You Only Look Once) by improving its training and structure for the remote detection of elusive targets. Our system, trained on the massive data collected from New South Wales and Western Australia, can detect invasive species (rabbits, kangaroos and pigs) in real time with a higher probability of detection (85–100%) compared to manual detection. This work will enhance the visual analysis of pest species, performs well on low-, medium- and high-resolution thermal imagery, and is equally accessible to all stakeholders and end-users in Australia via a public cloud.

