Real-Time Identification of Rice Weeds by UAV Low-Altitude Remote Sensing Based on Improved Semantic Segmentation Model

2021 ◽  
Vol 13 (21) ◽  
pp. 4370
Author(s):  
Yubin Lan ◽  
Kanghua Huang ◽  
Chang Yang ◽  
Luocheng Lei ◽  
Jiahang Ye ◽  
...  

Real-time analysis of UAV low-altitude remote sensing images at airborne terminals facilitates the timely monitoring of weeds in farmland. Aiming at the real-time identification of rice weeds by UAV low-altitude remote sensing, two improved identification models, MobileNetV2-UNet and FFB-BiSeNetV2, were proposed based on the semantic segmentation models U-Net and BiSeNetV2, respectively. The MobileNetV2-UNet model focuses on reducing the parameter count and computational cost of the original model, while the FFB-BiSeNetV2 model focuses on improving the segmentation accuracy of the original model. In this study, we first tested and compared the segmentation accuracy and operating efficiency of the models before and after improvement on a computer platform, then ported the improved models to the embedded hardware platform Jetson AGX Xavier and used TensorRT to optimize the model structure and improve inference speed. Finally, the real-time segmentation performance of the two improved models on rice weeds was further verified with the collected low-altitude remote sensing video data. The results show that, on the computer platform, the MobileNetV2-UNet model reduced the number of network parameters, model size, and floating-point operations by 89.12%, 86.16%, and 92.6%, respectively, and increased inference speed by 2.77 times compared with the U-Net model. The FFB-BiSeNetV2 model improved segmentation accuracy over the BiSeNetV2 model, achieving the highest pixel accuracy and mean Intersection over Union of 93.09% and 80.28%, respectively. On the embedded hardware platform, the optimized MobileNetV2-UNet and FFB-BiSeNetV2 models reached single-image inference speeds of 45.05 FPS and 40.16 FPS, respectively, at FP16 weight precision, both meeting the performance requirements of real-time identification. The two methods proposed in this study enable the real-time identification of rice weeds under UAV low-altitude remote sensing and provide a reference for the subsequent integrated operation of plant protection drones in real-time rice weed identification and precision spraying.
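Below is a minimal PyTorch sketch of the general idea behind a MobileNetV2-UNet: a lightweight MobileNetV2 backbone replaces the heavy U-Net encoder, and small decoder blocks fuse its intermediate feature maps through skip connections. This is not the authors' released code; the skip-connection indices, decoder widths, and the three-class output (background/rice/weed) are illustrative assumptions.

```python
# Sketch of a U-Net with a MobileNetV2 encoder (assumed architecture, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import mobilenet_v2


class DecoderBlock(nn.Module):
    """Upsample, concatenate the encoder skip feature, then refine with two 3x3 convs."""
    def __init__(self, in_ch, skip_ch, out_ch):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch + skip_ch, out_ch, 3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        )

    def forward(self, x, skip):
        x = F.interpolate(x, size=skip.shape[2:], mode="bilinear", align_corners=False)
        return self.conv(torch.cat([x, skip], dim=1))


class MobileNetV2UNet(nn.Module):
    def __init__(self, num_classes=3):  # e.g. background / rice / weed (assumption)
        super().__init__()
        # In practice ImageNet-pretrained weights would be loaded here.
        backbone = mobilenet_v2(weights=None).features
        # Slice the backbone so intermediate feature maps can be reused as skips.
        self.stage1 = backbone[:2]     # 1/2 resolution, 16 channels
        self.stage2 = backbone[2:4]    # 1/4 resolution, 24 channels
        self.stage3 = backbone[4:7]    # 1/8 resolution, 32 channels
        self.stage4 = backbone[7:14]   # 1/16 resolution, 96 channels
        self.stage5 = backbone[14:18]  # 1/32 resolution, 320 channels
        self.up4 = DecoderBlock(320, 96, 96)
        self.up3 = DecoderBlock(96, 32, 32)
        self.up2 = DecoderBlock(32, 24, 24)
        self.up1 = DecoderBlock(24, 16, 16)
        self.head = nn.Conv2d(16, num_classes, kernel_size=1)

    def forward(self, x):
        s1 = self.stage1(x)
        s2 = self.stage2(s1)
        s3 = self.stage3(s2)
        s4 = self.stage4(s3)
        s5 = self.stage5(s4)
        d = self.up4(s5, s4)
        d = self.up3(d, s3)
        d = self.up2(d, s2)
        d = self.up1(d, s1)
        logits = self.head(d)
        # Restore full input resolution (decoder output is at 1/2 scale).
        return F.interpolate(logits, scale_factor=2, mode="bilinear", align_corners=False)


if __name__ == "__main__":
    model = MobileNetV2UNet(num_classes=3).eval()
    with torch.no_grad():
        out = model(torch.randn(1, 3, 512, 512))
    print(out.shape)  # torch.Size([1, 3, 512, 512])
```

A model of this shape could then be exported (e.g. via ONNX) and built into a TensorRT engine with FP16 precision for deployment on the Jetson AGX Xavier, which is the deployment path the abstract describes.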

IEEE Access ◽  
2021 ◽  
Vol 9 ◽  
pp. 28349-28360
Author(s):  
Jiali Cai ◽  
Chunjuan Liu ◽  
Haowen Yan ◽  
Xiaosuo Wu ◽  
Wanzhen Lu ◽  
...  

Talanta ◽  
2018 ◽  
Vol 178 ◽  
pp. 743-750 ◽  
Author(s):  
O.I. Guliy ◽  
B.D. Zaitsev ◽  
I.A. Borodina ◽  
A.M. Shikhabudinov ◽  
S.A. Staroverov ◽  
...  

2016 ◽  
Vol 18 (suppl_4) ◽  
pp. iv8-iv8 ◽  
Author(s):  
B. Vaqas ◽  
M. Short ◽  
I. Patel ◽  
U. Faiz ◽  
H. Zeng ◽  
...  

2011 ◽  
Vol 160 (1) ◽  
pp. 929-935 ◽  
Author(s):  
Pu-Hong Wang ◽  
Jian-Hua Yu ◽  
Ya-Bin Zhao ◽  
Zhi-Jun Li ◽  
Guang-Qin Li

2012 ◽  
Vol 512-515 ◽  
pp. 2670-2675
Author(s):  
Yuan Bin Yu ◽  
Hai Tao Min ◽  
Xiao Dong Qu ◽  
Jun Guo

μC/OS-II is an RTOS with notable advantages such as high reliability, strong real-time performance, and easy code scalability. This paper successfully ported it to the battery management system (BMS) of an electric vehicle on a hardware platform based on the MC9S12XDP512 MCU. Using quantitative comparisons under specific tests, the paper also verified the real-time and reliability advantages of μC/OS-II.


2012 ◽  
Vol 433-440 ◽  
pp. 2571-2577
Author(s):  
Yu Qin Zhao ◽  
Run Shen Zhang ◽  
Yong Kun Li ◽  
Jun Jie Zang

The hardware and software of a lane boundary identification system based on the DM642 hardware platform were designed. The system performs image data acquisition, optimization of the acquired data with an ant algorithm, comparison of candidate values against an objective function, and output of the lane boundary image. To further improve real-time performance, the code was optimized using compiler optimization, intrinsic functions, data packing, software pipelining, and assembly code; the specific optimization methods and the effect of each step on the algorithm's real-time performance are given. As a result, the system's identification time was reduced from 101.75 ms to 20.55 ms, effectively improving its real-time capability and laying a good foundation for industrial application.


2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Zhuangzhuang Sun ◽  
Yunlin Song ◽  
Qing Li ◽  
Jian Cai ◽  
Xiao Wang ◽  
...  

Patchy stomata are a common and characteristic phenomenon in plants. Understanding and studying the regulation mechanism of patchy stomata are of great significance for further supplementing and improving stomatal theory. Currently, the common methods for observing stomatal behavior are based on static images, which makes it difficult to capture the dynamic changes of stomata. The rapid development of portable microscopes and computer vision algorithms brings new opportunities for observing stomatal movement. In this study, a stomatal behavior observation system (SBOS) was proposed for real-time observation and automatic analysis of each single stoma in wheat leaves using object tracking and semantic segmentation methods. The SBOS includes two modules: the real-time observation module and the automatic analysis module. The real-time observation module shoots videos of stomatal dynamic changes. In the automatic analysis module, object tracking locates every single stoma accurately to obtain stomatal pictures arranged in time series, and semantic segmentation precisely quantifies the stomatal opening area (SOA), with a mean pixel accuracy (MPA) of 0.8305 and a mean intersection over union (MIoU) of 0.5590 on the testing set. Moreover, we designed a graphical user interface (GUI) so that researchers can use the automatic analysis module smoothly. To verify the performance of the SBOS, the dynamic changes of stomata were observed and analyzed under chilling. Finally, we analyzed the correlation between gas exchange and SOA under drought stress; the correlation coefficients between mean SOA and net photosynthetic rate (Pn), intercellular CO2 concentration (Ci), stomatal conductance (Gs), and transpiration rate (Tr) are 0.93, 0.96, 0.96, and 0.97, respectively.
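As a loose illustration of the automatic analysis step described above (not the SBOS code itself), the sketch below turns per-frame segmentation masks of a single tracked stoma into an SOA time series and correlates it with a gas-exchange trace; the masks, pixel scale, and conductance values are random placeholders.

```python
# Sketch: SOA series from per-frame masks, then Pearson correlation with gas exchange.
import numpy as np
from scipy.stats import pearsonr


def stomatal_opening_area(mask, um2_per_pixel=1.0):
    """Area of the segmented stomatal pore: foreground pixel count times pixel area."""
    return float(np.count_nonzero(mask)) * um2_per_pixel


# One boolean mask per video frame for a single tracked stoma (random stand-ins here).
rng = np.random.default_rng(0)
masks = [rng.random((64, 64)) > 0.7 for _ in range(50)]
soa_series = np.array([stomatal_opening_area(m, um2_per_pixel=0.25) for m in masks])

# A matching gas-exchange trace, e.g. stomatal conductance Gs (placeholder values).
gs_series = 0.002 * soa_series + rng.normal(0.0, 0.05, size=soa_series.size)

r, p = pearsonr(soa_series, gs_series)
print(f"mean SOA = {soa_series.mean():.1f} um^2, r(SOA, Gs) = {r:.2f} (p = {p:.3g})")
```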


2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Shida Zhao ◽  
Guangzhao Hao ◽  
Yichi Zhang ◽  
Shucai Wang

Accurate recognition of the three parts of the sheep carcass is key to research on mutton-cutting robots. The parts of the sheep carcass are connected to one another and share similar features, which makes them difficult to identify and detect; however, the development of deep-learning-based image semantic segmentation makes it possible to explore this technology for real-time recognition of the three parts of the sheep carcass. Based on the ICNet, we propose a real-time semantic segmentation method for sheep carcass images. We first acquire images of the sheep carcass and expand the image data with augmentation; after normalization, we annotate the images with LabelMe and build the sheep carcass image dataset. We then establish the ICNet model and train it with transfer learning. The segmentation accuracy, MIoU, and average processing time of a single image are obtained and used as the evaluation criteria for segmentation performance. In addition, we verify the generalization ability of the ICNet on the sheep carcass image dataset through segmentation experiments at different image brightness levels. Finally, U-Net, DeepLabv3, PSPNet, and Fast-SCNN are introduced for comparative experiments to further verify the segmentation performance of the ICNet. The experimental results show that, for the sheep carcass image dataset, the segmentation accuracy and MIoU of our method are 97.68% and 88.47%, respectively, and the single-image processing time is 83 ms. The MIoU of U-Net and DeepLabv3 is 0.22% and 0.03% higher than that of the ICNet, respectively, but their single-image processing times are longer by 186 ms and 430 ms. Compared with PSPNet and Fast-SCNN, the MIoU of the ICNet is higher by 1.25% and 4.49%, respectively, while its single-image processing time is 469 ms shorter than that of PSPNet and 7 ms longer than that of Fast-SCNN.
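For context, the sketch below shows how the two headline metrics in such a comparison are typically computed: mean Intersection over Union from an accumulated confusion matrix, and average per-image processing time. The segmentation model is a random stand-in and the four-class setup (background plus three carcass parts) is an assumption, so the printed numbers are not the paper's results.

```python
# Sketch: MIoU and mean per-image time for a segmentation test set (stand-in data/model).
import time
import numpy as np


def confusion_matrix(pred, label, num_classes):
    """Accumulate a num_classes x num_classes confusion matrix from flat class maps."""
    valid = (label >= 0) & (label < num_classes)
    idx = num_classes * label[valid].astype(int) + pred[valid].astype(int)
    return np.bincount(idx, minlength=num_classes ** 2).reshape(num_classes, num_classes)


def mean_iou(conf):
    """Per-class IoU = TP / (TP + FP + FN), averaged over classes that occur."""
    tp = np.diag(conf)
    denom = conf.sum(axis=0) + conf.sum(axis=1) - tp
    iou = np.where(denom > 0, tp / np.maximum(denom, 1), np.nan)
    return float(np.nanmean(iou))


rng = np.random.default_rng(1)


def segment(image):
    """Placeholder for a trained segmentation network's forward pass (e.g. an ICNet)."""
    return rng.integers(0, 4, size=image.shape[:2])


# Stand-in test set: 10 RGB images with ground-truth masks for 4 classes.
images = rng.random((10, 256, 256, 3))
labels = rng.integers(0, 4, size=(10, 256, 256))

conf = np.zeros((4, 4), dtype=np.int64)
times = []
for img, lab in zip(images, labels):
    start = time.perf_counter()
    pred = segment(img)  # in practice: the model inference call being benchmarked
    times.append(time.perf_counter() - start)
    conf += confusion_matrix(pred.ravel(), lab.ravel(), num_classes=4)

print(f"MIoU = {mean_iou(conf):.4f}, mean per-image time = {1000 * np.mean(times):.2f} ms")
```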

