Apple tree canopy leaf spatial location automated extraction based on point cloud data

2019 ◽ Vol 166 ◽ pp. 104975
Author(s): Cailing Guo, Gang Liu, Weijie Zhang, Juan Feng

Sensors ◽ 2021 ◽ Vol 21 (12) ◽ pp. 4252
Author(s): Chenchen Gu, Changyuan Zhai, Xiu Wang, Songlin Wang

Canopy characterization is essential for target-oriented spray, which minimizes pesticide residues in fruits, pesticide wastage, and pollution. In this study, a novel canopy meshing-profile characterization (CMPC) method based on light detection and ranging (LiDAR) point-cloud data was designed for high-precision canopy volume calculations. First, the accuracy and viability of the method were tested on a simulated canopy. The results show that the CMPC method can accurately characterize the 3D profiles of the simulated canopy. These profiles were similar to those obtained from manual measurements, and the measured canopy volume achieved an accuracy of 93.3%. Second, the feasibility of the method was verified in a field experiment in which the canopy 3D stereogram and cross-sectional profiles were obtained via CMPC. The results show that the 3D stereogram exhibited a high degree of similarity with the tree canopy, although there were some differences at the edges, where the canopy was sparse. The CMPC-derived cross-sectional profiles matched the manually measured results well. The CMPC method achieved an accuracy of 96.3% when the tree canopy was detected by LiDAR at a moving speed of 1.2 m/s, and the accuracy was virtually unchanged when the moving speed was reduced to 1 m/s. No detection lag was observed when comparing the start and end positions of the cross-section. Different CMPC grid sizes were also evaluated: small grid sizes (0.01 m × 0.01 m and 0.025 m × 0.025 m) are suitable for characterizing the finer details of a canopy, whereas grid sizes of 0.1 m × 0.1 m or larger can be used for characterizing its overall profile and volume. The results of this study can be used as a technical reference for the development of a LiDAR-based target-oriented spray system.
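The abstract does not spell out the meshing step, so the following Python sketch only illustrates the general idea of a grid-based ("meshing") canopy volume estimate from a LiDAR point cloud: slice the cloud along the travel direction, rasterize each slice onto a y-z grid, and accumulate the area of occupied cells times the slice thickness. The function name cmpc_volume and the default grid and slice sizes are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def cmpc_volume(points, slice_dx=0.1, cell=0.1):
    """Grid-based canopy volume estimate from LiDAR points (N x 3, metres).

    Illustrative sketch only: slice the cloud along the travel axis (x),
    rasterize each slice onto a y-z grid of `cell`-sized squares, and sum
    the occupied cell area times the slice thickness. Parameter values are
    assumptions, not the published CMPC settings.
    """
    pts = np.asarray(points, dtype=float)
    x_min, x_max = pts[:, 0].min(), pts[:, 0].max()
    volume = 0.0
    for x0 in np.arange(x_min, x_max, slice_dx):
        sl = pts[(pts[:, 0] >= x0) & (pts[:, 0] < x0 + slice_dx)]
        if sl.size == 0:
            continue
        # Index each point of the slice into a y-z grid cell
        iy = np.floor(sl[:, 1] / cell).astype(int)
        iz = np.floor(sl[:, 2] / cell).astype(int)
        occupied = len(set(zip(iy.tolist(), iz.tolist())))
        volume += occupied * cell * cell * slice_dx
    return volume

# Sanity check: a dense 1 m cube of random points should give roughly 1 m^3
rng = np.random.default_rng(0)
cube = rng.uniform(0.0, 1.0, size=(50_000, 3))
print(f"estimated volume: {cmpc_volume(cube):.2f} m^3")
```

As the abstract notes for the real method, the grid size trades detail against smoothness: smaller cells follow the canopy edge more closely, while larger cells give a coarser but more robust overall profile and volume.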


2017 ◽ Author(s): Weijie Zhang, Gang Liu, Cailing Guo, Ze Zong, Xue Zhang

Author(s): Jiayong Yu, Longchen Ma, Maoyi Tian, Xiushan Lu

The unmanned aerial vehicle (UAV)-mounted mobile LiDAR system (ULS) is widely used in geomatics owing to its efficient data acquisition and convenient operation. However, because of the limited carrying capacity of a UAV, the sensors integrated in a ULS must be small and lightweight, which reduces the density of the collected scanning points and hampers registration between image data and point-cloud data. To address this issue, the authors propose a method for registering and fusing ULS sequence images and laser point clouds that converts the registration of point-cloud data and image data into a problem of matching feature points between two images. First, a point cloud is selected to produce an intensity image. Subsequently, the corresponding feature points of the intensity image and the optical image are matched, and the exterior orientation parameters are solved using the collinearity equations based on image position and orientation. Finally, the sequence images are fused with the laser point cloud, based on the Global Navigation Satellite System (GNSS) time index of the optical image, to generate a true-color point cloud. The experimental results show that the proposed method achieves high registration accuracy and fusion speed, demonstrating its effectiveness.
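As a rough illustration of the first two steps (producing an intensity image from the point cloud and matching feature points against the optical image), the following Python sketch rasterizes a point cloud with per-point intensity into a grayscale image and matches it to an optical image with ORB features via OpenCV; the collinearity-equation solution and the GNSS-time-indexed fusion are not shown. The function names, the cell size, and the choice of ORB with brute-force matching are assumptions, not the authors' exact pipeline.

```python
import cv2
import numpy as np

def intensity_image(points, intensity, cell=0.05):
    """Rasterize a point cloud (N x 3, metres) with per-point intensity into
    a top-down 8-bit grayscale image. The cell size is an assumed parameter."""
    ix = np.floor((points[:, 0] - points[:, 0].min()) / cell).astype(int)
    iy = np.floor((points[:, 1] - points[:, 1].min()) / cell).astype(int)
    img = np.zeros((iy.max() + 1, ix.max() + 1), dtype=np.float32)
    img[iy, ix] = intensity  # last point per cell wins; adequate for a sketch
    return cv2.normalize(img, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

def match_intensity_to_optical(intensity_img, optical_img, max_matches=100):
    """Match ORB feature points between the LiDAR intensity image and an
    optical image and return corresponding pixel coordinates. An
    exterior-orientation solution would consume such correspondences."""
    orb = cv2.ORB_create(nfeatures=2000)
    k1, d1 = orb.detectAndCompute(intensity_img, None)
    k2, d2 = orb.detectAndCompute(optical_img, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:max_matches]
    src = np.float32([k1[m.queryIdx].pt for m in matches])
    dst = np.float32([k2[m.trainIdx].pt for m in matches])
    return src, dst
```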


Author(s): Keisuke YOSHIDA, Shiro MAENO, Syuhei OGAWA, Sadayuki ISEKI, Ryosuke AKOH

2019 ◽ Author(s): Byeongjun Oh, Minju Kim, Chanwoo Lee, Hunhee Cho, Kyung-In Kang
