Canopy Volume Extraction of Citrus reticulate Blanco cv. Shatangju Trees Using UAV Image-Based Point Cloud Deep Learning

2021 ◽  
Vol 13 (17) ◽  
pp. 3437
Author(s):  
Yuan Qi ◽  
Xuhua Dong ◽  
Pengchao Chen ◽  
Kyeong-Hwan Lee ◽  
Yubin Lan ◽  
...  

Automatic acquisition of the canopy volume parameters of the Citrus reticulate Blanco cv. Shatangju tree is of great significance for precision orchard management. This research combined a point cloud deep learning algorithm with volume calculation algorithms to segment and measure the canopy of Citrus reticulate Blanco cv. Shatangju trees. A 3D (three-dimensional) point cloud model of a Shatangju orchard was generated from UAV tilt-photogrammetry images. The segmentation performance of three deep learning models, PointNet++, MinkowskiNet and FPConv, on the Shatangju trees and the ground was compared. Three volume algorithms, convex hull by slices, a voxel-based method and 3D convex hull, were applied to calculate the volume of the Shatangju trees. Model accuracy was evaluated using the coefficient of determination (R2) and root mean square error (RMSE). The results show that the overall accuracy of the MinkowskiNet model (94.57%) is higher than that of the other two models, indicating the best segmentation performance. The 3D convex hull algorithm achieved the highest R2 (0.8215) and the lowest RMSE (0.3186 m3) for canopy volume calculation and best reflects the real volume of the trees. The proposed method is capable of rapid, automatic acquisition of the canopy volume of Citrus reticulate Blanco cv. Shatangju trees.
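The 3D convex hull approach named above can be sketched in a few lines: wrap the segmented canopy points in a convex hull and take its volume. This is a minimal illustration with synthetic points standing in for a segmented tree cloud, not the paper's implementation.

```python
# Sketch: canopy volume via a 3D convex hull. The point array here is
# synthetic; a real canopy cloud would come from the segmented UAV
# photogrammetry model.
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)
canopy_points = rng.uniform(0.0, 2.0, size=(500, 3))  # stand-in for one tree

hull = ConvexHull(canopy_points)
print(f"convex hull volume: {hull.volume:.3f} m^3")  # approaches the 2x2x2 box
```

Note that a convex hull always overestimates concave canopies, which is why slice-based and voxel-based variants are also compared in these studies.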

2021 ◽  
Vol 13 (9) ◽  
pp. 1859
Author(s):  
Xiangyang Liu ◽  
Yaxiong Wang ◽  
Feng Kang ◽  
Yang Yue ◽  
Yongjun Zheng

The characteristic parameters of Citrus grandis var. Longanyou canopies are important for measuring yield and spraying pesticides. However, the feasibility of canopy reconstruction methods based on point clouds has not been confirmed for these canopies. Therefore, LiDAR point cloud data for C. grandis var. Longanyou were obtained to facilitate the management of groves of this species. A cloth simulation filter and a Euclidean clustering algorithm were then used to extract individual canopies. After calculating canopy height and width, canopy reconstruction and volume calculation were performed using six approaches: a manual method and five algorithms based on point clouds (convex hull, CH; convex hull by slices; voxel-based, VB; alpha-shape, AS; alpha-shape by slices, ASBS). ASBS is an innovative algorithm that combines AS with slice-based optimization and best approximates the actual canopy shape. The R2 values of the VCH, VVB, VAS, and VASBS algorithms were all above 0.87; the ASBS algorithm yielded the most accurate volume, while the CH algorithm had the shortest computation time. In addition, a theoretical but preliminary system suitable for calculating the canopy volume of C. grandis var. Longanyou was developed, which provides a theoretical reference for the efficient and accurate realization of future functional modules such as precise plant protection, orchard obstacle avoidance, and biomass estimation.
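The "by slices" family of algorithms mentioned above shares one idea: cut the canopy into horizontal slabs, fit a 2D shape to each slab's footprint, and sum area times thickness. A minimal sketch of the convex-hull-by-slices variant, with synthetic points and an assumed slab height:

```python
# Sketch of convex hull by slices: horizontal slabs, 2D hull per slab,
# volume = sum of area x thickness. Synthetic points stand in for a canopy.
import numpy as np
from scipy.spatial import ConvexHull

def volume_by_slices(points, slice_height=0.2):
    z = points[:, 2]
    volume = 0.0
    for z0 in np.arange(z.min(), z.max(), slice_height):
        slab = points[(z >= z0) & (z < z0 + slice_height)]
        if len(slab) >= 3:  # a 2D hull needs at least three points
            # ConvexHull.volume is the enclosed area for 2D input
            volume += ConvexHull(slab[:, :2]).volume * slice_height
    return volume

rng = np.random.default_rng(1)
cloud = rng.uniform(0.0, 1.0, size=(2000, 3))
print(f"sliced volume: {volume_by_slices(cloud):.3f} m^3")
```

Slicing lets the footprint shrink and grow with height, which is why these variants track irregular canopies better than a single 3D hull.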


Agronomy ◽  
2019 ◽  
Vol 9 (11) ◽  
pp. 774 ◽  
Author(s):  
Sun ◽  
Wang ◽  
Ding ◽  
Lu ◽  
Sun

Information on fruit tree canopies is important for decision making in orchard management, including irrigation, fertilization, spraying, and pruning. An unmanned aerial vehicle (UAV) imaging system was used to establish an orchard three-dimensional (3D) point-cloud model. A row-column detection method was developed based on probability density estimation and rapid segmentation of the point-cloud data for each apple tree, through which the tree canopy height, H, width, W, and volume, V, were determined for remote orchard canopy evaluation. When the ground sampling distance (GSD) was in the range of 2.13 to 6.69 cm/px, the orchard point-cloud model had a measurement accuracy of 100.00% for the rows and 90.86% to 98.20% for the columns. Comparing the H, W, and V values measured manually with those from UAV photogrammetry, the coefficient of determination, R2, was in the range of 0.8497 to 0.9376, 0.8103 to 0.9492, and 0.8032 to 0.9148, respectively, and the average relative error was in the range of 1.72% to 3.42%, 2.18% to 4.92%, and 7.90% to 13.69%, respectively. The results showed that UAV visual imaging is suitable for 3D morphological remote canopy evaluation, facilitates orchard canopy informatization, and contributes substantially to the efficient management and control of modern standard orchards.
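A voxel-based volume estimate, one of the standard alternatives used across these canopy studies, is simple to state: quantize the segmented points to a grid and count occupied cells. A minimal sketch with an assumed voxel size (the papers' exact grid parameters are not given here):

```python
# Sketch of voxel-based canopy volume: floor each point into a grid cell,
# count unique occupied cells, multiply by the cell volume.
import numpy as np

def voxel_volume(points, voxel=0.1):
    idx = np.floor(points / voxel).astype(np.int64)  # integer cell coordinates
    occupied = np.unique(idx, axis=0)                # one row per occupied cell
    return len(occupied) * voxel ** 3

rng = np.random.default_rng(2)
cloud = rng.uniform(0.0, 1.0, size=(5000, 3))  # stand-in for a segmented canopy
print(f"voxel volume: {voxel_volume(cloud):.3f} m^3")
```

The voxel size trades bias for noise: large cells inflate the estimate at the canopy boundary, while small cells undercount wherever the point density leaves cells empty.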


Horticulturae ◽  
2021 ◽  
Vol 8 (1) ◽  
pp. 21
Author(s):  
Jizhang Wang ◽  
Zhiheng Gao ◽  
Yun Zhang ◽  
Jing Zhou ◽  
Jianzhi Wu ◽  
...  

In order to realize the real-time and accurate detection of potted flowers on benches, in this paper we propose a method based on the ZED 2 stereo camera and the YOLO V4-Tiny deep learning algorithm for potted flower detection and location. First, an automatic flower detection model was established based on the YOLO V4-Tiny convolutional neural network (CNN) model, and the center point of each flower on the pixel plane was obtained from its prediction box. Then, the real-time 3D point cloud information obtained by the ZED 2 camera was used to calculate the actual position of the flowers. The test results showed that the mean average precision (mAP) and recall rate of the training model were 89.72% and 80%, respectively, and the real-time average detection frame rate of the model deployed on a Jetson TX2 was 16 FPS. The occlusion experiment showed that when the canopy overlap ratio between two flowers exceeds 10%, the recognition accuracy is affected. The mean absolute error of the flower center location based on the 3D point cloud information of the ZED 2 camera was 18.1 mm, and the maximum locating error of the flower center was 25.8 mm under different light radiation conditions. The method in this paper establishes the relationship between the detected flowers and their actual spatial locations, which has reference significance for the mechanized and automatic management of potted flowers on benches.
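The detection-to-location step above amounts to back-projecting a pixel through the camera model at the measured depth. A minimal pinhole-model sketch; the intrinsics and the depth value are made-up placeholders (the ZED SDK supplies the real calibration and point cloud):

```python
# Sketch: back-project a detected flower's pixel center to a camera-frame
# 3D point with a pinhole model. All numbers below are hypothetical.
import numpy as np

def pixel_to_3d(u, v, depth, fx, fy, cx, cy):
    """Convert pixel (u, v) at metric depth Z to camera-frame XYZ."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

fx = fy = 700.0          # assumed focal lengths in pixels
cx, cy = 640.0, 360.0    # assumed principal point
center = pixel_to_3d(u=800.0, v=300.0, depth=1.5, fx=fx, fy=fy, cx=cx, cy=cy)
print(center)  # camera-frame coordinates in metres
```

In practice the stereo point cloud already stores XYZ per pixel, so the lookup can replace the arithmetic; the formula shows what that lookup encodes.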


Author(s):  
Y. Ji ◽  
Y. Dong ◽  
M. Hou ◽  
Y. Qi ◽  
A. Li

Abstract. Chinese ancient architecture is a valuable heritage, especially its roofs, which reflect the construction age, structural features and cultural connotations. Point cloud data, a flexible representation that is fast, precise and non-contact, play a crucial role in a variety of applications for ancient architectural heritage, such as fine 3D reconstruction, HBIM, and disaster monitoring. However, many data editing tasks still have to be carried out manually, which is time-consuming, labor-intensive and error-prone. In recent years, theoretical advances in deep learning have stimulated development in various domains, and digital heritage is no exception. However, deep learning algorithms need to consume a huge amount of labeled data to achieve segmentation, so high labor costs are still incurred. In this paper, inspired by the architectural style similarity between mimetic models and real buildings, we propose a deep-learning-supported method that aims to provide a solution for the automatic extraction of roof structure from point clouds. First, to generate a real point cloud of Baoguang Temple, an unmanned aerial vehicle (UAV) is used to obtain image collections that are subsequently processed by reconstruction technology. Second, a modified Dynamic Graph Convolutional Neural Network (DGCNN), which can learn local features by taking advantage of an edge attention convolution, is trained using simulated data together with additional geometric attributes. The mimetic data are sampled from a 3DMAX model surface. Finally, we extract the roof structure of the ancient building from real point cloud scenes using the trained model.
The experimental results show that the proposed method can extract the roof structure from the real scene of Baoguang Temple, which illustrates not only the effectiveness of the approach but also the potential value of simulated data when real point cloud datasets are scarce.
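The "local features" a DGCNN-style network learns start from a simple construction: for each point, find its k nearest neighbours and form edge features pairing the point with its offsets to each neighbour. A NumPy-only sketch of that graph-building step (no training, and not the paper's modified attention convolution):

```python
# Sketch of DGCNN-style edge features: for each point x_i, pair it with
# (x_j - x_i) for its k nearest neighbours, giving (n, k, 6) features
# that an edge convolution would consume.
import numpy as np

def edge_features(points, k=4):
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)  # pairwise sq. dists
    knn = np.argsort(d2, axis=1)[:, 1:k + 1]                       # skip self (col 0)
    center = np.repeat(points[:, None, :], k, axis=1)              # x_i, shape (n, k, 3)
    offset = points[knn] - center                                  # x_j - x_i
    return np.concatenate([center, offset], axis=-1)               # (n, k, 6)

pts = np.random.default_rng(4).normal(size=(32, 3))
feats = edge_features(pts, k=4)
print(feats.shape)  # (32, 4, 6)
```

Because the offsets are relative, the features capture local surface shape, which is what makes a network trained on mimetic model surfaces transferable to real scans of similar architecture.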


Agriculture ◽  
2021 ◽  
Vol 11 (5) ◽  
pp. 450
Author(s):  
Yuhang Yang ◽  
Jinqian Zhang ◽  
Kangjie Wu ◽  
Xixin Zhang ◽  
Jun Sun ◽  
...  

Phenotypic analysis has always played an important role in breeding research. At present, wheat phenotypic analysis mostly relies on high-precision instruments, which makes it costly. Thanks to the development of 3D reconstruction technology, a reconstructed wheat 3D model can also be used for phenotypic analysis. In this paper, a method is proposed to reconstruct a wheat 3D model based on semantic information; the method generates the corresponding 3D point cloud model of wheat according to a semantic description. First, an object detection algorithm is used to detect the characteristics of some wheat phenotypes during the growth process. Second, the growth environment information and some phenotypic features of the wheat are combined into semantic information. Third, a text-to-image algorithm is used to generate a 2D image of the wheat. Finally, the wheat in the 2D image is transformed into an abstract 3D point cloud, and a higher-precision point cloud model is obtained using a deep learning algorithm. Extensive experiments indicate that the method reconstructs 3D models well and has a heuristic effect on phenotypic analysis and breeding research by deep learning.
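The second step of the pipeline, folding detected phenotype features and environment readings into one text description for the text-to-image stage, can be sketched as a simple assembly function. The field names and values below are purely illustrative, not the paper's schema:

```python
# Sketch of the "semantic information" step: merge hypothetical phenotype
# detections and environment readings into a single text prompt.
def build_semantic_description(phenotype, environment):
    parts = [f"{k}: {v}" for k, v in {**phenotype, **environment}.items()]
    return "wheat plant, " + ", ".join(parts)

desc = build_semantic_description(
    phenotype={"growth stage": "heading", "spike count": 3},
    environment={"temperature": "22 C", "day of growth": 70},
)
print(desc)
```

The interesting design choice is upstream of this string: which detected attributes are informative enough for the generative model to condition on.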


Author(s):  
Hsien-Yu Meng ◽  
Zhenyu Tang ◽  
Dinesh Manocha

We present a novel geometric deep learning method to compute the acoustic scattering properties of geometric objects. Our learning algorithm uses a point cloud representation of objects to compute the scattering properties and integrates them with ray tracing for interactive sound propagation in dynamic scenes. We use discrete Laplacian-based surface encoders and approximate the neighborhood of each point using a shared multi-layer perceptron. We show that our formulation is permutation invariant and present a neural network that computes the scattering function using spherical harmonics. Our approach can handle objects with arbitrary topologies and deforming models, and takes less than 1 ms per object on a commodity GPU. We have analyzed the accuracy and performed validation on thousands of unseen 3D objects, highlighting the benefits over other point-based geometric deep learning methods. To the best of our knowledge, this is the first real-time learning algorithm that can approximate the acoustic scattering properties of arbitrary objects with high accuracy.
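A spherical-harmonic representation of a scattering function, as the abstract describes, means the network outputs a coefficient vector and the direction-dependent response is recovered by evaluating the basis. A minimal low-order sketch with arbitrary stand-in coefficients (not learned values, and not the paper's network):

```python
# Sketch: evaluate a direction-dependent function stored as real
# spherical-harmonic coefficients (degree l <= 1 only, for brevity).
import numpy as np

def real_sh_basis(direction):
    """First four real spherical harmonics for a unit direction vector."""
    x, y, z = direction
    return np.array([
        0.5 * np.sqrt(1.0 / np.pi),          # Y_0^0 (constant term)
        np.sqrt(3.0 / (4.0 * np.pi)) * y,    # Y_1^-1
        np.sqrt(3.0 / (4.0 * np.pi)) * z,    # Y_1^0
        np.sqrt(3.0 / (4.0 * np.pi)) * x,    # Y_1^1
    ])

coeffs = np.array([1.0, 0.2, -0.1, 0.4])     # hypothetical network output
d = np.array([0.0, 0.0, 1.0])                # query direction (unit vector)
value = coeffs @ real_sh_basis(d)
print(f"scattering amplitude toward +z: {value:.4f}")
```

The appeal of this representation for real-time use is that evaluation is a single dot product per query direction, regardless of the object's geometric complexity.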

