Automated Identification of Crop Tree Crowns from UAV Multispectral Imagery by Means of Morphological Image Analysis

2020 ◽  
Vol 12 (5) ◽  
pp. 748
Author(s):  
Ricardo Sarabia ◽  
Arturo Aquino ◽  
Juan Manuel Ponce ◽  
Gilberto López ◽  
José Manuel Andújar

Within the context of precision agriculture, goods insurance, public subsidies, fire damage assessment, etc., accurate knowledge of the plant population in crops represents valuable information. In this regard, the use of Unmanned Aerial Vehicles (UAVs) has proliferated as an alternative to traditional plant counting methods, which are laborious, time-consuming and prone to human error. Hence, a methodology is proposed for the automated detection, geolocation and counting of crop trees in intensive cultivation orchards from high-resolution multispectral images acquired by UAV-based aerial imaging. After image acquisition, the captures are processed by means of photogrammetry to yield a 3D point cloud-based representation of the study plot. To exploit the elevation information it contains and eventually identify the plants, the cloud is deterministically interpolated and subsequently transformed into a greyscale image. This image is processed with mathematical morphology techniques so that the height of the trees relative to their local surroundings can be exploited to segment the tree pixel-regions by global statistical thresholding binarization. This approach makes the segmentation process robust against surfaces with elevation variations of any magnitude, and against possible distracting artefacts with heights lower than expected. Finally, the segmented image is analysed by an ad-hoc moment representation-based algorithm to estimate the location of the trees. The methodology was tested in an intensive olive orchard of 17.5 ha with a population of 3919 trees. Because of the plot's plant density and tree spacing pattern, typical of intensive plantations, many intra-row tree aggregations were observed, increasing the complexity of the scenario under study.
Nevertheless, a precision of 99.92%, a sensitivity of 99.67% and an F-score of 99.75% were achieved, thus correctly identifying and geolocating 3906 plants. The generated 3D point cloud reported root-mean-square errors (RMSE) in the X, Y and Z directions of 0.73 m, 0.39 m and 1.20 m, respectively. These results support the viability and robustness of this methodology as a phenotyping solution for automated plant counting and geolocation in olive orchards.
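The pipeline the abstract describes — height relative to local surroundings via mathematical morphology, global statistical thresholding, and moment-based localization — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the structuring-element size, the threshold factor `k`, and the use of a top-hat transform as the "local surroundings" step are assumptions.

```python
import numpy as np
from scipy import ndimage

def count_trees(chm, opening_size=5, k=1.0):
    """Toy crown counter for a canopy height model (CHM).

    Hypothetical parameters; not the paper's exact settings.
    """
    # Morphological top-hat: height relative to local surroundings,
    # which makes the result insensitive to terrain slope.
    background = ndimage.grey_opening(chm, size=(opening_size, opening_size))
    tophat = chm - background
    # Global statistical threshold (mean + k * std of the top-hat image).
    t = tophat.mean() + k * tophat.std()
    mask = tophat > t
    # Connected components = candidate crowns.
    labels, n = ndimage.label(mask)
    # First-order moments (centroids) give estimated tree locations.
    centroids = ndimage.center_of_mass(mask, labels, range(1, n + 1))
    return n, centroids
```

On a synthetic sloped CHM with isolated Gaussian "crowns", the top-hat cancels the slope and each crown survives thresholding as one component.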

2018 ◽  
Vol 155 ◽  
pp. 84-95 ◽  
Author(s):  
Lorenzo Comba ◽  
Alessandro Biglia ◽  
Davide Ricauda Aimonino ◽  
Paolo Gay

Author(s):  
Cao Xuan Cuong ◽  
Le Van Canh ◽  
Pham Van Chung ◽  
Le Duc Tinh ◽  
Pham Trung Dung ◽  
...  

Purpose. The main objective of this paper is to assess the quality of 3D models of industrial buildings generated from Unmanned Aerial Vehicle (UAV) imagery datasets, including nadir (N), oblique (O), and combined nadir-and-oblique (N+O) UAV datasets. Methodology. The quality of a 3D model is defined by the accuracy and density of the point clouds created from UAV images. For this purpose, the UAV was deployed to acquire images in both O and N flight modes over an industrial mining area containing a mine shaft tower, factory housing and office buildings. The quality assessment was conducted for the 3D point cloud models of three main object types, namely roofs, facades and ground surfaces, using checkpoints (CPs) and terrestrial laser scanning (TLS) point clouds as the reference datasets. Root Mean Square Errors (RMSE) were calculated from CP coordinates, and cloud-to-cloud distances were computed against the TLS point clouds; both were used for the accuracy assessment. Findings. The results showed that the point cloud model generated by the N flight mode was the most accurate but least dense, whereas that of the O mode was the least accurate but the most detailed. The combination of the O and N datasets takes advantage of each individual mode: its point cloud accuracy is higher than that of case O, and its density is much higher than that of case N. It is therefore optimal for building exceptionally accurate and dense point clouds of buildings. Originality. The paper provides a comparative analysis of the quality of point clouds of roofs and facades generated from UAV photogrammetry for mining industrial buildings. Practical value. The findings of the study can serve as references for both UAV survey practice and applications of UAV point clouds. The paper provides useful information for UAV flight planning and for deciding which UAV points should be integrated with TLS points to obtain the best point cloud.
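The two accuracy measures named above — per-axis RMSE from checkpoint coordinates, and nearest-neighbour cloud-to-cloud distances against a TLS reference — can be sketched as follows. This is a minimal illustration, not the authors' code; function names and array layouts are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def rmse_per_axis(measured, reference):
    """RMSE in X, Y, Z from checkpoint coordinates (N x 3 arrays)."""
    d = measured - reference
    return np.sqrt((d ** 2).mean(axis=0))

def cloud_to_cloud(uav_pts, tls_pts):
    """Distance from each UAV point to its nearest TLS reference point."""
    tree = cKDTree(tls_pts)          # spatial index over the reference cloud
    dist, _ = tree.query(uav_pts, k=1)
    return dist
```

In practice the clouds would be co-registered first; the nearest-neighbour distance is the standard "cloud-to-cloud" metric offered by tools such as CloudCompare.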


GigaScience ◽  
2021 ◽  
Vol 10 (5) ◽  
Author(s):  
Teng Miao ◽  
Weiliang Wen ◽  
Yinglun Li ◽  
Sheng Wu ◽  
Chao Zhu ◽  
...  

Abstract Background The 3D point cloud is the most direct and effective data form for studying plant structure and morphology. In point cloud studies, the segmentation of individual plants into organs directly determines the accuracy of organ-level phenotype estimation and the reliability of 3D plant reconstruction. However, highly accurate, automatic and robust point cloud segmentation approaches for plants are unavailable, so the high-throughput segmentation of many shoots remains challenging. Although deep learning could feasibly solve this issue, software tools for 3D point cloud annotation to construct the training datasets are lacking. Results We propose a top-down point cloud segmentation algorithm for maize shoots using optimal transportation distance. We apply our point cloud annotation toolkit for maize shoots, Label3DMaize, to achieve semi-automatic point cloud segmentation and annotation of maize shoots at different growth stages through a series of operations, including stem segmentation, coarse segmentation, fine segmentation, and sample-based segmentation. The toolkit takes ∼4–10 minutes to segment a maize shoot, and only 10–20% of that time if coarse segmentation alone is required. Fine segmentation is more detailed than coarse segmentation, especially at the organ connection regions; the accuracy of coarse segmentation can reach 97.2% of that of fine segmentation. Conclusion Label3DMaize integrates point cloud segmentation algorithms and manual interactive operations, realizing semi-automatic point cloud segmentation of maize shoots at different growth stages. The toolkit provides a practical data annotation tool for further online segmentation research based on deep learning and is expected to promote automatic point cloud processing of various plants.
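The stem-first step of such a top-down pipeline can be illustrated with a deliberately simplified sketch: label as stem the points lying close to the vertical axis through the shoot's horizontal centroid. Both the heuristic and the `radius` parameter are hypothetical simplifications for illustration, not Label3DMaize's actual algorithm (which uses optimal transportation distance and interactive refinement).

```python
import numpy as np

def segment_stem(points, radius=0.02):
    """Toy stem segmentation for an (N, 3) shoot point cloud (x, y, z in m).

    Hypothetical simplification: a real maize stem is neither perfectly
    vertical nor centred, so this is only a first coarse labelling pass.
    """
    centre = points[:, :2].mean(axis=0)                 # horizontal centroid
    horiz = np.linalg.norm(points[:, :2] - centre, axis=1)
    return horiz < radius                               # True = stem point
```

Once the stem mask is fixed, the remaining points can be partitioned into leaves, which is where the coarse and fine segmentation stages described above would take over.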


2021 ◽  
Vol 13 (4) ◽  
pp. 803
Author(s):  
Lingchen Lin ◽  
Kunyong Yu ◽  
Xiong Yao ◽  
Yangbo Deng ◽  
Zhenbang Hao ◽  
...  

As a key canopy structure parameter, the Leaf Area Index (LAI) and methods for estimating it have always attracted attention. To explore a potential low-cost method for estimating forest LAI from 3D point clouds, we captured drone photos at different camera tilt angles and set up five schemes (O (0°), T15 (15°), T30 (30°), OT15 (0° and 15°) and OT30 (0° and 30°)), which were used to reconstruct 3D point clouds of the forest canopy by photogrammetry. Subsequently, the LAI values and the vertical distribution of leaf area derived from the five schemes were calculated based on a voxelized model. Our results show that a serious lack of leaf area in the middle and lower layers makes the LAI estimate of scheme O inaccurate. For oblique photogrammetry, schemes with 30° photos always provided better LAI estimates than schemes with 15° photos (T30 better than T15, OT30 better than OT15), mainly in the lower part of the canopy, which is particularly obvious in low-LAI areas. The overall structure from the single-tilt-angle schemes (T15, T30) was relatively complete, but their coarse point cloud details could not reflect the actual LAI well. Multi-angle schemes (OT15, OT30) provided excellent leaf area estimation (OT15: R2 = 0.8225, RMSE = 0.3334 m2/m2; OT30: R2 = 0.9119, RMSE = 0.1790 m2/m2). OT30 provided the best LAI estimation accuracy at a sub-voxel size of 0.09 m and the best checkpoint accuracy (OT30: RMSE [H] = 0.2917 m, RMSE [V] = 0.1797 m). These results highlight that coupling oblique and nadiral photography can be an effective solution for estimating forest LAI.
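A voxelized leaf-area profile of the kind this abstract describes can be sketched as follows. It is a toy illustration that counts occupied voxels per vertical layer as a stand-in for leaf area; the paper's actual voxel model, the 0.09 m sub-voxel size, and any calibration to true LAI are not reproduced here.

```python
import numpy as np

def voxel_leaf_profile(points, voxel=0.09):
    """Occupied-voxel count per vertical layer for an (N, 3) canopy cloud.

    Toy proxy for the vertical leaf-area distribution; the voxel size
    here mirrors the sub-voxel size reported in the abstract but the
    mapping from voxel counts to leaf area is a hypothetical shortcut.
    """
    idx = np.floor(points / voxel).astype(int)   # voxel index per point
    idx -= idx.min(axis=0)                       # shift to non-negative indices
    occupied = {tuple(v) for v in idx}           # deduplicate filled voxels
    profile = np.zeros(idx[:, 2].max() + 1, dtype=int)
    for _, _, z in occupied:                     # count voxels per z-layer
        profile[z] += 1
    return profile
```

Comparing such profiles layer by layer is one way to see the mid- and lower-canopy deficit that the abstract reports for the nadir-only scheme.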

