A Multi-View Dense Point Cloud Generation Algorithm Based on Low-Altitude Remote Sensing Images

2016 ◽  
Vol 8 (5) ◽  
pp. 381 ◽  
Author(s):  
Zhenfeng Shao ◽  
Nan Yang ◽  
Xiongwu Xiao ◽  
Lei Zhang ◽  
Zhe Peng
Author(s):  
W. Yuan ◽  
X. Yuan ◽  
Z. Fan ◽  
Z. Guo ◽  
X. Shi ◽  
...  

Abstract. Building Change Detection (BCD) from multi-temporal remote sensing images is essential for applications such as urban monitoring, urban planning, and disaster assessment. However, most building change detection approaches extract features only from the remote sensing images themselves for change index determination, and therefore cannot detect insignificant changes in small buildings. Even with co-registered multi-temporal images, illumination variations and misregistration errors often lead to inaccurate change detection results. This study investigates the fusion of 2D features extracted directly from remote sensing images with 3D features extracted from the point cloud generated by dense image matching (DIM) for accurate building change index generation. This paper introduces a graph neural network (GNN) based end-to-end learning framework for building change detection. The proposed framework comprises feature extraction, feature fusion, and change index prediction. It starts with a pre-trained VGG-16 network as a backbone and uses a five-layer U-Net architecture for feature map extraction. The extracted 2D and 3D features are fed into the GNN-based feature fusion part. In the GNN, we introduce a flexible attention-based context aggregation mechanism to address illumination variations and misregistration errors, enabling the framework to jointly reason about image-based texture information and the depth information introduced by the DIM-generated 3D point cloud. The affinity matrix produced by the GNN is then used for change index determination via the Hungarian algorithm. An experiment conducted on a dataset covering the Setagaya-Ku area of Tokyo shows that the change map generated by the proposed method achieved a precision of 0.762 and an F1-score of 0.68 at the pixel level.
Compared to traditional image-based change detection methods, our approach learns a prior over geometric structure from the real 3D world, which makes it robust to misregistration errors. Compared to CNN-based methods, the proposed method learns to fuse 2D and 3D features into a more comprehensive representation for building change index determination. The experimental comparison demonstrates that the proposed approach outperforms both traditional and CNN-based methods.
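The final matching step described in the abstract turns the GNN's affinity matrix into a one-to-one assignment via the Hungarian algorithm. As a minimal sketch of that step, the following uses a brute-force maximum-weight matching as a stand-in for the Hungarian algorithm (equivalent result for tiny matrices); the affinity values and the interpretation of rows/columns as building regions in the two epochs are illustrative, not from the paper:

```python
from itertools import permutations

# Illustrative affinity matrix between candidate building regions in the
# two image epochs (higher value = more likely the same building).
affinity = [
    [0.9, 0.1, 0.0],
    [0.2, 0.8, 0.1],
    [0.0, 0.3, 0.7],
]

# Brute-force maximum-weight one-to-one matching: a stand-in for the
# Hungarian algorithm, fine for small illustrative matrices.
n = len(affinity)
best = max(permutations(range(n)),
           key=lambda p: sum(affinity[i][p[i]] for i in range(n)))
matches = list(enumerate(best))
print(matches)  # [(0, 0), (1, 1), (2, 2)]
```

Regions that remain unmatched, or are matched only with low affinity, would then be flagged as changed when deriving the change index.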


2019 ◽  
Vol 31 (1) ◽  
pp. 39
Author(s):  
Yongchuan Zheng ◽  
Boliang Guan ◽  
Shujin Lin ◽  
Xiaonan Luo ◽  
Ruomei Wang

Author(s):  
Natalya V. Ivanova ◽  
Maxim P. Shashkov ◽  
Vladimir N. Shanin ◽  
...  

Nowadays, due to the rapid development of lightweight unmanned aerial vehicles (UAVs), remote sensing systems of ultra-high resolution have become available to many researchers. Conventional ground-based measurements for assessing tree stand attributes can be expensive, as well as time-consuming and labor-intensive. Here, we assess whether remote sensing measurements with a lightweight UAV can be more effective than ground survey methods in the case of temperate mixed forests. The study was carried out at the Prioksko-Terrasny Biosphere Nature Reserve (Moscow region, Russia). This area belongs to the coniferous-broad-leaved forest zone. Our field work was carried out on a permanent sampling plot of 1 ha (100×100 m) established in 2016. The coordinates of the plot center are N 54.88876°, E 37.56273° in the WGS 84 datum. All trees with DBH (diameter at breast height) of at least 6 cm (779 trees) were mapped and measured during the ground survey in 2016 (See Fig. 1 and Table 1). Mapping was performed with a Laser Technology TruPulse 360B angle and distance meter. First, the polar coordinates of each tree trunk were measured, and then, after conversion to Cartesian coordinates, the scheme of the stand was validated onsite. Species and DBH were determined for each tree. For each living tree, we determined the social status class (according to Kraft). Also for living trees, we measured the tree height and the radii of the horizontal crown projection in four cardinal directions. A lightweight UAV Phantom 4 (DJI-Innovations, Shenzhen, China) equipped with an integrated camera with a 12 Mp sensor was used for aerial photography in this study. Technical parameters of the camera are available in Table 2. The aerial photography was conducted on October 12, 2017, from an altitude of 68 m. The commonly used mosaic flight mode was used with 90% overlap in both the side and front directions. 
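The polar-to-Cartesian conversion used for the stem map can be sketched as follows. This assumes the instrument reports a compass azimuth (degrees clockwise from north) and a horizontal distance; the coordinates in the usage example are hypothetical, not from the survey:

```python
import math

def polar_to_cartesian(azimuth_deg, distance_m):
    """Convert a compass azimuth (degrees clockwise from north) and a
    horizontal distance into east/north offsets from the plot centre."""
    a = math.radians(azimuth_deg)
    east = distance_m * math.sin(a)
    north = distance_m * math.cos(a)
    return east, north

# A hypothetical tree 10 m due east of the plot centre (azimuth 90 deg).
e, n = polar_to_cartesian(90.0, 10.0)
print(round(e, 2), round(n, 2))  # 10.0 0.0
```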
We applied Agisoft Metashape software for orthophoto mosaic generation and dense point cloud building. The canopy height model (CHM) was generated with the lidR package in R. We used the lasground() function with the cloth simulation filter for classification of ground points. To create a normalized dataset with the ground at 0, we used a TIN-based spatial interpolation algorithm (Delaunay triangulation with linear interpolation within each triangle), implemented in the lasnormalize() function. The CHM was generated with the pit-free algorithm, which is based on computing a set of classical triangulations at different heights. The locations and heights of individual trees were automatically detected by the FindTreesCHM() function from the rLiDAR package in R. The algorithm implemented in this function is a local maximum filter with a fixed window size. Accuracy assessment of automatically detected trees (in QGIS software) was performed through visual interpretation of the orthophoto mosaic and comparison with the ground survey data. The numbers of correctly detected trees, trees omitted by the algorithm, and falsely detected (non-existing) trees were counted. As a result of the aerial photography, 501 images were obtained. During data processing in Metashape, a dense point cloud of 163.7 points/m² was generated, and a CHM with 0.5 m resolution was calculated. The individual-tree detection algorithm found 241 trees automatically (See Fig. 2A). The total accuracy of individual tree detection was 73.9%. Coniferous trees (Pinus sylvestris and Picea abies) were successfully detected (86.0% and 100%, respectively), while the results for birch (Betula spp.) required additional treatment: the algorithm correctly detected only 58.2% of birch trees, with many false positives (See Fig. 2B and Table 3). These results are consistent with published data for managed tree stands. Tree heights retrieved from the UAV matched the ground-based measurements well. 
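The local-maximum detection with a fixed window, as used for individual-tree detection on the CHM, can be sketched as follows. This is an illustrative stand-in for rLiDAR's FindTreesCHM(), not its actual implementation; the toy CHM grid, window size, and minimum-height threshold are assumptions:

```python
def find_trees_chm(chm, window=1, min_height=2.0):
    """Local-maximum treetop detection on a canopy height model grid.

    A cell is a treetop if it is the strict maximum within a fixed
    (2*window+1) x (2*window+1) neighbourhood and above min_height.
    """
    rows, cols = len(chm), len(chm[0])
    tops = []
    for i in range(rows):
        for j in range(cols):
            h = chm[i][j]
            if h < min_height:
                continue
            neighbourhood = [
                chm[y][x]
                for y in range(max(0, i - window), min(rows, i + window + 1))
                for x in range(max(0, j - window), min(cols, j + window + 1))
                if (y, x) != (i, j)
            ]
            if all(h > v for v in neighbourhood):
                tops.append((i, j, h))
    return tops

# Toy CHM (heights in metres) with two crowns.
chm = [
    [0, 1, 0, 0, 0],
    [1, 9, 1, 0, 0],
    [0, 1, 0, 3, 0],
    [0, 0, 3, 12, 3],
    [0, 0, 0, 3, 0],
]
print(find_trees_chm(chm))  # [(1, 1, 9), (3, 3, 12)]
```

A fixed window trades simplicity for accuracy: too small a window splits large crowns into multiple detections (one source of the birch false positives), while too large a window merges neighbouring trees.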
The mean tree heights retrieved from the UAV and ground surveys were 25.0±4.8 m (min 8.2 m, max 32.9 m) and 25.3±5.2 m (min 5.9 m, max 34.0 m), respectively (no significant difference, p-value=0.049). Linear regression confirmed a strong relationship between the estimated and measured heights (y = kx, R² = 0.99, k = 0.98) (See Fig. 3A). Slightly larger differences between the heights estimated by the two methods were found for birch and pine; for spruce, the differences were smaller (See Fig. 3B and Table 4). We believe that ground measurements of birch and pine height are less accurate than those of spruce due to the different crown shapes of these trees. Our results therefore suggest that UAV data can be used for tree stand attribute estimation, but automatically obtained data require validation.
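The no-intercept regression used for the height comparison (y = kx) has the closed-form least-squares solution k = Σxᵢyᵢ / Σxᵢ². A small sketch with made-up height pairs (not the study's data):

```python
def fit_through_origin(x, y):
    """Least-squares slope for y = k*x (no intercept), plus R^2."""
    k = sum(a * b for a, b in zip(x, y)) / sum(a * a for a in x)
    mean_y = sum(y) / len(y)
    ss_res = sum((b - k * a) ** 2 for a, b in zip(x, y))
    ss_tot = sum((b - mean_y) ** 2 for b in y)
    return k, 1 - ss_res / ss_tot

# Hypothetical ground-measured (x) vs UAV-estimated (y) heights in metres.
x = [10.0, 20.0, 30.0]
y = [9.8, 19.6, 29.4]
k, r2 = fit_through_origin(x, y)
print(round(k, 2), round(r2, 2))  # 0.98 1.0
```

Forcing the fit through the origin is a deliberate choice here: a zero-height tree should be estimated at zero, so the slope k directly expresses the systematic bias of the UAV estimates relative to the ground survey.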

