Mapping Forest Vertical Structure in Sogwang-ri Forest from Full-Waveform Lidar Point Clouds Using Deep Neural Network

2021 · Vol 13 (18) · pp. 3736
Author(s): Sung-Hwan Park, Hyung-Sup Jung, Sunmin Lee, Eun-Sook Kim

The role of forests is growing in importance because of rapid worldwide land-use changes that affect ecosystems and the carbon cycle. It is therefore necessary to obtain accurate information about forests and to build forest inventories. However, the internal structure of a forest is difficult to assess with 2D remote sensing techniques and fieldwork. In this study, we propose a method for estimating the vertical structure of forests from full-waveform light detection and ranging (FW LiDAR) data. After pre-processing the LiDAR point clouds, voxel-based tree point density maps were generated by counting the canopy height points (point heights above the raster digital terrain model, DTM) in each voxel grid cell. We applied an unsupervised classification algorithm to the voxel-based tree point density maps and identified seven forest vertical-type classes by profile pattern analysis. Validation against 11 field investigation sites yielded a classification accuracy of 72.73%, which was additionally confirmed through comparative analysis with aerial images. Based on this pre-classification reference map, which was assumed to represent ground truth, a deep neural network (DNN) model was applied to perform the final classification. The accuracy assessment showed good performance, with an accuracy of 92.72%. These results demonstrate the potential of vertical structure estimation for extensive forests using FW LiDAR data and show that the distinction between one-storied and two-storied forests can be clearly represented. This technique is expected to contribute to the efficient and effective management of forests based on accurate information derived from the proposed method.
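A minimal sketch of the voxel-based tree point density mapping step described above, assuming pre-processed canopy points and a co-registered DTM height for each point (function, array, and parameter names are illustrative, not from the paper):

    import numpy as np

    def voxel_density_map(points_xyz, dtm_heights, cell=1.0, dz=1.0, zmax=30.0):
        # points_xyz: (N, 3) canopy points; dtm_heights: (N,) DTM height
        # under each point. Returns per-voxel point counts indexed by
        # (x, y, height above ground).
        hag = points_xyz[:, 2] - dtm_heights           # height above ground
        keep = (hag >= 0) & (hag < zmax)
        xyz, hag = points_xyz[keep], hag[keep]
        ix = ((xyz[:, 0] - xyz[:, 0].min()) / cell).astype(int)
        iy = ((xyz[:, 1] - xyz[:, 1].min()) / cell).astype(int)
        iz = (hag / dz).astype(int)
        grid = np.zeros((ix.max() + 1, iy.max() + 1, int(zmax / dz)), dtype=np.int32)
        np.add.at(grid, (ix, iy, iz), 1)               # count points per voxel
        return grid

Vertical profiles for the unsupervised classification step can then be read directly from grid[x, y, :].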

2020 · Vol 7 (1)
Author(s): Wuming Zhang, Shangshu Cai, Xinlian Liang, Jie Shao, Ronghai Hu, ...

Abstract. Background: The universal occurrence of randomly distributed dark holes (i.e., data pits appearing within the tree crown) in LiDAR-derived canopy height models (CHMs) negatively affects the accuracy of extracted forest inventory parameters. Methods: We develop an algorithm based on cloth simulation for constructing a pit-free CHM. Results: The proposed algorithm effectively fills data pits of various sizes whilst preserving canopy details. Our pit-free CHMs derived from point clouds with different proportions of data pits are remarkably better than those constructed using other algorithms, as evidenced by the lowest average root mean square error (0.4981 m) between the reference CHMs and the constructed pit-free CHMs. Moreover, our pit-free CHMs show the best overall performance in maximum tree height estimation (average bias = 0.9674 m). Conclusion: The proposed algorithm can be adopted when working with LiDAR data of varying quality and shows high potential in forestry applications.
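The paper's algorithm drapes a simulated cloth over the canopy surface; as a rough, hedged stand-in (not the authors' method), a morphological grey closing illustrates the same pit-filling idea on a CHM raster, with the window size as an assumption:

    import numpy as np
    from scipy.ndimage import grey_closing

    def fill_chm_pits(chm, size=3):
        # chm: 2D array of canopy heights (m). Grey closing is extensive
        # (it never lowers values), so canopy tops are preserved while
        # pits up to roughly `size` cells wide are filled.
        return grey_closing(chm, size=(size, size))

Larger windows fill larger pits but blur more canopy detail, which is the trade-off the cloth-based approach is designed to avoid.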


2021 · Vol 6 (4) · pp. 8647-8654
Author(s): Qi Wang, Jian Chen, Jianqiang Deng, Xinfang Zhang

Sensors · 2020 · Vol 20 (12) · pp. 3568
Author(s): Takayuki Shinohara, Haoyi Xiu, Masashi Matsuoka

In the computer vision field, many 3D deep learning models that directly process 3D point clouds (proposed after PointNet) have been published. Moreover, deep learning-based techniques have demonstrated state-of-the-art performance in supervised learning tasks on 3D point cloud data, such as classification and segmentation tasks on open competition datasets. Furthermore, many researchers have attempted to apply these deep learning-based techniques to 3D point clouds observed by airborne laser scanners (ALSs). However, most of these studies were developed for 3D point clouds without radiometric information. In this paper, we investigate the possibility of using a deep learning method to solve the semantic segmentation task for airborne full-waveform light detection and ranging (lidar) data that consist of geometric information and radiometric waveform data. Thus, we propose a data-driven semantic segmentation model called the full-waveform network (FWNet), which handles the waveforms of full-waveform lidar data without any conversion process, such as projection onto a 2D grid or calculation of handcrafted features. Our FWNet is based on a PointNet-style architecture, which extracts the local and global features of each input waveform, along with its corresponding geographical coordinates. Subsequently, a classifier consisting of 1D convolutional layers predicts the class vector corresponding to the input waveform from the extracted local and global features. Our trained FWNet achieved higher recall, precision, and F1 scores on unseen test data than previously proposed methods in the full-waveform lidar analysis domain. Specifically, our FWNet achieved a mean recall of 0.73, a mean precision of 0.81, and a mean F1 score of 0.76. We further performed an ablation study on the above-mentioned metrics to assess the effectiveness of the components of our proposed method. Moreover, we investigated the effectiveness of our PointNet-based local and global feature extraction by visualizing the feature vectors. In this way, we have shown that our network for local and global feature extraction enables training for semantic segmentation without requiring expert knowledge of full-waveform lidar data or translation into 2D images or voxels.
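A minimal PointNet-style sketch of the architecture described above, not the authors' released code; the waveform length, class count, and layer widths are assumptions:

    import torch
    import torch.nn as nn

    class FWNetSketch(nn.Module):
        def __init__(self, in_dim=3 + 160, num_classes=6):
            # in_dim: xyz coordinates plus an assumed 160-sample waveform
            # attached to each point.
            super().__init__()
            self.local = nn.Sequential(
                nn.Conv1d(in_dim, 64, 1), nn.ReLU(),
                nn.Conv1d(64, 128, 1), nn.ReLU(),
                nn.Conv1d(128, 1024, 1), nn.ReLU())
            self.classifier = nn.Sequential(
                nn.Conv1d(2048, 256, 1), nn.ReLU(),
                nn.Conv1d(256, num_classes, 1))

        def forward(self, x):                  # x: (batch, in_dim, n_points)
            local = self.local(x)              # per-point local features
            glob = local.max(dim=2, keepdim=True).values   # global feature
            glob = glob.expand(-1, -1, x.shape[2])         # broadcast to points
            return self.classifier(torch.cat([local, glob], dim=1))

The shared 1x1 convolutions act as per-point MLPs, and the max-pool is the symmetric function that makes the global feature order-invariant, the core PointNet idea the abstract refers to.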


Author(s): G. Tran, D. Nguyen, M. Milenkovic, N. Pfeifer

Full-waveform (FWF) LiDAR (Light Detection and Ranging) systems have the advantage, compared to conventional airborne discrete-return laser scanners, of recording the entire backscattered signal of each emitted laser pulse. FWF systems can provide point clouds that carry extra attributes such as amplitude and echo width. In this study, an FWF dataset collected in 2010 over Eisenstadt, a city in eastern Austria, was used to classify four main classes, buildings, trees, water bodies, and ground, by employing a decision tree. Point density, echo ratio, echo width, normalised digital surface model, and point cloud roughness are the main inputs for classification. The accuracy of the final results, in terms of correctness and completeness measures, was assessed by comparing the classified output to a knowledge-based labelling of the points. Completeness and correctness between 90% and 97% were reached, depending on the class. While such results and methods have been presented before, we additionally investigate the transferability of the classification method (features, thresholds …) to another urban FWF lidar point cloud. Our conclusion is that, of the features used, only echo width requires new thresholds. A data-driven adaptation of thresholds is suggested.
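A hedged, rule-based sketch of such a threshold decision tree; the feature set follows the paper, but the threshold values below are illustrative, not the calibrated ones:

    def classify_point(p):
        # p: dict of per-point features (ndsm in m, echo_ratio in %,
        # echo_width in ns, roughness in m, point_density in pts/m^2).
        if p["ndsm"] < 0.5:                    # near the terrain surface
            return "water" if p["point_density"] < 1.0 else "ground"
        if p["echo_ratio"] > 95 and p["roughness"] < 0.2:
            return "building"                  # elevated, smooth, single echoes
        if p["echo_width"] > 4.0:              # widened echoes inside vegetation
            return "tree"
        return "ground"

Transferring the method to another FWF point cloud would, per the conclusion above, mainly require re-deriving the echo width threshold from the new data.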


Author(s): S. Briechle, P. Krzystek, G. Vosselman

Abstract. Knowledge of tree species distribution, and of dead wood in particular, is fundamental to managing our forests. Although individual tree-based approaches using lidar can successfully distinguish between deciduous and coniferous trees, the classification of multiple tree species is still limited in accuracy. Moreover, the combined mapping of standing dead trees after pest infestation is becoming increasingly important. New deep learning methods outperform baseline machine learning approaches and promise a significant accuracy gain for tree mapping. In this study, we classified multiple tree species (pine, birch, alder) and standing dead trees with crowns using the 3D deep neural network (DNN) PointNet++ together with UAV-based lidar data and multispectral (MS) imagery. Aside from 3D geometry, we also integrated laser echo pulse width values and MS features into the classification process. In a preprocessing step, we generated the 3D segments of single trees using a 3D detection method. Our approach achieved an overall accuracy (OA) of 90.2% and was clearly superior to a baseline method using a random forest classifier and handcrafted features (OA = 85.3%). All in all, we demonstrate that the performance of the 3D DNN is highly promising for the classification of multiple tree species and standing dead trees in practice.
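A small sketch of assembling the per-point input features described above for a PointNet++-style classifier; the array names and the per-segment centering step are assumptions:

    import numpy as np

    def build_features(xyz, pulse_width, ms_bands):
        # xyz: (N, 3) points of one tree segment; pulse_width: (N,) laser
        # echo pulse widths; ms_bands: (N, B) multispectral values sampled
        # at the point locations. Returns an (N, 4 + B) feature matrix.
        xyz_centered = xyz - xyz.mean(axis=0)   # center each tree segment
        return np.hstack([xyz_centered, pulse_width[:, None], ms_bands])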


2019 · Vol 79 (47-48) · pp. 35503-35518
Author(s): Huafeng Liu, Yazhou Yao, Zeren Sun, Xiangrui Li, Ke Jia, ...

Author(s): Mahmudul Hasan, Riku Goto, Junichi Hanawa, Hisato Fukuda, Yoshinori Kuno, ...

Sensors · 2021 · Vol 21 (14) · pp. 4734
Author(s): Patryk Mazurek, Tomasz Hachaj

In this paper, we propose a novel approach that enables simultaneous localization and mapping (SLAM) and object recognition from visual sensor data in open environments, and that is capable of working on sparse point clouds. In the proposed algorithm, ORB-SLAM uses the current and previous monocular video frames to determine the observer's position and to build a cloud of points that represents objects in the environment, while a deep neural network uses the current frame to detect and recognize objects (OR). In the next step, the sparse point cloud returned by the SLAM algorithm is compared with the area recognized by the OR network. Because each point in the 3D map has a counterpart in the current frame, the points matching the area recognized by the OR algorithm can be filtered out. A clustering algorithm then determines the areas in which points are densely distributed in order to detect the spatial positions of the objects found by OR. Finally, a principal component analysis (PCA)-based heuristic estimates the bounding boxes of the detected objects. The image processing pipeline that uses sparse point clouds generated by SLAM to position the objects recognized by the deep neural network, together with the PCA heuristic, are the main novelties of our solution. In contrast to state-of-the-art approaches, our algorithm does not require any additional computations, such as generating dense point clouds for object positioning, which greatly simplifies the task. We evaluated our approach on a large benchmark dataset using various state-of-the-art OR architectures (YOLO, MobileNet, RetinaNet) and clustering algorithms (DBSCAN and OPTICS), obtaining promising results. Both our source code and evaluation datasets are available for download, so our results can be easily reproduced.
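A minimal sketch of the clustering plus PCA-based bounding-box heuristic described above; the DBSCAN parameter values are assumptions:

    import numpy as np
    from sklearn.cluster import DBSCAN

    def oriented_boxes(points, eps=0.5, min_samples=10):
        # points: (N, 3) sparse map points that fall inside a detected
        # 2D region. Returns one (center, axes, extents) per dense cluster.
        labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
        boxes = []
        for k in set(labels) - {-1}:            # label -1 marks noise points
            cluster = points[labels == k]
            center = cluster.mean(axis=0)
            # Principal axes of the cluster via SVD of centered points.
            _, _, axes = np.linalg.svd(cluster - center, full_matrices=False)
            local = (cluster - center) @ axes.T  # cluster in principal frame
            boxes.append((center, axes, np.ptp(local, axis=0)))
        return boxes

Projecting each cluster into its principal frame and taking the per-axis ranges yields an oriented box without ever densifying the sparse SLAM map.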

