Investigation of PointNet for Semantic Segmentation of Large-Scale Outdoor Point Clouds

Author(s):  
A. Nurunnabi ◽  
F. N. Teferle ◽  
J. Li ◽  
R. C. Lindenbergh ◽  
S. Parvaz

Abstract. Semantic segmentation of point clouds is indispensable for 3D scene understanding. Point clouds faithfully capture the geometry of objects, including shape, size, and orientation. Deep learning (DL) is widely recognized as the most successful approach to image semantic segmentation, but the performance of many DL algorithms degrades when applied to point clouds, which are often sparse and irregularly structured. As a result, point clouds are commonly first transformed into voxel grids or image collections. PointNet was the first promising algorithm to feed point clouds directly into a DL architecture. Although PointNet achieved remarkable performance on indoor point clouds, its performance on large-scale outdoor point clouds has not been studied extensively. To the best of our knowledge, no study on large-scale aerial point clouds has investigated the sensitivity of the hyper-parameters used in PointNet. This paper evaluates PointNet’s semantic segmentation performance on three large-scale Airborne Laser Scanning (ALS) point clouds of urban environments. The reported results show that PointNet has potential for large-scale outdoor scene semantic segmentation. A notable limitation of PointNet is that it does not consider the local structure induced by the metric space of neighboring points. Our experiments show that PointNet is highly sensitive to hyper-parameters such as batch size, block partition, and the number of points per block. For one ALS dataset, we observe a significant difference between overall accuracies of 67.5% and 72.8% for block sizes of 5 m × 5 m and 10 m × 10 m, respectively. The results also show that PointNet’s performance depends on the selection of input vectors.
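To make the block-partition and points-per-block hyper-parameters concrete, below is a minimal sketch of the usual preprocessing for feeding ALS data to PointNet: the scene is tiled into fixed-size blocks on the ground plane and each block is sampled to a fixed point count. Function names and parameter values are illustrative, not the authors' exact pipeline.

```python
import numpy as np

def partition_into_blocks(points, block_size=10.0, points_per_block=4096, seed=0):
    """points: (N, 3+) array with x, y in the first two columns.
    Returns an array of shape (n_blocks, points_per_block, 3+)."""
    rng = np.random.default_rng(seed)
    xy_min = points[:, :2].min(axis=0)
    # Integer block index of every point along x and y.
    ij = np.floor((points[:, :2] - xy_min) / block_size).astype(np.int64)
    blocks = []
    for key in np.unique(ij, axis=0):
        idx = np.flatnonzero(np.all(ij == key, axis=1))
        # Sample with replacement if the block holds fewer points than needed,
        # without replacement otherwise, so every block has a fixed size.
        choice = rng.choice(idx, size=points_per_block,
                            replace=idx.size < points_per_block)
        blocks.append(points[choice])
    return np.stack(blocks)
```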

Author(s):  
J. Gehrung ◽  
M. Hebel ◽  
M. Arens ◽  
U. Stilla

Mobile laser scanning has the potential not only to create detailed representations of urban environments, but also to detect changes down to a very fine level of detail. A point-cloud-based environment representation for change detection in large-scale urban environments has drawbacks in terms of memory scalability. Volumes, however, are a promising building block for memory-efficient change detection methods. The challenge of working with 3D occupancy grids is that the usual raycasting-based methods used to generate them produce artifacts caused by the traversal of unfavorably discretized space. These artifacts can distort the state of voxels in close proximity to planar structures. In this work we propose a raycasting approach that exploits knowledge about planar surfaces to prevent this kind of artifact entirely. To demonstrate the capabilities of our approach, we also propose a method for the iterative volumetric approximation of point clouds that speeds up the raycasting by 36 percent.
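For context, the sketch below shows the standard voxel traversal (a 3D digital differential analyzer in the style of Amanatides and Woo) that raycasting-based occupancy grid generation builds on; it is exactly this uniform stepping through discretized space that produces the artifacts near planar surfaces which the paper addresses. The planar-surface-aware variant itself is not reproduced here.

```python
import numpy as np

def traverse_voxels(origin, endpoint, voxel_size=0.5):
    """Yield the integer indices of all voxels crossed by the ray from
    origin to endpoint (standard 3D DDA traversal)."""
    origin = np.asarray(origin, dtype=float)
    endpoint = np.asarray(endpoint, dtype=float)
    direction = endpoint - origin
    direction /= np.linalg.norm(direction)
    voxel = np.floor(origin / voxel_size).astype(np.int64)
    last = np.floor(endpoint / voxel_size).astype(np.int64)
    step = np.where(direction >= 0, 1, -1)
    # Parametric distance to the first grid-plane crossing on each axis,
    # and the distance between successive crossings.
    next_bound = (voxel + (step > 0)) * voxel_size
    with np.errstate(divide="ignore", invalid="ignore"):
        t_max = np.where(direction != 0, (next_bound - origin) / direction, np.inf)
        t_delta = np.where(direction != 0, voxel_size / np.abs(direction), np.inf)
    yield voxel.copy()
    # The Manhattan distance between start and end voxel equals the number
    # of boundary crossings, so it bounds the loop.
    for _ in range(int(np.abs(last - voxel).sum())):
        axis = int(np.argmin(t_max))       # cross the nearest grid plane
        voxel[axis] += step[axis]
        t_max[axis] += t_delta[axis]
        yield voxel.copy()
```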


Author(s):  
Jian Wu ◽  
Qingxiong Yang

In this paper, we study the semantic segmentation of 3D LiDAR point clouds in urban environments for autonomous driving, and propose a method that exploits surface information from the ground plane. In practice, the resolution of a LiDAR sensor installed on a self-driving vehicle is relatively low, so the acquired point cloud is quite sparse. While recent work on dense point cloud segmentation has achieved promising results, performance drops considerably when such methods are applied directly to sparse point clouds. This paper focuses on semantic segmentation of sparse point clouds obtained from a 32-channel LiDAR sensor with deep neural networks. The main contribution is the integration of ground information, which is used to group ground points that lie far away from each other. Qualitative and quantitative experiments on two large-scale point cloud datasets show that the proposed method outperforms the current state of the art.
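The paper integrates ground information into a deep segmentation network; as a point of reference, the snippet below sketches one common way such a ground estimate can be obtained, a plain RANSAC plane fit. It is a generic stand-in, not the authors' method, and all thresholds are illustrative.

```python
import numpy as np

def ransac_ground_plane(points, n_iter=200, dist_thresh=0.2, seed=0):
    """Fit a dominant plane (assumed to be the ground) with a plain RANSAC
    loop. Returns the plane (normal, d) with n.x + d = 0 and the inlier mask."""
    rng = np.random.default_rng(seed)
    best_model, best_mask = None, None
    for _ in range(n_iter):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue  # degenerate (collinear) sample
        normal /= norm
        d = -normal @ sample[0]
        mask = np.abs(points @ normal + d) < dist_thresh  # point-plane distance
        if best_mask is None or mask.sum() > best_mask.sum():
            best_model, best_mask = (normal, d), mask
    return best_model, best_mask
```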


Author(s):  
S. Schmohl ◽  
U. Sörgel

Abstract. Semantic segmentation of point clouds is one of the main steps in the automated processing of data from Airborne Laser Scanning (ALS). Established methods usually require the expensive calculation of handcrafted, point-wise features. In contrast, Convolutional Neural Networks (CNNs) have been established as powerful classifiers that also learn a set of features by themselves. However, their application to ALS data is not trivial. Pure 3D CNNs require a lot of memory and computing time, so most related approaches project ALS point clouds into two-dimensional images. Sparse Submanifold Convolutional Networks (SSCNs) address this issue by exploiting the sparsity often inherent in 3D data. In this work, we propose the application of SSCNs for efficient semantic segmentation of voxelized ALS point clouds in an end-to-end encoder-decoder architecture. We evaluate this method on the ISPRS Vaihingen 3D Semantic Labeling benchmark and achieve state-of-the-art 85.0% overall accuracy. Furthermore, we demonstrate its capabilities on large-scale ALS data by classifying a 2.5 km² subset containing 41 M points from the Actueel Hoogtebestand Nederland (AHN3) with 95% overall accuracy in just 48 s of inference time, or with 96% in 108 s.
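Sparse convolution libraries typically consume a list of occupied voxel coordinates plus per-voxel features rather than a dense grid. Below is a minimal, library-agnostic voxelization sketch (numpy only) that produces this (coords, feats) input format, averaging the features of all points that fall into the same voxel; the voxel size is an illustrative assumption.

```python
import numpy as np

def voxelize(points, features, voxel_size=1.0):
    """Collapse points (N, 3) and per-point features (N, C) into unique voxel
    coordinates with mean features, the usual sparse-convolution input."""
    coords = np.floor(points / voxel_size).astype(np.int64)
    uniq, inverse = np.unique(coords, axis=0, return_inverse=True)
    inverse = inverse.reshape(-1)
    counts = np.bincount(inverse, minlength=len(uniq)).astype(float)
    feats = np.zeros((len(uniq), features.shape[1]))
    for c in range(features.shape[1]):
        # Sum each feature channel per voxel, then divide by the point count.
        feats[:, c] = np.bincount(inverse, weights=features[:, c],
                                  minlength=len(uniq)) / counts
    return uniq, feats
```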


Author(s):  
Joachim Gehrung ◽  
Marcus Hebel ◽  
Michael Arens ◽  
Uwe Stilla

The generation of 3D city models is a very active field of research. Modeling environments as point clouds may be fast, but it has disadvantages that are readily resolved by volumetric representations, especially when considering selective data acquisition, change detection, and fast-changing environments. This paper therefore proposes a framework for the volumetric modeling and visualization of large-scale urban environments. Besides an architecture and the right mix of algorithms for the task, we propose two compression strategies for volumetric models as well as a data-quality-based approach for the import of range measurements. The capabilities of the framework are demonstrated on a mobile laser scanning dataset of the Technical University of Munich. Furthermore, the loss introduced by the compression techniques is evaluated, and their memory consumption is compared to that of raw point clouds. The presented results show that the generation, storage, and real-time rendering of even large urban models are feasible with off-the-shelf hardware.
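The abstract does not spell out the two compression strategies; as a hedged illustration of the general idea behind lossless volumetric compression, the sketch below prunes an octree by merging eight leaf children that share the same occupancy state into a single leaf. The Node layout is hypothetical, not the paper's data structure.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Node:
    state: Optional[str] = None              # 'free' or 'occupied' for leaves
    children: Optional[List["Node"]] = None  # eight children for inner nodes

def prune(node: Node) -> Node:
    """Recursively merge eight leaf children sharing one occupancy state into
    a single leaf: a basic lossless compression step for occupancy octrees."""
    if node.children is None:
        return node
    node.children = [prune(c) for c in node.children]
    if all(c.children is None for c in node.children):
        states = {c.state for c in node.children}
        if len(states) == 1:
            return Node(state=states.pop())  # collapse homogeneous subtree
    return node
```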


Author(s):  
F. Li ◽  
S. Oude Elberink ◽  
G. Vosselman

Road furniture semantic labelling is vital for large-scale mapping and autonomous driving systems. Much research has been devoted to road furniture interpretation in both 2D images and 3D point clouds, yet the precise interpretation of road furniture in mobile laser scanning data remains largely unexplored. In this paper, a novel method is proposed to interpret road furniture based on logical relations and functionalities. Our work represents the most detailed interpretation of road furniture in mobile laser scanning data to date: 93.3 % of poles are correctly extracted, and all of them are correctly recognised; 94.3 % of street light heads are detected, and 76.9 % of them are correctly identified. Despite errors arising from the recognition of other components, our framework provides a promising solution for automatically mapping road furniture at a detailed level in urban environments.
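The interpretation in the paper relies on logical relations and functionalities of components; as a much cruder baseline for the extraction step only, the sketch below flags pole-like candidates as fine horizontal grid cells whose points span a large vertical extent. All thresholds are illustrative assumptions.

```python
import numpy as np

def candidate_pole_cells(points, cell=0.5, min_height=2.0):
    """Group points (N, 3) into fine 2D grid cells and flag cells with a tall
    vertical range; the small cell size itself bounds the horizontal footprint,
    crudely matching the 'tall and thin' nature of poles."""
    xy_min = points[:, :2].min(axis=0)
    ij = np.floor((points[:, :2] - xy_min) / cell).astype(np.int64)
    _, inverse = np.unique(ij, axis=0, return_inverse=True)
    inverse = inverse.reshape(-1)
    candidates = []
    for k in range(inverse.max() + 1):
        cell_pts = points[inverse == k]
        if cell_pts[:, 2].max() - cell_pts[:, 2].min() >= min_height:
            candidates.append(cell_pts)
    return candidates
```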


Author(s):  
S. Tanaka ◽  
K. Hasegawa ◽  
N. Okamoto ◽  
R. Umegaki ◽  
S. Wang ◽  
...  

We propose a method for the precise 3D see-through imaging, or transparent visualization, of the large-scale and complex point clouds acquired via the laser scanning of 3D cultural heritage objects. Our method is based on a stochastic algorithm and directly uses the 3D points, which are acquired using a laser scanner, as the rendering primitives. This method achieves the correct depth feel without requiring depth sorting of the rendering primitives along the line of sight. Eliminating this need allows us to avoid long computation times when creating natural and precise 3D see-through views of laser-scanned cultural heritage objects. The opacity of each laser-scanned object is also flexibly controllable. For a laser-scanned point cloud consisting of more than 10⁷ or 10⁸ 3D points, the pre-processing requires only a few minutes, and the rendering can be executed at interactive frame rates. Our method enables the creation of cumulative 3D see-through images of time-series laser-scanned data. It also offers the possibility of fused visualization for observing a laser-scanned object behind a transparent high-quality photographic image placed in the 3D scene. We demonstrate the effectiveness of our method by applying it to festival floats of high cultural value. These festival floats have complex outer and inner 3D structures and are suitable for see-through imaging.
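The key idea, transparency without depth sorting, can be illustrated with a toy ensemble-average renderer: the point cloud is split into random subsets, each subset is rendered opaquely with an ordinary z-buffer, and the frames are averaged, so occluded surfaces still contribute to the final pixel values. This orthographic numpy sketch conveys only the general principle, not the authors' stochastic algorithm or its opacity control.

```python
import numpy as np

def ensemble_transparent_render(points, colors, n_ensembles=50,
                                image_size=(256, 256), seed=0):
    """Average many opaque z-buffer renders of random point subsets to get a
    see-through image without any depth sorting (orthographic toy renderer)."""
    rng = np.random.default_rng(seed)
    h, w = image_size
    # Orthographic projection along z: map x, y into pixel coordinates.
    mins = points[:, :2].min(axis=0)
    span = points[:, :2].max(axis=0) - mins + 1e-12
    px = ((points[:, :2] - mins) / span * [w - 1, h - 1]).astype(np.int64)
    group = rng.integers(0, n_ensembles, size=len(points))
    accum = np.zeros((h, w, 3))
    for g in range(n_ensembles):
        frame = np.ones((h, w, 3))              # white background
        zbuf = np.full((h, w), -np.inf)
        for i in np.flatnonzero(group == g):    # opaque z-buffer pass
            x, y = px[i]
            if points[i, 2] > zbuf[y, x]:
                zbuf[y, x] = points[i, 2]
                frame[y, x] = colors[i]
        accum += frame
    return accum / n_ensembles                  # ensemble average
```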


Forests ◽  
2021 ◽  
Vol 12 (7) ◽  
pp. 835
Author(s):  
Ville Luoma ◽  
Tuomas Yrttimaa ◽  
Ville Kankare ◽  
Ninni Saarinen ◽  
Jiri Pyörälä ◽  
...  

Tree growth is a multidimensional process affected by several factors, and there is a continuous demand for better information on tree growth and the ecological traits controlling it. This study aims to provide new approaches for improving the ecological understanding of tree growth by means of terrestrial laser scanning (TLS). Changes in tree stem form and stem volume allocation were investigated over a five-year monitoring period. In total, a selection of attributes from 736 trees on 37 sample plots representing different forest structures was extracted from taper curves derived from two-date TLS point clouds. The results show the capability of point-cloud-based methods to detect changes in stem form and volume allocation. In addition, the results reveal a significant difference between forest structures in how relative stem volume and logwood volume increased during the monitoring period. Beyond providing more accurate information for monitoring purposes in general, the findings demonstrate the ability of point-cloud-based methods to characterize changes in living organisms, which further supports the feasibility of using point clouds as an observation method in ecological studies.
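Stem volume attributes of the kind extracted here follow from a taper curve by integrating cross-sectional area over height. A minimal worked example (the linear taper and the dimensions are made up for illustration):

```python
import numpy as np

def stem_volume_from_taper(heights, diameters):
    """Integrate cross-sectional area along the stem (trapezoidal rule) to get
    stem volume from a taper curve, i.e. diameter as a function of height."""
    radii = np.asarray(diameters, dtype=float) / 2.0
    areas = np.pi * radii**2
    return np.trapz(areas, np.asarray(heights, dtype=float))

# Example: a 20 m stem tapering linearly from 30 cm diameter to 0 cm.
h = np.linspace(0.0, 20.0, 41)
d = np.linspace(0.30, 0.0, 41)
print(f"volume ≈ {stem_volume_from_taper(h, d):.3f} m^3")  # ≈ 0.471, the cone volume
```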


2022 ◽  
Vol 193 ◽  
pp. 106653
Author(s):  
Hejun Wei ◽  
Enyong Xu ◽  
Jinlai Zhang ◽  
Yanmei Meng ◽  
Jin Wei ◽  
...  

2019 ◽  
Vol 11 (12) ◽  
pp. 1453
Author(s):  
Shanxin Zhang ◽  
Cheng Wang ◽  
Lili Lin ◽  
Chenglu Wen ◽  
Chenhui Yang ◽  
...  

Maintaining the high visual recognizability of traffic signs is a key issue for road network management and traffic safety. Mobile Laser Scanning (MLS) systems provide an efficient way of acquiring 3D measurements over large-scale traffic environments. This paper presents a quantitative method for evaluating the visual recognizability of traffic signs in large-scale traffic environments, based on traffic recognition theory and MLS 3D point clouds. We first propose a Visibility Evaluation Model (VEM) to quantitatively describe the visibility of a traffic sign from any given viewpoint; we then introduce the concept of a visual recognizability field and a Traffic Sign Visual Recognizability Evaluation Model (TSVREM) to measure the visual recognizability of a traffic sign. Finally, we present an automatic TSVREM calculation algorithm for MLS 3D point clouds. Experimental results on real MLS 3D point clouds show that the proposed method is feasible and efficient.
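The VEM itself is defined in the paper; as a rough illustration of the underlying geometric question, the sketch below estimates the visible fraction of a sign's points from a viewpoint by a brute-force line-of-sight test against occluder points. The tolerance parameter and the whole formulation are assumptions for illustration, not the authors' model.

```python
import numpy as np

def visible_fraction(sign_points, occluders, viewpoint, tolerance=0.1):
    """Fraction of sign points whose line of sight to the viewpoint is not
    blocked by any occluder point (brute-force stand-in for a visibility test)."""
    visible = 0
    for p in sign_points:
        ray = p - viewpoint
        dist = np.linalg.norm(ray)
        ray = ray / dist
        # Project occluders onto the ray and measure their offset from it.
        rel = occluders - viewpoint
        t = rel @ ray
        closest = np.linalg.norm(rel - np.outer(t, ray), axis=1)
        blocked = np.any((t > 0) & (t < dist - tolerance) & (closest < tolerance))
        visible += not blocked
    return visible / len(sign_points)
```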

