Polylidar3D – Fast Polygon Extraction from 3D Data

Sensors ◽  
2020 ◽  
Vol 20 (17) ◽  
pp. 4819
Author(s):  
Jeremy Castagno ◽  
Ella Atkins

Flat surfaces captured by 3D point clouds are often used for localization, mapping, and modeling. Dense point cloud processing has high computation and memory costs making low-dimensional representations of flat surfaces such as polygons desirable. We present Polylidar3D, a non-convex polygon extraction algorithm which takes as input unorganized 3D point clouds (e.g., LiDAR data), organized point clouds (e.g., range images), or user-provided meshes. Non-convex polygons represent flat surfaces in an environment with interior cutouts representing obstacles or holes. The Polylidar3D front-end transforms input data into a half-edge triangular mesh. This representation provides a common level of abstraction for subsequent back-end processing. The Polylidar3D back-end is composed of four core algorithms: mesh smoothing, dominant plane normal estimation, planar segment extraction, and finally polygon extraction. Polylidar3D is shown to be quite fast, making use of CPU multi-threading and GPU acceleration when available. We demonstrate Polylidar3D’s versatility and speed with real-world datasets including aerial LiDAR point clouds for rooftop mapping, autonomous driving LiDAR point clouds for road surface detection, and RGBD cameras for indoor floor/wall detection. We also evaluate Polylidar3D on a challenging planar segmentation benchmark dataset. Results consistently show excellent speed and accuracy.
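The planar segment extraction step described above can be illustrated with a minimal sketch: given a triangular mesh, keep the triangles whose normals align with a dominant plane normal. Function and variable names here are illustrative, not the Polylidar3D API.

```python
import numpy as np

def triangle_normals(vertices, triangles):
    """Unit normal of each triangle (N x 3)."""
    a = vertices[triangles[:, 0]]
    b = vertices[triangles[:, 1]]
    c = vertices[triangles[:, 2]]
    n = np.cross(b - a, c - a)
    return n / np.linalg.norm(n, axis=1, keepdims=True)

def planar_triangles(vertices, triangles, plane_normal, max_angle_deg=10.0):
    """Indices of triangles whose normal deviates from plane_normal
    by at most max_angle_deg."""
    normals = triangle_normals(vertices, triangles)
    cos_min = np.cos(np.radians(max_angle_deg))
    dots = np.abs(normals @ plane_normal)
    return np.nonzero(dots >= cos_min)[0]

# Toy mesh: two triangles in the z = 0 plane plus one tilted triangle.
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0],
                  [1, 1, 0], [2, 0, 1]], dtype=float)
tris = np.array([[0, 1, 2], [1, 3, 2], [1, 4, 3]])
flat = planar_triangles(verts, tris, np.array([0.0, 0.0, 1.0]))
print(flat)  # the two z=0 triangles
```

Polygon extraction then traces the boundary of the selected triangle set; that step is omitted here for brevity.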

Author(s):  
A. Torresani ◽  
F. Remondino

Abstract. In recent years we have witnessed an increasing quality (and quantity) of video streams and a growing capability of SLAM-based methods to derive 3D data from video. Video sequences can easily be acquired by non-expert surveyors and potentially used for 3D documentation purposes. The aim of the paper is to evaluate the possibility of performing 3D reconstructions of heritage scenarios using videos ("videogrammetry"), e.g. acquired with smartphones. Video frames are extracted from the sequence using a fixed-time interval and two advanced methods. Frames are then processed applying automated image orientation / Structure from Motion (SfM) and dense image matching / Multi-View Stereo (MVS) methods. The obtained 3D dense point clouds are then visually validated as well as compared with photogrammetric ground truth obtained by acquiring images with a reflex camera, or assessed by analysing the noise of the 3D data on flat surfaces.
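The fixed-time-interval frame selection mentioned above reduces to computing which frame numbers to keep. A minimal sketch, with illustrative values for frame rate, duration and sampling interval:

```python
def frame_indices(fps, duration_s, interval_s):
    """Frame numbers to extract when sampling one frame every
    interval_s seconds from a video of the given frame rate and
    duration."""
    step = max(1, round(fps * interval_s))
    total = int(fps * duration_s)
    return list(range(0, total, step))

# A 10 s clip at 30 fps, sampled every 2 s.
print(frame_indices(30, 10, 2))  # [0, 60, 120, 180, 240]
```

The two "advanced methods" of frame selection evaluated in the paper (e.g. sharpness- or overlap-driven selection) would replace this fixed stride with a content-dependent criterion.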


Electronics ◽  
2021 ◽  
Vol 10 (10) ◽  
pp. 1205
Author(s):  
Zhiyu Wang ◽  
Li Wang ◽  
Bin Dai

Object detection in 3D point clouds is still a challenging task in autonomous driving. Due to the inherent occlusion and density changes of point clouds, the data distribution of the same object can change dramatically. In particular, incomplete data affected by sparsity or occlusion cannot represent the complete characteristics of the object. In this paper, we propose a novel strong–weak feature alignment algorithm between complete and incomplete objects for 3D object detection, which explores the correlations within the data. It is an end-to-end adaptive network that does not require additional data and can easily be applied to other object detection networks. Through a complete-object feature extractor, we achieve a robust feature representation of the object. It serves as a guiding feature that helps the incomplete-object feature generator produce effective features. The strong–weak feature alignment algorithm reduces the gap between different states of the same object and enhances the ability to represent incomplete objects. The proposed adaptation framework is validated on the KITTI object benchmark and achieves about a 6% improvement in detection average precision at 3D moderate difficulty compared to the base model. The results show that our adaptation method improves the detection performance for incomplete 3D objects.


2019 ◽  
Vol 93 (3) ◽  
pp. 411-429 ◽  
Author(s):  
Maria Immacolata Marzulli ◽  
Pasi Raumonen ◽  
Roberto Greco ◽  
Manuela Persia ◽  
Patrizia Tartarino

Abstract Methods for the three-dimensional (3D) reconstruction of forest trees have been suggested for data from active and passive sensors. Laser scanner technologies have become popular in the last few years, despite their high costs. Thanks to improvements in photogrammetric algorithms (e.g. Structure from Motion, SfM), photographs have become a new low-cost source of 3D point clouds. In this study, we use images captured by a smartphone camera to calculate dense point clouds of a forest plot using SfM. Eighteen point clouds were produced by changing the densification parameters (Image scale, Point density, Minimum number of matches) in order to investigate their influence on the quality of the point clouds produced. In order to estimate diameter at breast height (d.b.h.) and stem volumes, we developed an automatic method that extracts the stems from the point cloud and then models them with cylinders. The results show that Image scale is the most influential parameter in terms of identifying and extracting trees from the point clouds. The best performance of cylinder modelling from point clouds compared to field data had an RMSE of 1.9 cm and 0.094 m3, for d.b.h. and volume, respectively. Thus, for forest management and planning purposes, it is possible to use our photogrammetric and modelling methods to measure d.b.h., stem volume and possibly other forest inventory metrics rapidly and without felling trees. The proposed methodology significantly reduces working time in the field, using ‘non-professional’ instruments and automating estimates of dendrometric parameters.
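One step of the stem-modelling pipeline described above can be sketched directly: estimating d.b.h. by fitting a circle to a horizontal slice of stem points at breast height. This uses the standard algebraic (Kasa) least-squares circle fit on synthetic data; names and values are illustrative, not the authors' exact procedure.

```python
import numpy as np

def fit_circle(xy):
    """Algebraic least-squares circle fit; returns (cx, cy, radius)."""
    x, y = xy[:, 0], xy[:, 1]
    # Solve x^2 + y^2 = 2*cx*x + 2*cy*y + c in the least-squares sense,
    # where c = r^2 - cx^2 - cy^2.
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    b = x**2 + y**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx**2 + cy**2)
    return cx, cy, r

# Synthetic slice: noisy points on a 15 cm radius stem centred at (1, 2).
rng = np.random.default_rng(0)
t = rng.uniform(0, 2 * np.pi, 200)
pts = np.column_stack([1 + 0.15 * np.cos(t), 2 + 0.15 * np.sin(t)])
pts += rng.normal(0, 0.002, pts.shape)  # ~2 mm point noise
cx, cy, r = fit_circle(pts)
print(round(2 * r, 3))  # estimated d.b.h. in metres, ~0.30
```

Stem volume then follows from stacking such fits (cylinders) along the stem axis.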


Author(s):  
E. Grilli ◽  
E. M. Farella ◽  
A. Torresani ◽  
F. Remondino

Abstract. In recent years, the application of artificial intelligence (machine learning and deep learning methods) to the classification of 3D point clouds has become an important task in modern 3D documentation and modelling applications. The identification of proper geometric and radiometric features is fundamental to classify 2D/3D data correctly. While many studies have been conducted in the geospatial field, the cultural heritage sector is still partly unexplored. In this paper we analyse the efficacy of geometric covariance features as a support for the classification of cultural heritage point clouds. To analyse the impact of the different features calculated on spherical neighbourhoods at various radius sizes, we present results obtained on four different heritage case studies using different feature configurations.
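The geometric covariance features discussed above are commonly derived from the eigenvalues of the local covariance matrix within a spherical neighbourhood, yielding linearity, planarity and sphericity descriptors. The sketch below uses one common formulation of these features, not necessarily the exact one used in the paper.

```python
import numpy as np

def covariance_features(neighbors):
    """Linearity, planarity, sphericity from a (N, 3) neighbourhood."""
    cov = np.cov(neighbors.T)
    l1, l2, l3 = sorted(np.linalg.eigvalsh(cov), reverse=True)
    linearity = (l1 - l2) / l1
    planarity = (l2 - l3) / l1
    sphericity = l3 / l1
    return linearity, planarity, sphericity

# A flat, noise-free patch should score high on planarity.
rng = np.random.default_rng(1)
patch = np.column_stack([rng.uniform(0, 1, 500),
                         rng.uniform(0, 1, 500),
                         np.zeros(500)])
lin, plan, sph = covariance_features(patch)
print(round(plan, 2))  # planarity close to 1 for a flat patch
```

Computing these features at several neighbourhood radii, as the paper does, just means re-running the function on neighbourhoods of different sizes and concatenating the results per point.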


Author(s):  
Grzegorz Gabara ◽  
Piotr Sawicki

The image-based point clouds generated from multiple differently oriented photos enable 3D object reconstruction in a wide spectrum of close-range applications. The paper presents the results of testing the accuracy of image-based point clouds generated under disadvantageous conditions of digital photogrammetric data processing. The subject of the study was a long-shaped object, i.e. a horizontal and rectilinear section of railway track. A DSLR Nikon D5100 camera, 16 MP, equipped with a zoom lens (f = 18–55 mm), was used to acquire the block of terrestrial convergent and very oblique photos at different scales, with full longitudinal overlap. The point clouds generated from digital images, the automatic determination of the interior orientation parameters, the spatial orientation of photos and the 3D distribution of discrete points were obtained using the successively tested software: RealityCapture, PhotoScan, VisualSFM+SURE and iWitness+SURE. The dense point clouds of the test object generated with the RealityCapture and PhotoScan applications were filtered using the MeshLab application. The geometric parameters of the test object were determined by means of CloudCompare software. The image-based dense point clouds allow, even under disadvantageous conditions of photogrammetric digital data processing, the geometric parameters of a close-range elongated object to be determined with high accuracy (mXYZ < 1 mm).


Author(s):  
A. Kharroubi ◽  
R. Hajji ◽  
R. Billen ◽  
F. Poux

Abstract. With the increasing volume of 3D applications using immersive technologies such as virtual, augmented and mixed reality, it is very interesting to create better ways to integrate unstructured 3D data such as point clouds as a data source. Indeed, this can lead to an efficient workflow from 3D capture to 3D immersive environment creation, without the need to derive a 3D model or run lengthy optimization pipelines. In this paper, the main focus is on the direct classification and integration of massive 3D point clouds in a virtual reality (VR) environment. The emphasis is put on leveraging open-source frameworks for easy replication of the findings. First, we develop a semi-automatic segmentation approach to provide semantic descriptors (mainly classes) to groups of points. We then build an octree data structure, leveraged through out-of-core algorithms, to continuously load in real time only the points that are in the VR user's field of view. Then, we provide an open-source solution using Unity with a user interface for VR point cloud interaction and visualisation. Finally, we provide a full semantic VR data integration enhanced through developed shaders for future spatio-semantic queries. We tested our approach on several datasets, including a point cloud composed of 2.3 billion points representing the heritage site of the castle of Jehay (Belgium). The results underline the efficiency and performance of the solution for visualizing classified massive point clouds in virtual environments at more than 100 frames per second.
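The idea behind the octree traversal described above can be shown with a toy sketch: visit only the nodes whose bounding cube intersects the viewer's region of interest (here a sphere standing in for the VR field of view), and skip whole subtrees otherwise. Structure and names are illustrative, not the paper's out-of-core implementation.

```python
import numpy as np

class OctreeNode:
    def __init__(self, center, half, points, leaf_size=64):
        self.center = np.asarray(center, float)
        self.half = half          # half the cube's side length
        self.children = []
        self.points = points
        if len(points) > leaf_size:
            side = points >= self.center            # (N, 3) booleans
            for octant in range(8):
                bits = np.array([(octant >> a) & 1 for a in range(3)], bool)
                sel = points[np.all(side == bits, axis=1)]
                if len(sel):
                    c = self.center + (bits * 2 - 1) * half / 2
                    self.children.append(
                        OctreeNode(c, half / 2, sel, leaf_size))
            self.points = np.empty((0, 3))          # internal node

    def visible_points(self, eye, radius):
        """Points within `radius` of `eye`, pruning whole subtrees
        whose cube cannot intersect the view sphere."""
        d = np.maximum(np.abs(eye - self.center) - self.half, 0)
        if np.linalg.norm(d) > radius:
            return np.empty((0, 3))
        if not self.children:
            pts = self.points
            return pts[np.linalg.norm(pts - eye, axis=1) <= radius]
        return np.vstack([c.visible_points(eye, radius)
                          for c in self.children])

rng = np.random.default_rng(2)
cloud = rng.uniform(-1, 1, (5000, 3))
tree = OctreeNode([0, 0, 0], 1.0, cloud)
near = tree.visible_points(np.zeros(3), 0.3)
brute = cloud[np.linalg.norm(cloud, axis=1) <= 0.3]
print(len(near) == len(brute))  # True
```

In the out-of-core setting, a pruned subtree is simply never read from disk, which is what keeps billions of points viewable in real time.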


Author(s):  
Shengbo Liu ◽  
Pengyuan Fu ◽  
Lei Yan ◽  
Jian Wu ◽  
Yandong Zhao

Deep learning classification based on 3D point clouds has gained considerable research interest in recent years. The classification and quantitative analysis of wood defects are of great significance to the wood processing industry. To address the slow processing and low robustness of 3D data, this paper proposes an improvement to the littlepoint CNN lightweight deep learning network, adding a BN layer, and tests it on a dataset we built ourselves. The new network, bnlittlepoint CNN, improves both speed and recognition rate. The recognition accuracy for non-defect logs and defect logs, as well as defect knots and dead knots, can reach 95.6%. Finally, the "dead knot" and "loose knot" defects are quantitatively analysed based on an "integral" idea, obtaining the volume and surface area of the defect with an error of no more than 1.5%, and the defect surface reconstruction is completed based on a triangulation approach.
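The "integral" idea for defect volume mentioned above amounts to slicing the defect along one axis and summing cross-section areas. A minimal sketch with a trapezoidal sum, on a synthetic cone rather than real wood-defect data:

```python
import numpy as np

def volume_by_slices(areas, dz):
    """Trapezoidal sum of evenly spaced cross-section areas."""
    return dz * (areas[0] / 2 + areas[1:-1].sum() + areas[-1] / 2)

# Cone of height 1 and base radius 1: area(z) = pi*(1 - z)^2,
# exact volume pi/3.
z = np.linspace(0, 1, 1001)
areas = np.pi * (1 - z) ** 2
vol = volume_by_slices(areas, z[1] - z[0])
print(round(vol, 4))  # close to pi/3 ~ 1.0472
```

For a real defect, `areas` would come from the cross-section of the segmented defect points at each height, and the surface area would follow analogously from slice perimeters or from the triangulated surface.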


2020 ◽  
Vol 34 (01) ◽  
pp. 954-962 ◽  
Author(s):  
Tzungyu Tsai ◽  
Kaichen Yang ◽  
Tsung-Yi Ho ◽  
Yier Jin

Previous work has shown that Deep Neural Networks (DNNs), including those currently in use in many fields, are extremely vulnerable to maliciously crafted inputs known as adversarial examples. Despite extensive and thorough research of adversarial examples in many areas, adversarial 3D data, such as point clouds, remain comparatively unexplored. The study of adversarial 3D data is crucial considering its impact on real-life, high-stakes scenarios including autonomous driving. In this paper, we propose a novel adversarial attack against PointNet++, a deep neural network that performs classification and segmentation tasks using features learned directly from raw 3D points. In comparison to existing works, our attack generates not only adversarial point clouds, but also robust adversarial objects that in turn generate adversarial point clouds when sampled both in simulation and after construction in the real world. We also demonstrate that our objects can bypass existing defense mechanisms designed especially against adversarial 3D data.
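The core mechanism of adversarial point clouds can be shown on a toy scale: nudge each point along the sign of the loss gradient (FGSM-style) until the classifier's decision flips. The linear "classifier" below is a stand-in for PointNet++, which this sketch does not implement; all names and values are illustrative.

```python
import numpy as np

def fgsm_points(points, w, eps):
    """Perturb points to reduce the score w . mean(points) + b.
    The score is linear in each coordinate, so its gradient w.r.t.
    every point is w / n; stepping against its sign lowers the score."""
    grad = np.tile(w / len(points), (len(points), 1))
    return points - eps * np.sign(grad)

w = np.array([1.0, 0.5, -0.2])
b = -1.0
cloud = np.ones((100, 3)) * 0.8
score = cloud.mean(0) @ w + b          # positive -> class "positive"
adv = fgsm_points(cloud, w, eps=0.1)
adv_score = adv.mean(0) @ w + b        # flipped to negative
print(score > 0, adv_score > 0)  # True False
```

The paper's contribution goes further: the perturbation is constrained so that a physically constructed object, when re-sampled by a sensor, still yields an adversarial point cloud.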


Author(s):  
F. He ◽  
A. Habib ◽  
A. Al-Rawabdeh

In this paper, we propose a new refinement procedure for semi-global dense image matching. In order to remove outliers and improve the disparity image derived from the semi-global algorithm, both a local smoothness constraint and point cloud segments are utilized. Compared with current refinement techniques, which usually assume correspondences between planar surfaces and 2D image segments, our proposed approach can effectively deal with objects that have both planar and curved surfaces. Meanwhile, since 3D point clouds contain more precise geometric information regarding the reconstructed objects, the planar surfaces identified by our approach can be more accurate. To illustrate the feasibility of our approach, several experimental tests are conducted on both the Middlebury test datasets and real UAV-image datasets. The results demonstrate that our approach performs well at improving the quality of the derived dense image-based point cloud.
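A common form of segment-based disparity refinement, of the kind discussed above, is to fit a plane d = a·u + b·v + c to the disparities within one segment and snap outliers back to the plane. The sketch below is a generic illustration of that step, not the authors' exact procedure.

```python
import numpy as np

def refine_segment(uv, disp, threshold=1.0):
    """Fit a plane to (u, v, disparity) within one segment and
    replace values deviating by more than `threshold` (outliers)."""
    A = np.column_stack([uv, np.ones(len(uv))])
    coeffs, *_ = np.linalg.lstsq(A, disp, rcond=None)
    fitted = A @ coeffs
    out = np.abs(disp - fitted) > threshold
    refined = disp.copy()
    refined[out] = fitted[out]
    return refined, out

# Planar segment d = 0.1*u + 0.2*v + 5 with one gross outlier.
u, v = np.meshgrid(np.arange(10), np.arange(10))
uv = np.column_stack([u.ravel(), v.ravel()]).astype(float)
disp = 0.1 * uv[:, 0] + 0.2 * uv[:, 1] + 5
disp[42] += 20.0
refined, out = refine_segment(uv, disp)
print(out.sum())  # 1
```

Handling curved surfaces, as the paper does, requires segments derived from the 3D point cloud rather than assuming every 2D image segment is planar.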


Sensors ◽  
2019 ◽  
Vol 19 (19) ◽  
pp. 4329 ◽  
Author(s):  
Guorong Cai ◽  
Zuning Jiang ◽  
Zongyue Wang ◽  
Shangfeng Huang ◽  
Kai Chen ◽  
...  

Semantic segmentation of 3D point clouds plays a vital role in autonomous driving, 3D maps, smart cities, etc. Recent work such as PointSIFT shows that spatial structure information can improve the performance of semantic segmentation. Motivated by this observation, we propose the Spatial Aggregation Net (SAN) for point cloud semantic segmentation. SAN is based on a multi-directional convolution scheme that utilizes the spatial structure information of the point cloud. Firstly, Octant-Search is employed to capture the neighboring points around each sampled point. Secondly, we use multi-directional convolution to extract information from different directions around the sampled points. Finally, max-pooling is used to aggregate the information from different directions. Experimental results on the ScanNet database show that the proposed SAN achieves results comparable with state-of-the-art algorithms such as PointNet, PointNet++ and PointSIFT. In particular, our method performs better on flat, small objects and on the edge areas that connect objects. Moreover, our model offers a good trade-off between segmentation accuracy and time complexity.
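An octant search like the one described above can be sketched as follows: neighbours are binned into the eight octants of the local coordinate frame around each query point, and the nearest neighbour per octant is kept, so every direction contributes to the subsequent convolution. Details are assumptions, not the paper's exact procedure.

```python
import numpy as np

def octant_search(query, points):
    """Index of the nearest neighbour of `query` in each of the 8
    octants around it (None where an octant is empty)."""
    offsets = points - query
    # Octant code 0..7: bit 0 = x > 0, bit 1 = y > 0, bit 2 = z > 0.
    octants = ((offsets > 0) * [1, 2, 4]).sum(axis=1)
    dists = np.linalg.norm(offsets, axis=1)
    result = [None] * 8
    for code in range(8):
        idx = np.nonzero(octants == code)[0]
        if len(idx):
            result[code] = idx[np.argmin(dists[idx])]
    return result

pts = np.array([[1, 1, 1], [2, 2, 2], [-1, 1, 1], [-1, -1, -1]], float)
res = octant_search(np.zeros(3), pts)
print(res)  # one index per octant, None where the octant is empty
```

The per-octant neighbours then feed eight direction-specific convolutions whose outputs are max-pooled, as the abstract describes.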

