Real-Time Plane Detection with Consistency from Point Cloud Sequences

Sensors ◽  
2020 ◽  
Vol 21 (1) ◽  
pp. 140
Author(s):  
Jinxuan Xu ◽  
Qian Xie ◽  
Honghua Chen ◽  
Jun Wang

Real-time consistent plane detection (RCPD) from structured point cloud sequences facilitates various high-level computer vision and robotic tasks. However, it remains a challenge. Existing techniques for plane detection suffer from long running times or imprecise detection results. Meanwhile, plane labels are not consistent over the whole image sequence due to plane loss in the detection stage. To resolve these issues, we propose a novel superpixel-based real-time plane detection approach that simultaneously keeps plane labels consistent across frames. In summary, our method makes the following key contributions: (i) a real-time plane detection algorithm to extract planes from raw structured three-dimensional (3D) point clouds collected by depth sensors; (ii) a superpixel-based segmentation method to make each detected plane exactly match its actual boundary; and (iii) a robust strategy to recover missing planes by utilizing the contextual correspondence information in adjacent frames. Extensive visual and numerical experiments demonstrate that our method outperforms state-of-the-art methods in terms of efficiency and accuracy.
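As a rough illustration of the plane-extraction step only (a generic RANSAC fit, not the authors' superpixel pipeline; the iteration count and distance threshold are assumptions), a minimal sketch:

```python
import numpy as np

def ransac_plane(points, n_iters=200, dist_thresh=0.01, rng=None):
    """Find the dominant plane in an (N, 3) point array with RANSAC."""
    rng = rng or np.random.default_rng(0)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        # Fit a candidate plane through three random points.
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:  # skip degenerate (collinear) samples
            continue
        # Keep the candidate supported by the most inliers.
        dist = np.abs((points - p0) @ (normal / norm))
        inliers = dist < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers
```

Repeatedly removing inliers and refitting yields multiple planes; the per-frame label consistency described in the abstract would additionally require matching planes across frames.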

2021 ◽  
Vol 11 (13) ◽  
pp. 5941
Author(s):  
Mun-yong Lee ◽  
Sang-ha Lee ◽  
Kye-dong Jung ◽  
Seung-hyun Lee ◽  
Soon-chul Kwon

Computer-based data processing capabilities have evolved to handle vast amounts of information. As such, the complexity of three-dimensional (3D) models (e.g., animations or real-time voxels) containing large volumes of information has increased exponentially. This rapid increase in complexity has led to problems with recording and transmission. In this study, we propose a method of efficiently managing and compressing animation information stored in 3D point-cloud sequences. A compressed point cloud is created by reconfiguring the points based on their voxels. Compared with the original point cloud, noise caused by errors is removed, and a preprocessing procedure that achieves high performance in a redundancy-processing algorithm is proposed. The results of experiments and rendering demonstrate an average file-size reduction of 40% using the proposed algorithm. Moreover, 13% of the overlapping data is extracted and removed, further reducing the file size.
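The core voxel-reconfiguration idea can be sketched as collapsing all points in each occupied voxel to a single centroid, which removes duplicates and dampens sensor noise (a minimal sketch; the voxel size is an assumption and this is not the authors' exact codec):

```python
import numpy as np

def voxel_compress(points, voxel_size=0.01):
    """Collapse an (N, 3) cloud to one centroid per occupied voxel."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    # inverse maps every point to the index of its voxel.
    _, inverse, counts = np.unique(keys, axis=0,
                                   return_inverse=True, return_counts=True)
    centroids = np.zeros((counts.size, 3))
    np.add.at(centroids, inverse, points)  # sum the points per voxel
    return centroids / counts[:, None]     # then average them
```

Across a sequence, voxel keys that repeat in consecutive frames are the overlap candidates that can be dropped from later frames.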


2014 ◽  
Vol 2014 ◽  
pp. 1-9 ◽  
Author(s):  
Seoungjae Cho ◽  
Jonghyun Kim ◽  
Warda Ikram ◽  
Kyungeun Cho ◽  
Young-Sik Jeong ◽  
...  

A ubiquitous environment for road travel that uses wireless networks requires the minimization of data exchange between vehicles. An algorithm that can segment the ground in real time is necessary to obtain location data between vehicles simultaneously executing autonomous driving. This paper proposes a framework for segmenting the ground in real time using a sparse three-dimensional (3D) point cloud acquired from undulating terrain. A sparse 3D point cloud can be acquired by scanning the geography using light detection and ranging (LiDAR) sensors. For efficient ground segmentation, 3D point clouds are quantized in units of volume pixels (voxels) and overlapping data is eliminated. We reduce nonoverlapping voxels to two dimensions by implementing a lowermost heightmap. The ground area is determined on the basis of the number of voxels in each voxel group. We achieve real-time ground segmentation by proposing an approach that minimizes comparisons between neighboring voxels. Furthermore, we experimentally verify that ground segmentation can be executed in about 19.31 ms per frame.
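A minimal sketch of the lowermost-heightmap idea (cell size and height tolerance are assumptions, and the voxel-group counting step is omitted):

```python
import numpy as np

def ground_mask(points, cell=0.5, height_tol=0.2):
    """Mark points close to the lowest height in their (x, y) cell as ground."""
    ij = np.floor(points[:, :2] / cell).astype(np.int64)
    _, inv = np.unique(ij, axis=0, return_inverse=True)
    lowest = np.full(inv.max() + 1, np.inf)
    np.minimum.at(lowest, inv, points[:, 2])   # lowermost height per cell
    return points[:, 2] - lowest[inv] < height_tol
```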


Sensors ◽  
2020 ◽  
Vol 21 (1) ◽  
pp. 201
Author(s):  
Michael Bekele Maru ◽  
Donghwan Lee ◽  
Kassahun Demissie Tola ◽  
Seunghee Park

Modeling a structure in the virtual world using three-dimensional (3D) information enhances our understanding of how a structure reacts to any disturbance, while also aiding its visualization. Generally, 3D point clouds are used for determining structural behavioral changes. Light detection and ranging (LiDAR) is one of the crucial ways by which a 3D point cloud dataset can be generated. Additionally, 3D cameras are commonly used to develop a point cloud containing many points on the external surface of an object. The main objective of this study was to compare the performance of two optical sensors, namely a depth camera (DC) and a terrestrial laser scanner (TLS), in estimating structural deflection. We also applied bilateral filtering, a technique commonly used in image processing, to the point cloud data to enhance their accuracy and increase the application prospects of these sensors in structural health monitoring. The results from these sensors were validated by comparing them with the outputs from a linear variable differential transformer sensor, which was mounted on the beam during an indoor experiment. The results showed that the datasets obtained from both sensors were acceptable for nominal deflections of 3 mm and above because the error range was less than ±10%. However, the results obtained from the TLS were better than those obtained from the DC.
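Bilateral filtering smooths measurement noise while preserving depth edges; a minimal sketch over a 2D depth map (the window radius and the two sigmas are assumptions, not values from the paper):

```python
import numpy as np

def bilateral_filter_depth(depth, radius=3, sigma_s=2.0, sigma_r=0.005):
    """Edge-preserving smoothing of a 2D depth map (values in metres)."""
    H, W = depth.shape
    pad = np.pad(depth, radius, mode='edge')
    acc = np.zeros((H, W))
    wsum = np.zeros((H, W))
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = pad[radius + dy:radius + dy + H,
                          radius + dx:radius + dx + W]
            # Spatial closeness times depth similarity (the bilateral kernel).
            w = (np.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))
                 * np.exp(-((shifted - depth) ** 2) / (2 * sigma_r ** 2)))
            acc += w * shifted
            wsum += w
    return acc / wsum
```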


2021 ◽  
Vol 13 (24) ◽  
pp. 5071
Author(s):  
Jing Zhang ◽  
Jiajun Wang ◽  
Da Xu ◽  
Yunsong Li

The use of LiDAR point clouds for accurate three-dimensional perception is crucial for realizing high-level autonomous driving systems. Considering the drawbacks of current point cloud object-detection algorithms, this paper proposes HCNet, an algorithm that combines an attention mechanism with adaptive adjustment, starting from feature fusion to overcome the sparse and uneven distribution of point clouds. Inspired by the basic idea of an attention mechanism, a feature-fusion structure, the HC module, with height attention and channel attention weighted in parallel, is proposed to perform feature fusion on multiple pseudo-images. The use of several weighting mechanisms enhances the expressiveness of the feature information. Additionally, we designed an adaptively adjusted detection head that also overcomes the sparsity of the point cloud from the perspective of original-information fusion, and reduces the interference caused by the uneven distribution of the point cloud through adaptive adjustment. The results show that our HCNet achieves better accuracy than other one-stage networks, and even two-stage R-CNN-based networks, under several detection metrics, while running at 30 FPS. In particular, for hard samples, the algorithm in this paper has better detection performance than many existing algorithms.
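The parallel height/channel weighting can be sketched as two squeeze-and-gate branches over a (C, H, W) pseudo-image. This whole function is an assumption about the HC module's structure, not the published code, and `w_c`/`w_h` stand in for learned projection matrices:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def hc_fusion(feat, w_c, w_h):
    """feat: (C, H, W) pseudo-image; w_c: (C, C) and w_h: (H, H) are
    stand-ins for learned weights. Channel and height attention are
    computed in parallel and their re-weighted features summed."""
    c_desc = feat.mean(axis=(1, 2))      # (C,) global average pool
    c_gate = sigmoid(w_c @ c_desc)       # channel attention weights
    h_desc = feat.mean(axis=(0, 2))      # (H,) pool over channels/width
    h_gate = sigmoid(w_h @ h_desc)       # height attention weights
    return feat * c_gate[:, None, None] + feat * h_gate[None, :, None]
```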


Author(s):  
E. S. Malinverni ◽  
R. Pierdicca ◽  
M. Paolanti ◽  
M. Martini ◽  
C. Morbidoni ◽  
...  

Abstract. Cultural Heritage is a testimony of past human activity and, as such, its objects exhibit great variety in their nature, size and complexity: from small artefacts and museum items to cultural landscapes, from historical buildings and ancient monuments to city centres and archaeological sites. Cultural Heritage around the globe suffers from wars, natural disasters and human negligence. The importance of digital documentation is well recognized, and there is increasing pressure to document our heritage both nationally and internationally. For this reason, the three-dimensional scanning and modeling of sites and artifacts of cultural heritage have increased remarkably in recent years. The semantic segmentation of point clouds is an essential step of the entire pipeline; in fact, it allows complex architectures to be decomposed into single elements, which are then enriched with meaningful information within Building Information Modelling software. Notwithstanding, this step is very time consuming and entrusted entirely to the manual work of domain experts, far from being automated. This work describes a method to automatically label and cluster a point cloud based on a supervised Deep Learning approach, using a state-of-the-art Neural Network called PointNet++. Although other methods are known, we chose PointNet++ because it has achieved significant results in classifying and segmenting 3D point clouds. PointNet++ has been tested and improved by training the network with annotated point clouds from a real survey and evaluating how performance changes according to the input training data. This work can be of great interest for the research community dealing with point cloud semantic segmentation, since it makes public a labelled dataset of CH elements for further tests.
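PointNet++ builds its set-abstraction levels on farthest point sampling to place well-spread centroids; a minimal numpy sketch of that sampling step (the rest of the network is not reproduced here):

```python
import numpy as np

def farthest_point_sampling(points, k, seed=0):
    """Greedily pick k well-spread indices from an (N, 3) cloud,
    as used by PointNet++ set abstraction to place centroids."""
    rng = np.random.default_rng(seed)
    chosen = [int(rng.integers(len(points)))]
    dist = np.full(len(points), np.inf)
    for _ in range(k - 1):
        # Distance of every point to its nearest already-chosen point.
        dist = np.minimum(dist,
                          np.linalg.norm(points - points[chosen[-1]], axis=1))
        chosen.append(int(dist.argmax()))  # farthest remaining point
    return np.asarray(chosen)
```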


Sensors ◽  
2018 ◽  
Vol 18 (11) ◽  
pp. 3681 ◽  
Author(s):  
Le Zhang ◽  
Jian Sun ◽  
Qiang Zheng

The recognition of three-dimensional (3D) lidar (light detection and ranging) point clouds remains a significant issue in point cloud processing. Traditional point cloud recognition employs the 3D point clouds of the whole object. Nevertheless, lidar data is a collection of two-and-a-half-dimensional (2.5D) point clouds (each 2.5D point cloud comes from a single view) obtained by scanning the object within a certain field angle. To deal with this problem, we first propose a novel representation that expresses 3D point clouds using 2.5D point clouds from multiple views, and then generate multi-view 2.5D point cloud data based on the Point Cloud Library (PCL). Subsequently, we design an effective recognition model based on a multi-view convolutional neural network. The model acts directly on the raw 2.5D point clouds from all views and learns a global feature descriptor by fusing the features from all views with a view-fusion network. Experiments prove that our approach achieves excellent recognition performance without any requirement for 3D reconstruction or the preprocessing of point clouds. In conclusion, this approach effectively solves the recognition problem of lidar point clouds and provides practical value.
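Generating a single-view 2.5D cloud from a full 3D model amounts to keeping only the points visible from one viewpoint; a minimal z-buffer sketch (the paper uses PCL; the pinhole camera at the origin, resolution, and focal length here are assumptions):

```python
import numpy as np

def single_view_25d(points, width=256, height=256, f=300.0):
    """Keep only the points of an (N, 3) cloud visible from the origin
    along +z, mimicking a single-view 2.5D scan of a full 3D model."""
    pts = points[points[:, 2] > 0]                    # in front of the camera
    u = np.round(f * pts[:, 0] / pts[:, 2] + width / 2).astype(int)
    v = np.round(f * pts[:, 1] / pts[:, 2] + height / 2).astype(int)
    ok = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    pts, u, v = pts[ok], u[ok], v[ok]
    zbuf = np.full((height, width), np.inf)
    np.minimum.at(zbuf, (v, u), pts[:, 2])            # nearest depth per pixel
    visible = pts[:, 2] <= zbuf[v, u] + 1e-6
    return pts[visible]
```

Repeating this for several viewpoints around the object yields the multi-view 2.5D data the recognition network consumes.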


2021 ◽  
Vol 13 (12) ◽  
pp. 2332
Author(s):  
Daniel Lamas ◽  
Mario Soilán ◽  
Javier Grandío ◽  
Belén Riveiro

The growing development of data digitalisation methods has increased their demand and applications in the transportation infrastructure field. Currently, mobile mapping systems (MMSs) are one of the most popular technologies for the acquisition of infrastructure data, with three-dimensional (3D) point clouds as their main product. In this work, a heuristic-based workflow for semantic segmentation of complex railway environments is presented, in which their most relevant elements are classified, namely rails, masts, wiring, droppers, traffic lights, and signals. This method takes advantage of existing methodologies in the field for point cloud processing and segmentation, taking into account the geometry and spatial context of each classified element in the railway environment. The method is applied to a 90-kilometre-long railway line and validated against a manual reference on random sections of the case study data. The results are presented and discussed at the object level, differentiating the type of each element. The F1 scores obtained for each element are above 85%, and above 99% for rails, the most significant element of the infrastructure. These metrics showcase the quality of the algorithm, proving that this method is efficient for the classification of long and variable railway sections and for the assisted labelling of point cloud data for future applications based on training supervised learning models.
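As a toy example of the kind of geometric heuristic such a workflow relies on (not the authors' actual rules; the cell size and rail-head height band are assumptions), rail candidates can be pre-filtered by their height above the local lowest surface:

```python
import numpy as np

def rail_candidates(points, cell=1.0, rail_height=(0.10, 0.25)):
    """Heuristic pre-filter: keep points sitting a rail-head height
    above the lowest surface in their (x, y) cell."""
    ij = np.floor(points[:, :2] / cell).astype(np.int64)
    _, inv = np.unique(ij, axis=0, return_inverse=True)
    lowest = np.full(inv.max() + 1, np.inf)
    np.minimum.at(lowest, inv, points[:, 2])
    rel = points[:, 2] - lowest[inv]          # height above local ground
    return (rel > rail_height[0]) & (rel < rail_height[1])
```

A full workflow would then exploit spatial context, e.g., checking that candidates form long, parallel linear clusters, before assigning the rail label.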


2019 ◽  
Vol 13 (4) ◽  
pp. 464-474
Author(s):  
Shinichi Sumiyoshi ◽  
Yuichi Yoshida

While several methods have been proposed for detecting three-dimensional (3D) objects in semi-real time by sparsely acquiring features from 3D point clouds, the detection of strongly occluded objects still poses difficulties. Herein, we propose a method of detecting strongly occluded objects by setting up virtual auxiliary point clouds in the vicinity of the target object. By generating auxiliary point clouds only in the occluded space estimated from a detected object at the front of the sensor-observed region, i.e., the occluder, the processing efficiency and accuracy are improved. Experiments are performed with various strongly occluded scenes based on real environmental data, and the results confirm that the proposed method is capable of achieving a mean processing time of 0.5 s for detecting strongly occluded objects.
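The occluded-space idea can be sketched by spawning virtual points along the sensor rays just behind each occluder point (the sensor is assumed at the origin, and the step size and count are assumptions; the paper's occlusion-frustum estimation is more selective):

```python
import numpy as np

def auxiliary_points(occluder_pts, depth_step=0.05, n_steps=10):
    """Sample virtual points along sensor rays behind an (N, 3) occluder
    cloud, filling the occluded space where a hidden object may sit."""
    dirs = occluder_pts / np.linalg.norm(occluder_pts, axis=1, keepdims=True)
    steps = depth_step * np.arange(1, n_steps + 1)
    # Each occluder point spawns n_steps points further along its ray.
    aux = occluder_pts[:, None, :] + steps[None, :, None] * dirs[:, None, :]
    return aux.reshape(-1, 3)
```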


Author(s):  
M. Zaboli ◽  
H. Rastiveis ◽  
A. Shams ◽  
B. Hosseiny ◽  
W. A. Sarasua

Abstract. Automated analysis of three-dimensional (3D) point clouds has become a boon in Photogrammetry, Remote Sensing, Computer Vision, and Robotics. The aim of this paper is to compare classification algorithms tested on an urban-area point cloud acquired by a Mobile Terrestrial Laser Scanning (MTLS) system. The algorithms were tested based on local geometrical and radiometric descriptors. In this study, local descriptors such as linearity, planarity, intensity, etc., are first extracted for each point by observing its neighboring points. These features are then fed into a classification algorithm to automatically label each point. Here, five powerful classification algorithms, including k-Nearest Neighbors (k-NN), Gaussian Naive Bayes (GNB), Support Vector Machine (SVM), Multilayer Perceptron (MLP) Neural Network, and Random Forest (RF), are tested. Eight semantic classes are considered for each method under equal conditions. The best overall accuracy of 90% was achieved with the RF algorithm. The results proved the reliability of the applied descriptors and the RF classifier for MTLS point cloud classification.
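A minimal sketch of the descriptor-plus-classifier pattern, using the standard eigenvalue features (linearity, planarity, sphericity) from the k-NN covariance and scikit-learn's Random Forest; the neighborhood size and the commented usage variables are assumptions:

```python
import numpy as np
from scipy.spatial import cKDTree
from sklearn.ensemble import RandomForestClassifier

def eigen_descriptors(points, k=20):
    """Per-point linearity, planarity, sphericity from the eigenvalues
    of the k-nearest-neighbour covariance matrix."""
    _, idx = cKDTree(points).query(points, k=k)
    feats = np.empty((len(points), 3))
    for i, nbrs in enumerate(idx):
        lam = np.linalg.eigvalsh(np.cov(points[nbrs].T))  # ascending
        l1, l2, l3 = lam[2], lam[1], lam[0]               # l1 >= l2 >= l3
        feats[i] = [(l1 - l2) / (l1 + 1e-12),             # linearity
                    (l2 - l3) / (l1 + 1e-12),             # planarity
                    l3 / (l1 + 1e-12)]                    # sphericity
    return feats

# Hypothetical usage with labelled arrays train_pts / train_lbl / test_pts:
# clf = RandomForestClassifier(n_estimators=100)
# clf.fit(eigen_descriptors(train_pts), train_lbl)
# labels = clf.predict(eigen_descriptors(test_pts))
```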


Sensors ◽  
2018 ◽  
Vol 18 (10) ◽  
pp. 3214 ◽  
Author(s):  
Zhipeng Dong ◽  
Yi Gao ◽  
Jinfeng Zhang ◽  
Yunhui Yan ◽  
Xin Wang ◽  
...  

Extracting horizontal planes in heavily cluttered three-dimensional (3D) scenes is an essential procedure for many robotic applications. Aiming at the limitations of general plane segmentation methods on this subject, we present HoPE, a Horizontal Plane Extractor that is able to extract multiple horizontal planes in cluttered scenes from both organized and unorganized 3D point clouds. It first transforms the source point cloud to the reference coordinate frame using the sensor orientation, acquired either by pre-calibration or from an inertial measurement unit, thereby leveraging the inner structure of the transformed point cloud to ease the subsequent processes, which use two concise thresholds to produce the results. A revised region growing algorithm named Z clustering and a principal component analysis (PCA)-based approach are presented for point clustering and refinement, respectively. Furthermore, we provide a nearest neighbor plane matching (NNPM) strategy to preserve the identities of extracted planes across successive sequences. Qualitative and quantitative evaluations on both real and synthetic scenes demonstrate that our approach outperforms several state-of-the-art methods under challenging circumstances in terms of robustness to clutter, accuracy, and efficiency. We release our algorithm as a publicly available, off-the-shelf toolbox.
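The intuition behind Z clustering can be approximated by histogramming heights after gravity alignment, since horizontal planes pile up at a common z; a minimal sketch (the bin size and minimum support are assumptions, and the paper's region growing, adjacent-bin merging, and PCA refinement are omitted):

```python
import numpy as np

def z_clusters(points, z_res=0.01, min_pts=500):
    """After rotating the cloud so gravity points along -z, horizontal
    planes appear as peaks of the z histogram; returns one boolean point
    mask per sufficiently populated bin."""
    z = points[:, 2]
    bins = np.floor((z - z.min()) / z_res).astype(int)
    counts = np.bincount(bins)
    peaks = np.flatnonzero(counts >= min_pts)
    return [bins == b for b in peaks]
```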

