ECPC-ICP: A 6D Vehicle Pose Estimation Method by Fusing the Roadside Lidar Point Cloud and Road Feature

Sensors ◽  
2021 ◽  
Vol 21 (10) ◽  
pp. 3489
Author(s):  
Bo Gu ◽  
Jianxun Liu ◽  
Huiyuan Xiong ◽  
Tongtong Li ◽  
Yuelong Pan

In the vehicle pose estimation task based on roadside Lidar in cooperative perception, the measurement distance, angle, and laser resolution directly affect the quality of the target point cloud. For incomplete and sparse point clouds, current methods are either less accurate in the correspondences solved by local descriptors or not robust enough due to the reduction of effective boundary points. In response to these weaknesses, this paper proposed a registration algorithm, Environment Constraint Principal Component-Iterative Closest Point (ECPC-ICP), which integrates road information constraints. The road normal feature was extracted, and the principal component of the vehicle point cloud matrix under the road normal constraint was calculated as the initial pose result. Then, an accurate 6D pose was obtained through point-to-point ICP registration. According to the measurement characteristics of roadside Lidars, this paper defined a point cloud sparseness description. The existing algorithms were tested on point cloud data with different sparseness. The simulated experimental results showed that the positioning MAE of ECPC-ICP was about 0.5% of the vehicle scale, the orientation MAE was about 0.26°, and the average registration success rate was 95.5%, demonstrating an improvement in accuracy and robustness over current methods. In the real test environment, the positioning MAE was about 2.6% of the vehicle scale, and the average time cost was 53.19 ms, proving the accuracy and effectiveness of ECPC-ICP in practical applications.
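The two-stage idea described above, a PCA initial pose under a road-normal constraint followed by point-to-point ICP refinement, can be sketched in numpy. This is a minimal illustration, not the authors' implementation: the function names, the brute-force nearest-neighbour search, and the SVD-based (Kabsch) update are our choices.

```python
import numpy as np

def pca_initial_yaw(points, road_normal):
    # Initial pose step: project the vehicle cloud onto the road plane and
    # take the first principal component of the projected points as the
    # heading (initial yaw) estimate.
    n = road_normal / np.linalg.norm(road_normal)
    centered = points - points.mean(axis=0)
    flat = centered - np.outer(centered @ n, n)   # drop the component along n
    _, _, vt = np.linalg.svd(flat, full_matrices=False)
    return np.arctan2(vt[0, 1], vt[0, 0])         # dominant in-plane direction

def icp_point_to_point(src, dst, iters=20):
    # Refinement step: classic point-to-point ICP with brute-force nearest
    # neighbours and an SVD (Kabsch) update. O(n^2) per iteration; a KD-tree
    # is the usual choice at scale.
    R_acc, t_acc = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(iters):
        idx = np.argmin(np.linalg.norm(cur[:, None] - dst[None], axis=-1), axis=1)
        matched = dst[idx]
        mu_s, mu_d = cur.mean(axis=0), matched.mean(axis=0)
        U, _, Vt = np.linalg.svd((cur - mu_s).T @ (matched - mu_d))
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                  # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_d - R @ mu_s
        cur = cur @ R.T + t
        R_acc, t_acc = R @ R_acc, R @ t_acc + t
    return R_acc, t_acc
```

With a good constrained-PCA initialization, the ICP stage only has to correct a small residual transform, which is what makes the combination robust on sparse, incomplete clouds.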

2021 ◽  
Author(s):  
Lun H. Mark

This thesis investigates how the geometry of complex objects relates to LIDAR scanning with Iterative Closest Point (ICP) pose estimation, and provides statistical means to assess the pose accuracy. LIDAR scanners have become essential parts of space vision systems for autonomous docking and rendezvous. Principal Component Analysis based geometric constraint indices have been found to be strongly related to the pose error norm and to the error of each individual degree of freedom. This leads to the development of several strategies for identifying the best view of an object and the optimal combination of localized scanned areas of the object's surface to achieve accurate pose estimation. Also investigated is the possible relation between ICP pose estimation accuracy and the distribution or allocation of the point cloud. The simulation results were validated using point clouds generated by scanning models of Quicksat and a cuboctahedron using Neptec's TriDAR scanner.
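A minimal example of a PCA-based constraint index in the spirit described above (illustrative only; the thesis defines its own indices): the eigenvalue spread of the scanned points' covariance indicates how well the local geometry constrains the pose.

```python
import numpy as np

def geometric_constraint_index(points):
    # Ratio of smallest to largest eigenvalue of the point covariance:
    # near zero for a flat patch, whose pose is weakly constrained along
    # its normal, and larger for a well-spread 3D surface.
    eig = np.sort(np.linalg.eigvalsh(np.cov(points.T)))
    return eig[0] / eig[-1]
```

A low index flags views (e.g. a single flat face) where ICP is expected to have a large error along the unconstrained directions.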




2019 ◽  
Vol 9 (16) ◽  
pp. 3273 ◽  
Author(s):  
Wen-Chung Chang ◽  
Van-Toan Pham

This paper develops a registration architecture for estimating the relative pose, including the rotation and the translation, of an object with respect to a model in 3-D space, based on 3-D point clouds captured by a 3-D camera. In particular, this paper addresses the time-consuming problem of 3-D point cloud registration, which is essential for closed-loop industrial automated assembly systems that demand accurate pose estimation in fixed time. First, two different descriptors are developed to extract coarse and detailed features of these point cloud data sets for the purpose of creating training data sets according to diversified orientations. Second, in order to guarantee fast pose estimation in fixed time, a novel registration architecture employing two consecutive convolutional neural network (CNN) models is proposed. After training, the proposed CNN architecture can estimate the rotation between the model point cloud and a data point cloud, followed by the translation estimation based on computing average values. By covering a smaller range of orientation uncertainty than the full range covered by the first CNN model, the second CNN model can precisely estimate the orientation of the 3-D point cloud. Finally, the performance of the proposed algorithm has been validated by experiments in comparison with baseline methods. Based on these results, the proposed algorithm significantly reduces the estimation time while maintaining high precision.
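The translation step "based on computing average values" amounts to a centroid difference once the rotation is known. A small sketch (the function name is ours):

```python
import numpy as np

def translation_from_centroids(model_pts, data_pts, R):
    # With the rotation R already estimated (by the two CNNs in the paper),
    # the translation follows from the averages: t = mean(data) - R @ mean(model).
    return data_pts.mean(axis=0) - model_pts.mean(axis=0) @ R.T
```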


Sensors ◽  
2019 ◽  
Vol 19 (1) ◽  
pp. 172 ◽  
Author(s):  
Chunxiao Wang ◽  
Min Ji ◽  
Jian Wang ◽  
Wei Wen ◽  
Ting Li ◽  
...  

Point cloud data segmentation, filtering, classification, and feature extraction are the main focus of point cloud data processing. DBSCAN (density-based spatial clustering of applications with noise) is capable of detecting clusters of arbitrary shape in spaces of any dimension, which makes it well suited to LiDAR (Light Detection and Ranging) data segmentation. The DBSCAN method needs at least two parameters: the minimum number of points, minPts, and the search radius, ε. However, the parameter ε is often hard to determine, which hinders the application of DBSCAN to point cloud segmentation. Therefore, a segmentation algorithm based on DBSCAN is proposed with a novel automatic ε estimation method, based on the average of the k nearest neighbors' maximum distance, with which ε can be calculated from the intrinsic properties of the point cloud data. The method is based on the fitting curve of k and the mean maximum distance. The method was evaluated on different types of point cloud data: airborne and mobile point clouds, with and without color information. The results show that the accuracy values using the ε estimated by the proposed method are 75%, 74%, and 71%, higher than those obtained with parameters smaller or greater than the estimated one. The results demonstrate that the proposed algorithm can segment different types of LiDAR point clouds with higher accuracy in a robust manner. The algorithm can be applied to airborne and mobile LiDAR point cloud data processing systems, reducing manual work and improving the automation of data processing.
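The core of the proposed ε estimation, the average over all points of the maximum distance among each point's k nearest neighbours, is easy to sketch in numpy (a brute-force O(n²) illustration; the fitted k-distance curve from the paper is omitted):

```python
import numpy as np

def estimate_eps(points, k=4):
    # eps = mean over all points of the distance to the k-th nearest
    # neighbour, i.e. the average "maximum distance" among each point's
    # k nearest neighbours. Brute force here; use a KD-tree at scale.
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d.sort(axis=1)                 # column 0 is the point itself (distance 0)
    return d[:, k].mean()
```

The estimate can then be passed as the eps parameter of any DBSCAN implementation, with minPts as the matching k.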


Author(s):  
F. Politz ◽  
M. Sester

Abstract. National mapping agencies (NMAs) have to acquire nation-wide Digital Terrain Models on a regular basis as part of their obligations to provide up-to-date data. Point clouds from Airborne Laser Scanning (ALS) are an important data source for this task; recently, NMAs also started deriving Dense Image Matching (DIM) point clouds from aerial images. As a result, NMAs have both point cloud data sources available, which they can exploit for their purposes. In this study, we investigate the potential of transfer learning from ALS to DIM data, so the time consuming step of data labelling can be reduced. Due to their specific individual measurement techniques, both point clouds have various distinct properties such as RGB or intensity values, which are often exploited for classification of either ALS or DIM point clouds. However, those features also hinder transfer learning between these two point cloud types, since they do not exist in the other point cloud type. As the mere 3D point is available in both point cloud types, we focus on transfer learning from an ALS to a DIM point cloud using exclusively the point coordinates. We are tackling the issue of different point densities by rasterizing the point cloud into a 2D grid and take important height features as input for classification. We train an encoder-decoder convolutional neural network with labelled ALS data as a baseline and then fine-tune this baseline with an increasing amount of labelled DIM data. We also train the same network exclusively on all available DIM data as reference to compare our results. We show that only 10% of labelled DIM data increase the classification results notably, which is especially relevant for practical applications.
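Rasterising the cloud into a 2D grid of height features, as described above, can be sketched as follows. This is a minimal version under our own assumptions: the cell size, the choice of maximum z as the per-cell feature, and the function name are all illustrative.

```python
import numpy as np

def rasterize_height(points, cell=1.0):
    # Bin 3D points into a 2D grid keyed only on x, y coordinates, keeping
    # a per-cell height feature (here the maximum z). Such height images
    # serve as coordinate-only network input for both ALS and DIM clouds.
    ij = np.floor(points[:, :2] / cell).astype(int)
    ij -= ij.min(axis=0)                  # shift indices so they start at 0
    h, w = ij.max(axis=0) + 1
    grid = np.full((h, w), np.nan)        # NaN marks empty cells
    for (i, j), z in zip(ij, points[:, 2]):
        if np.isnan(grid[i, j]) or z > grid[i, j]:
            grid[i, j] = z
    return grid
```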


Author(s):  
Zihao Zhang ◽  
Lei Hu ◽  
Xiaoming Deng ◽  
Shihong Xia

3D human pose estimation is a fundamental problem in artificial intelligence, with wide applications in AR/VR, HCI and robotics. However, human pose estimation from point clouds still suffers from noisy points and jittery estimates because of handcrafted point cloud sampling and single-frame estimation strategies. In this paper, we present a new perspective on 3D human pose estimation from point cloud sequences. To sample effective point clouds from the input, we design a differentiable point cloud sampling method built on a density-guided attention mechanism. To avoid the jitter of single-frame estimation, we adopt temporal information to obtain more stable results. Experiments on the ITOP dataset and the NTU-RGBD dataset demonstrate that all of our contributed components are effective, and our method can achieve state-of-the-art performance.


2021 ◽  
Vol 13 (13) ◽  
pp. 2612
Author(s):  
Lianbi Yao ◽  
Changcai Qin ◽  
Qichao Chen ◽  
Hangbin Wu

Automatic driving technology is becoming one of the main areas of development for future intelligent transportation systems. The high-precision map, an important supplement to the on-board sensors during shielding or limited observation distance, provides a priori information for high-precision positioning and path planning in automatic driving. The position and semantic information of road markings, such as absolute coordinates of the solid lines and dashed lines, are basic components of the high-precision map. In this paper, we study the automatic extraction and vectorization of road markings. Firstly, scan lines are extracted from the vehicle-borne laser point cloud data, and the pavement is extracted from the scan lines according to the geometric mutation at the road boundary. On this basis, the pavement point clouds are transformed into raster images with a certain resolution using inverse distance weighted interpolation. An adaptive threshold segmentation algorithm is used to convert the raster images into binary images. Adaptive threshold segmentation is followed by Euclidean clustering, which extracts road-marking point clouds from the binary image. Solid lines are detected by feature attribute filtering; all of the solid lines and guidelines in the sample data are correctly identified. The deep learning network framework PointNet++ is used for semantic recognition of the remaining road markings, including dashed lines, guidelines and arrows. Finally, the vectorization of the identified solid lines and dashed lines is carried out based on a line segmentation self-growth algorithm, and the vectorization of the identified guidelines is carried out according to an alpha shape algorithm. Point cloud data from four experimental areas are used for road marking extraction and identification. The F-scores of the identification of dashed lines, guidelines, straight arrows and right turn arrows are 0.97, 0.66, 0.84 and 1, respectively.
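The abstract does not specify which adaptive threshold is used to binarize the raster; Otsu's method is one standard choice, sketched here in numpy as an illustration of the binarization step.

```python
import numpy as np

def otsu_threshold(img, bins=256):
    # Otsu's method: choose the histogram cut that maximises the
    # between-class variance of the two resulting intensity classes.
    vals = img[np.isfinite(img)].ravel()
    hist, edges = np.histogram(vals, bins=bins)
    p = hist / hist.sum()
    w = np.cumsum(p)                       # weight of the low class
    mu = np.cumsum(p * np.arange(bins))    # its cumulative (binned) mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w - mu) ** 2 / (w * (1.0 - w))
    return edges[np.nanargmax(sigma_b) + 1]
```

Cells brighter than the returned threshold (high-reflectance paint) go to the foreground of the binary image; the rest of the pavement goes to the background.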


2021 ◽  
Vol 13 (21) ◽  
pp. 4382
Author(s):  
Ziyang Wang ◽  
Lin Yang ◽  
Yehua Sheng ◽  
Mi Shen

Real-time acquisition and intelligent classification of pole-like street-object point clouds are of great significance in the construction of smart cities. Efficient point cloud processing technology for road scenes can accelerate the development of intelligent transportation and promote the development of high-precision maps. However, available algorithms suffer from incomplete extraction and low recognition accuracy of pole-like objects. In this paper, we propose a segmentation method for pole-like objects under geometric structural constraints; for classification, we fuse the results obtained at different scales. First, the point cloud data, excluding ground points, were divided into voxels, and the rod-shaped parts of the pole-like objects were extracted according to vertical continuity. Second, voxel-based region growing, seeded from the rod part, was carried out to retain the non-rod parts of the pole-like objects, with a one-way double-coding strategy adopted to preserve details. Spatially overlapping entities were divided using multi-rule supervoxels. Finally, a random forest model was used to classify the pole-like objects based on local- and global-scale features and to fuse the two classification results from the different scales into the final result. Experiments showed that the proposed method can effectively extract pole-like objects from point clouds of road scenes and achieve high-precision classification and identification on lightweight data. Our method can also inform the processing of large-scale data.
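The vertical-continuity test used to find the rod parts can be sketched as follows. This is a minimal stand-in under our own assumptions (voxel sizes, the minimum run of occupied levels, and the function name are illustrative):

```python
import numpy as np

def vertical_continuity(points, xy_cell=0.5, z_cell=0.5, min_levels=4):
    # Voxelise the cloud; an (x, y) column whose occupied z-levels form a
    # long contiguous run is kept as a candidate pole-like rod.
    vox = np.floor(points / np.array([xy_cell, xy_cell, z_cell])).astype(int)
    columns = {}
    for x, y, z in vox:
        columns.setdefault((x, y), set()).add(z)
    keep = []
    for col, zs in columns.items():
        zs = sorted(zs)
        run = best = 1
        for a, b in zip(zs, zs[1:]):
            run = run + 1 if b == a + 1 else 1   # extend or reset the run
            best = max(best, run)
        if best >= min_levels:
            keep.append(col)
    return keep
```

Region growing would then expand from these rod columns to recover the non-rod parts (lamp heads, signs) that the continuity test alone misses.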


Author(s):  
E. Maset ◽  
L. Magri ◽  
A. Fusiello

Abstract. In this paper we deal with the automatic reconstruction of interior walls from point clouds, an active research topic with several practical applications. We propose an improved version of the method presented in (Magri and Fusiello, 2018), where the overall structure of the environment is extracted by fitting lines to the main building features, using Min-hashed J-Linkage as a multi-model fitting technique. Our variation has the merit of producing more accurate results, both in terms of wall reconstruction and room segmentation, and greatly reducing the need for user-defined thresholds.


2020 ◽  
Vol 12 (22) ◽  
pp. 3824
Author(s):  
Mingyao Ai ◽  
Yuan Yao ◽  
Qingwu Hu ◽  
Yue Wang ◽  
Wei Wang

Effective 3D tree reconstruction based on point clouds from terrestrial Light Detection and Ranging (LiDAR) scans (TLS) has been widely recognized as a critical technology in forestry and ecology modeling. The major advantages of using TLS lie in its rapid, automatic capture of tree information at the millimeter level, providing massive high-density data. In addition, TLS 3D tree reconstruction allows occluded and complex structures to be recovered from the derived point cloud of trees. In this paper, an automatic tree skeleton extraction approach based on multi-view slicing is proposed to improve TLS 3D tree reconstruction, borrowing the idea from the medical imaging technique of X-ray computed tomography. Firstly, we extracted the precise trunk center and then cut the point cloud of the tree into slices. Next, the skeleton of each slice was generated using the kernel mean shift and principal component analysis algorithms. These isolated skeletons were then smoothed and morphologically synthesized. Finally, validation on point clouds of two trees acquired from multi-view TLS further demonstrated the potential of the proposed framework to deal efficiently with TLS point cloud data.
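The slicing step above can be sketched in numpy. As a simplification, each slice is summarised by its plain centroid rather than the kernel mean shift and PCA refinement the paper describes; slice height and the function name are our assumptions.

```python
import numpy as np

def skeleton_by_slices(points, slice_h=0.2):
    # Cut the tree cloud into horizontal slices and summarise each occupied
    # slice by its centroid; stacking the centroids gives a coarse trunk
    # skeleton (the paper refines each slice with mean shift and PCA).
    idx = np.floor((points[:, 2] - points[:, 2].min()) / slice_h).astype(int)
    return np.array([points[idx == i].mean(axis=0)
                     for i in range(idx.max() + 1) if np.any(idx == i)])
```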

