Cost-Effective Mobile Mapping System for Color Point Cloud Reconstruction

Sensors, 2020, Vol 20 (22), pp. 6536
Author(s): Cheng-Wei Peng, Chen-Chien Hsu, Wei-Yen Wang

Survey-grade Lidar brands have commercialized Lidar-based mobile mapping systems (MMSs) for several years now. This high-end equipment ensures highly accurate point clouds, but its high cost has kept it out of reach for practical autonomous driving deployments. To address this problem, we present a cost-effective MMS that generates an accurate 3D color point cloud for autonomous vehicles. Among the major processes for color point cloud reconstruction, we first synchronize the timestamps of all sensors. A calibration process between the camera and Lidar is developed to obtain the translation and rotation matrices, based on which color attributes can be assigned to the corresponding Lidar points. We also employ control points to adjust the point cloud for fine-tuning the absolute position. To overcome the limitations of the Global Navigation Satellite System/Inertial Measurement Unit (GNSS/IMU) positioning system, we use Normal Distribution Transform (NDT) localization to refine the trajectory and solve the multi-scan dispersion issue. Experimental results show that the color point cloud reconstructed by the proposed MMS has centimeter-level position error, meeting the accuracy requirement of high-definition (HD) maps for autonomous driving.
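The color-composition step amounts to projecting each Lidar point into the synchronized camera image through the calibrated extrinsics and sampling the pixel there. A minimal numpy sketch of this projection, assuming a pinhole intrinsic matrix K and the estimated Lidar-to-camera rotation R and translation t (variable names are illustrative, not from the paper):

```python
import numpy as np

def colorize_points(points, image, K, R, t):
    """Assign RGB values from a camera image to Lidar points.

    points: (N, 3) Lidar points in the Lidar frame.
    image:  (H, W, 3) RGB image captured at the synchronized timestamp.
    K:      (3, 3) camera intrinsic matrix (assumed known).
    R, t:   (3, 3) rotation and (3,) translation from the camera-Lidar
            calibration, mapping Lidar coordinates into the camera frame.
    """
    cam = points @ R.T + t                      # Lidar frame -> camera frame
    front = cam[:, 2] > 0                       # keep points in front of camera
    uvw = cam[front] @ K.T
    uv = uvw[:, :2] / uvw[:, 2:3]               # perspective division
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    h, w = image.shape[:2]
    vis = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    return points[front][vis], image[v[vis], u[vis]]
```

Points that fall outside the image keep no color attribute and can be colored from other synchronized frames along the trajectory.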

Sensors, 2020, Vol 20 (3), pp. 899
Author(s): Veli Ilci, Charles Toth

Recent developments in sensor technologies such as Global Navigation Satellite Systems (GNSS), Inertial Measurement Units (IMU), Light Detection and Ranging (LiDAR), radar, and cameras have led to emerging state-of-the-art autonomous systems, such as driverless vehicles or UAS (Unmanned Aerial Systems) swarms. These technologies necessitate accurate object space information about the physical environment around the platform. This information is generally provided by a suitable selection of sensors, including sensor types and capabilities, the number of sensors, and their spatial arrangement. Since all these sensor technologies have different error sources and characteristics, rigorous sensor modeling is needed to eliminate or mitigate errors and obtain an accurate, reliable, and robust integrated solution. Mobile mapping systems are very similar to autonomous vehicles in that both reconstruct the environment around the platform. However, they differ considerably in operation and objectives. Mobile mapping vehicles use professional-grade sensors, such as geodetic-grade GNSS, tactical-grade IMUs, mobile LiDAR, and metric cameras, and the solution is created in post-processing. In contrast, autonomous vehicles use simple, inexpensive sensors, require real-time operation, and are primarily interested in identifying and tracking moving objects. In this study, the main objective was to assess the performance potential of autonomous vehicle sensor systems for obtaining high-definition maps, using only Velodyne sensor data to create accurate point clouds; no other sensor data were considered in this investigation. The results confirm that cm-level accuracy can be achieved.
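The abstract does not detail the point cloud generation pipeline, but building a consistent cloud from a lone spinning LiDAR typically involves registering consecutive scans and accumulating them along the estimated trajectory. A hedged sketch of one such building block using Open3D's ICP registration (an assumption for illustration, not the authors' documented method):

```python
import numpy as np
import open3d as o3d

def register_scan_pair(source_xyz, target_xyz, voxel=0.2, max_dist=0.5):
    """Estimate the 4x4 pose aligning one Velodyne scan to the previous one."""
    src = o3d.geometry.PointCloud()
    src.points = o3d.utility.Vector3dVector(source_xyz)
    tgt = o3d.geometry.PointCloud()
    tgt.points = o3d.utility.Vector3dVector(target_xyz)
    src = src.voxel_down_sample(voxel)              # thin both clouds for speed
    tgt = tgt.voxel_down_sample(voxel)
    result = o3d.pipelines.registration.registration_icp(
        src, tgt, max_dist, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation                    # source pose in target frame
```

Chaining these pairwise transforms yields a trajectory along which the scans can be accumulated into a single map cloud.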


Author(s): Y. Dehbi, L. Lucks, J. Behmann, L. Klingbeil, L. Plümer

Abstract. Accurate and robust positioning of vehicles in urban environments is of high importance for many applications (e.g., autonomous driving or mobile mapping). Mobile mapping systems aim at simultaneously mapping the environment using laser scanning and positioning accurately using GNSS. This requirement is often not met in dense cities, where GNSS signals are usually shadowed, disturbed, weak, or even unavailable. Both the generated point clouds and the derived trajectory are consequently imprecise. We propose a novel approach which incorporates prior knowledge, i.e., a 3D building model of the environment, to improve the point cloud and the trajectory. The key idea is to exploit the complementarity of GNSS and 3D building models. The point cloud is matched to the city model using a point-to-plane ICP. An informed sampling of appropriate matching points is enabled by a pre-classification step: support vector machines (SVMs) discriminate between facade points and the remaining points. Local inconsistencies are tackled by a segment-wise partitioning of the point cloud, where an interpolation guarantees a seamless transition between the segments. The full processing chain is implemented, from the detection of facades in the point clouds, through the matching between them and the building models, to the update of the trajectory estimate. The general applicability of the implemented method is demonstrated on an inner-city data set recorded with a mobile mapping system.
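Point-to-plane ICP linearizes the pose around the current estimate, which gives a small least-squares problem per iteration. A minimal numpy sketch of one such step, assuming the classified facade points p have already been paired with closest points q and unit normals n on the building-model planes (all names illustrative):

```python
import numpy as np

def point_to_plane_step(p, q, n):
    """One linearized point-to-plane ICP step.

    p: (N, 3) facade points, q: (N, 3) paired points on the model planes,
    n: (N, 3) unit plane normals. Solves for a small rotation r and a
    translation t minimizing sum(((p_i + r x p_i + t - q_i) . n_i)^2).
    """
    A = np.hstack([np.cross(p, n), n])          # (N, 6) Jacobian [p x n | n]
    b = np.einsum('ij,ij->i', q - p, n)         # signed distances along normals
    x, *_ = np.linalg.lstsq(A, b, rcond=None)   # x = [rx, ry, rz, tx, ty, tz]
    rx, ry, rz = x[:3]
    R = np.array([[1.0, -rz,  ry],
                  [ rz, 1.0, -rx],
                  [-ry,  rx, 1.0]])             # small-angle rotation
    return R, x[3:]
```

Iterating this step while re-searching correspondences against the model planes realizes the point-to-plane matching named in the abstract.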


Sensors, 2019, Vol 19 (20), pp. 4423
Author(s): Hu, Yang, Li

Environment perception is critical for feasible path planning and safe driving of autonomous vehicles. Perception devices, such as cameras, LiDAR (Light Detection and Ranging), and IMUs (Inertial Measurement Units), only provide raw sensing data with no identification of vital objects, which is insufficient for autonomous vehicles to perform safe and efficient self-driving operations. This study proposes an improved edge-oriented segmentation-based method to detect objects in the sensed three-dimensional (3D) point cloud. The method consists of three main steps: First, the bounding areas of objects are identified by edge detection and stixel estimation in the corresponding two-dimensional (2D) images taken by a stereo camera. Second, sparse 3D point clouds of the objects are reconstructed within the bounding areas. Finally, the dense point clouds of the objects are segmented by matching the sparse object point clouds against the whole scene point cloud. In comparison with existing segmentation methods, the experimental results demonstrate that the proposed edge-oriented method improves the precision of 3D point cloud segmentation and segments objects accurately. Meanwhile, visualization of the output data in advanced driving assistance systems (ADAS) is greatly facilitated by the reduced computational time and the smaller number of points in each object's point cloud.
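As a rough illustration of the first two steps, edges in the camera image can bound candidate object regions, inside which stereo disparity is back-projected to sparse 3D points. A sketch with OpenCV, assuming a precomputed disparity map and a reprojection matrix Q from stereo calibration (thresholds and names are illustrative, not the paper's implementation):

```python
import cv2
import numpy as np

def object_bounding_areas(image_bgr, min_area=500):
    """Find candidate object bounding areas from edges in a camera image."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)                    # binary edge map
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]          # (x, y, w, h) boxes

def sparse_points_in_box(disparity, Q, box):
    """Back-project stereo disparity inside one bounding area to sparse 3D."""
    x, y, w, h = box
    pts3d = cv2.reprojectImageTo3D(disparity, Q)        # (H, W, 3) scene points
    roi = pts3d[y:y + h, x:x + w].reshape(-1, 3)
    return roi[np.isfinite(roi).all(axis=1)]            # drop invalid pixels
```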


Author(s): L. Gigli, B. R. Kiran, T. Paul, A. Serna, N. Vemuri, ...

Abstract. Point cloud datasets for perception tasks in the context of autonomous driving often rely on high-resolution 64-layer Light Detection and Ranging (LIDAR) scanners, which are expensive to deploy on real-world autonomous driving sensor architectures; these usually employ 16/32-layer LIDARs. We evaluate the effect of subsampling image-based representations of dense point clouds on the accuracy of the road segmentation task. In our experiments, low-resolution 16/32-layer LIDAR point clouds are simulated by subsampling the original 64-layer data before transformation into feature maps in the Bird's-Eye-View (BEV) and Spherical-View (SV) representations of the point cloud. We introduce the local normal vector, expressed in the LIDAR's spherical coordinates, as an input channel to existing LoDNN architectures. We demonstrate that this local normal feature, in conjunction with classical features, not only improves performance for binary road segmentation on full-resolution point clouds, but also reduces the loss of accuracy when subsampling dense point clouds, compared to using classical features alone. We assess our method with several experiments on two datasets: the KITTI road-segmentation benchmark and the recently released SemanticKITTI dataset.
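Low-resolution scanners can be simulated from 64-layer data by recovering each point's elevation ring and keeping every k-th ring. A minimal numpy sketch under Velodyne HDL-64E-like assumptions (the vertical field-of-view values are illustrative):

```python
import numpy as np

def subsample_layers(points, n_src=64, n_dst=16, fov=(-24.9, 2.0)):
    """Simulate a low-resolution scanner by keeping every k-th elevation ring.

    points: (N, 4) array of x, y, z, reflectance from a 64-layer scan.
    fov:    vertical field of view in degrees (HDL-64E-like assumption).
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    elev = np.degrees(np.arctan2(z, np.hypot(x, y)))   # per-point elevation
    lo, hi = fov
    ring = np.clip(((elev - lo) / (hi - lo) * (n_src - 1)).round(),
                   0, n_src - 1).astype(int)           # approximate ring index
    keep = ring % (n_src // n_dst) == 0                # every 4th of 64 -> 16
    return points[keep]
```

The subsampled cloud is then rasterized into the BEV or SV feature map, where the local normal can be added as an extra channel alongside the classical ones.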


2021, Vol 13 (15), pp. 2868
Author(s): Yonglin Tian, Xiao Wang, Yu Shen, Zhongzheng Guo, Zilei Wang, ...

Three-dimensional information perception from point clouds is of vital importance for improving the ability of machines to understand the world, especially for autonomous driving and unmanned aerial vehicles. Data annotation for point clouds is one of the most challenging and costly tasks. In this paper, we propose a closed-loop, virtual-real interactive point cloud generation and model-upgrading framework called Parallel Point Clouds (PPCs). To the best of our knowledge, this is the first time the training process has been changed from an open-loop to a closed-loop mechanism: feedback from the evaluation results is used to update the training dataset, benefiting from the flexibility of artificial scenes. Within the framework, a point-based LiDAR simulation model is proposed, which greatly simplifies the scanning operation. In addition, a group-based placing method is put forward to integrate hybrid point clouds by locating candidate positions for virtual objects in real scenes. Taking advantage of CAD models and mobile LiDAR devices, two hybrid point cloud datasets, i.e., ShapeKITTI and MobilePointClouds, are built for 3D detection tasks. With almost zero labor cost for data annotation of newly added objects, models (PointPillars) trained with ShapeKITTI and MobilePointClouds achieved 78.6% and 60.0%, respectively, of the average precision of the model trained with real data on 3D detection.
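The abstract does not spell out the point-based LiDAR model; one common realization, sketched here as an assumption, quantizes densely sampled virtual-scene points onto the scanner's angular grid and keeps the nearest return per cell:

```python
import numpy as np

def simulate_lidar(scene_pts, n_beams=64, h_res=0.2, fov=(-24.9, 2.0)):
    """Point-based LiDAR model: keep the closest scene point per angular cell.

    scene_pts: (N, 3) densely sampled points of the virtual scene (e.g. from
    CAD surfaces), expressed in the sensor frame. Parameters mimic a
    64-beam scanner and are illustrative.
    """
    x, y, z = scene_pts.T
    rng = np.linalg.norm(scene_pts, axis=1)
    azim = np.degrees(np.arctan2(y, x))                  # horizontal angle
    elev = np.degrees(np.arcsin(z / rng))                # vertical angle
    col = ((azim + 180.0) / h_res).astype(int)           # horizontal cell index
    row = ((elev - fov[0]) / (fov[1] - fov[0])
           * (n_beams - 1)).round().astype(int)          # beam index
    ok = (row >= 0) & (row < n_beams)
    cell = row[ok] * int(360.0 / h_res) + col[ok]        # flat cell id
    order = np.lexsort((rng[ok], cell))                  # nearest return first
    first = np.ones(order.size, dtype=bool)
    first[1:] = cell[order][1:] != cell[order][:-1]      # one point per cell
    return scene_pts[ok][order][first]
```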


Author(s): C. Wen, S. Lin, C. Wang, J. Li

Point clouds acquired by an RGB-D camera-based indoor mobile mapping system are noisy, unevenly distributed, and incomplete, which makes planar surface segmentation difficult. This paper presents a novel color-enhanced hybrid planar surface segmentation model for RGB-D camera-based indoor mobile mapping point clouds based on region growing; the model specifically addresses planar surface extraction from such noisy and incomplete point clouds. The proposed model combines color moment features with the curvature feature to select better seed points. Additionally, a more robust growing criterion based on the hybrid features is developed to avoid generating excessive over-segmentation debris. A segmentation evaluation process with a small set of labeled segmented data is used to determine the optimal hybrid weight. Several comparative experiments were conducted to evaluate the segmentation model, and the experimental results demonstrate the effectiveness and efficiency of the proposed hybrid segmentation method for indoor mobile mapping three-dimensional (3D) point cloud data.
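A simplified sketch of hybrid region growing, with the geometric and color terms folded into one weighted score (the paper's actual seed selection via color moments and its growing criterion are richer than this illustration):

```python
import numpy as np
from scipy.spatial import cKDTree

def grow_plane(pts, normals, colors, curvature, w=0.5, k=16, thr=0.2):
    """Grow one planar segment with a hybrid geometry/color criterion.

    pts (N, 3), normals (N, 3, unit length), colors (N, 3 in [0, 1]),
    curvature (N,). w is the hybrid weight between the geometric and color
    terms; the paper tunes it with a small labeled evaluation set.
    """
    tree = cKDTree(pts)
    seed = int(np.argmin(curvature))              # flattest point as seed
    region, frontier = {seed}, [seed]
    while frontier:
        i = frontier.pop()
        for j in tree.query(pts[i], k=k)[1]:      # k nearest neighbours
            if j in region:
                continue
            geom = 1.0 - abs(float(normals[i] @ normals[j]))    # normal deviation
            col = float(np.linalg.norm(colors[i] - colors[j]))  # color distance
            if (1 - w) * geom + w * col < thr:    # hybrid growing criterion
                region.add(j)
                frontier.append(j)
    return np.fromiter(region, dtype=int)
```

Repeating the growth from successive seeds among the unassigned points yields the full set of planar segments.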


Sensors, 2020, Vol 20 (17), pp. 4703
Author(s): Yookhyun Yoon, Taeyeon Kim, Ho Lee, Jahnghyon Park

For safe and comfortable driving, long-term trajectory prediction of surrounding vehicles is essential for autonomous vehicles. To handle the uncertain nature of trajectory prediction, deep-learning-based approaches have been proposed previously. An on-road vehicle must obey the road geometry, i.e., it should run within the constraints of the road shape. Herein, we present a novel road-aware trajectory prediction method that leverages high-definition maps within a deep learning network. We developed a data-efficient learning framework that predicts trajectories in the curvilinear coordinate system of the road and assigns lanes to the surrounding vehicles. We then propose a novel output-constrained sequence-to-sequence trajectory prediction network that incorporates the structural constraints of the road. Our method uses these structural constraints as prior knowledge for the prediction network: they are not only used as an input to the trajectory prediction network, but are also included in the constrained loss function of the maneuver recognition network. Accordingly, the proposed method can predict a feasible and realistic driver intention and trajectory. Our method has been evaluated on a real traffic dataset, and the results show that it is data-efficient and predicts reasonable trajectories at merging sections.
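Working in the curvilinear (road-aligned) coordinate system means converting each vehicle position into an arc length s along the lane centerline and a lateral offset d from it. A minimal numpy sketch of that projection, assuming the centerline is given as an (M, 2) polyline (a standard Frenet-style projection, not code from the paper):

```python
import numpy as np

def to_curvilinear(point, centerline):
    """Project an (x, y) position onto a polyline road centerline.

    Returns (s, d): arc length along the centerline and signed lateral
    offset, i.e. the road-aligned coordinates used for prediction.
    """
    p = np.asarray(point, dtype=float)
    a, b = centerline[:-1], centerline[1:]          # segment endpoints
    v = b - a
    t = np.einsum('ij,ij->i', p - a, v) / np.einsum('ij,ij->i', v, v)
    t = np.clip(t, 0.0, 1.0)
    foot = a + t[:, None] * v                       # closest point per segment
    i = int(np.argmin(np.linalg.norm(p - foot, axis=1)))
    seg_len = np.linalg.norm(v, axis=1)
    s = seg_len[:i].sum() + t[i] * seg_len[i]       # arc length along the lane
    u = v[i] / seg_len[i]                           # unit tangent at the foot
    r = p - foot[i]
    d = float(u[0] * r[1] - u[1] * r[0])            # signed offset, left positive
    return s, d
```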


Electronics, 2020, Vol 9 (12), pp. 2084
Author(s): Junwon Lee, Kieun Lee, Aelee Yoo, Changjoo Moon

Self-driving cars, autonomous vehicles (AVs), and connected cars combine Internet of Things (IoT) and automobile technologies, thus contributing to the development of society. However, processing the big data generated by AVs is a challenge due to overloading issues. Additionally, near-real-time and real-time IoT services play a significant role in vehicle safety. Therefore, the architecture of an IoT system that collects and processes data and provides services for vehicle driving is an important consideration. In this study, we propose a fog computing server model that generates a high-definition (HD) map using light detection and ranging (LiDAR) data from an AV. The driving vehicle's edge node transmits the LiDAR point cloud information to the fog server through a wireless network. The fog server generates an HD map by applying the Normal Distribution Transform-Simultaneous Localization and Mapping (NDT-SLAM) algorithm to the point clouds transmitted from the multiple edge nodes. Subsequently, the coordinate information of the HD map generated in the sensor frame is converted to coordinate information in the global frame and transmitted to the cloud server. The cloud server then creates an HD map by integrating the collected point clouds using this coordinate information.
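The sensor-frame to global-frame conversion is a single rigid transform per submap. A minimal sketch, assuming the fog server knows the sensor's global pose as a 4x4 homogeneous matrix (e.g., derived from the vehicle's positioning solution; names illustrative):

```python
import numpy as np

def sensor_to_global(points, pose):
    """Convert map points from the sensor frame to the global frame.

    points: (N, 3) HD-map points produced by NDT-SLAM in the sensor frame.
    pose:   (4, 4) homogeneous transform of the sensor in the global frame.
    """
    homog = np.hstack([points, np.ones((len(points), 1))])  # (N, 4)
    return (homog @ pose.T)[:, :3]
```

The cloud server can then integrate submaps by concatenating the transformed points and downsampling the merged cloud.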


Sensors, 2020, Vol 20 (6), pp. 1573
Author(s): Haojie Liu, Kang Liao, Chunyu Lin, Yao Zhao, Meiqin Liu

LiDAR sensors can provide dependable 3D spatial information at a low frequency (around 10 Hz) and have been widely applied in the fields of autonomous driving and unmanned aerial vehicles (UAV). However, in a multi-sensor system, the higher frame rate of the camera (around 20 Hz) has to be reduced to match that of the LiDAR. In this paper, we propose a novel Pseudo-LiDAR interpolation network (PLIN) to increase the frequency of LiDAR sensor data. PLIN can generate temporally and spatially high-quality point cloud sequences to match the high frequency of cameras. To achieve this goal, we design a coarse interpolation stage guided by consecutive sparse depth maps and the motion relationship between them, and a refined interpolation stage guided by the realistic scene. Using this coarse-to-fine cascade structure, our method can progressively perceive multi-modal information and generate accurate intermediate point clouds. To the best of our knowledge, this is the first deep framework for Pseudo-LiDAR point cloud interpolation, which shows appealing applications in navigation systems equipped with LiDAR and cameras. Experimental results demonstrate that PLIN achieves promising performance on the KITTI dataset, significantly outperforming the traditional interpolation method and the state-of-the-art video interpolation technique.
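Pseudo-LiDAR represents depth maps as point clouds by back-projecting each pixel through the camera intrinsics; the sketch below also includes the kind of naive temporal blend that interpolation networks such as PLIN are designed to outperform (a baseline of our own construction, not the paper's method):

```python
import numpy as np

def depth_to_pseudo_lidar(depth, K):
    """Back-project a dense depth map into a Pseudo-LiDAR point cloud.

    depth: (H, W) metric depth map, K: (3, 3) camera intrinsic matrix.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    valid = depth > 0
    z = depth[valid]
    x = (u[valid] - K[0, 2]) * z / K[0, 0]
    y = (v[valid] - K[1, 2]) * z / K[1, 1]
    return np.stack([x, y, z], axis=1)              # (N, 3) camera-frame points

def interpolate_depth(d0, d1, alpha=0.5):
    """Naive temporal baseline: blend two consecutive sparse depth maps."""
    both = (d0 > 0) & (d1 > 0)                      # pixels observed in both
    mid = np.zeros_like(d0)
    mid[both] = (1 - alpha) * d0[both] + alpha * d1[both]
    return mid
```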


2021, Vol 2021, pp. 1-18
Author(s): Ming Guo, Bingnan Yan, Tengfei Zhou, Deng Pan, Guoli Wang

To obtain high-precision measurement data using vehicle-borne light detection and ranging (LiDAR) scanning (VLS) systems, calibration is necessary before a data acquisition mission. Thus, a novel calibration method based on a homemade target ball is proposed to estimate the system mounting parameters, i.e., the rotational and translational offsets between the LiDAR sensor and the inertial measurement unit (IMU). First, the spherical point cloud is fitted to a sphere to extract the coordinates of its centre, and each scan line on the sphere is fitted to a section of the sphere to calculate the distance ratio from the centre to the two nearest sections; the attitude and trajectory parameters at the centre are then calculated by linear interpolation. Next, the true coordinates of the centre of the sphere are obtained by measuring, with a total station, the coordinates of the reflector directly above the target ball. Finally, the three rotation parameters and three translation parameters are calculated by two least-squares adjustments. Comparisons of the point cloud before and after calibration, and of the calibrated point clouds with a point cloud obtained by a terrestrial laser scanner, show that the accuracy improved significantly after calibration. The experiment indicates that the calibration method based on the homemade target ball can effectively improve the accuracy of the point cloud, promoting VLS development and applications.
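The sphere-fitting step has a closed-form linear least-squares solution, since |p - c|^2 = r^2 can be rearranged into a system that is linear in the centre c and in s = r^2 - |c|^2. A minimal numpy sketch:

```python
import numpy as np

def fit_sphere(pts):
    """Linear least-squares sphere fit to extract the target-ball centre.

    From |p - c|^2 = r^2 it follows that 2 p.c + (r^2 - |c|^2) = |p|^2,
    which is linear in the unknowns [cx, cy, cz, s] with s = r^2 - |c|^2.
    """
    A = np.hstack([2 * pts, np.ones((len(pts), 1))])   # design matrix
    b = np.einsum('ij,ij->i', pts, pts)                # |p|^2 per point
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    centre = x[:3]
    radius = float(np.sqrt(x[3] + centre @ centre))
    return centre, radius
```

The fitted centres, together with their total-station reference coordinates, feed the two least-squares adjustments that recover the six mounting parameters.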

