LiDAR Data Enrichment by Fusing Spatial and Temporal Adjacent Frames

2021 ◽  
Vol 13 (18) ◽  
pp. 3640
Author(s):  
Hao Fu ◽  
Hanzhang Xue ◽  
Xiaochang Hu ◽  
Bokai Liu

In autonomous driving scenarios, the point cloud generated by LiDAR is usually considered an accurate but sparse representation of the environment. To enrich the LiDAR point cloud, this paper proposes a new technique that combines spatially adjacent frames and temporally adjacent frames. To eliminate the “ghost” artifacts caused by moving objects, a moving-point identification algorithm is introduced that compares range images across frames. Experiments are performed on the publicly available SemanticKITTI dataset. The experimental results show that the proposed method outperforms most of the previous approaches, and it is the only one among them that can run in real time for online use.
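The range-image comparison behind such a moving-point test can be sketched as a free-space check: if the current frame measures a *larger* range than an accumulated past point at the same pixel, nothing occupies that spot anymore, so the past point must have moved. A minimal sketch, not the paper's implementation; the image size, fields of view, and noise margin are illustrative assumptions:

```python
import numpy as np

def to_range_image(points, h=64, w=1024, fov_up=3.0, fov_down=-25.0):
    """Spherically project an (N, 3) point cloud to an h x w range image;
    also return each point's pixel coordinates and range."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(y, x)
    pitch = np.arcsin(np.clip(z / np.maximum(r, 1e-9), -1.0, 1.0))
    up, down = np.radians(fov_up), np.radians(fov_down)
    u = ((1.0 - (yaw + np.pi) / (2.0 * np.pi)) * w).astype(int) % w
    v = np.clip(((up - pitch) / (up - down) * h).astype(int), 0, h - 1)
    img = np.full((h, w), np.inf)
    np.minimum.at(img, (v, u), r)   # keep the nearest return per pixel
    return img, (v, u), r

def moving_point_mask(past_points, current_points, margin=0.5):
    """Flag past points that violate free space observed now: the current
    frame sees farther than the past point at the same pixel, so the past
    point no longer occupies that spot (it moved)."""
    cur_img, _, _ = to_range_image(current_points)
    _, (v, u), r_past = to_range_image(past_points)
    return np.isfinite(cur_img[v, u]) & (cur_img[v, u] > r_past + margin)
```

The `margin` absorbs range noise; the paper's actual criterion and parameters may differ.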

2011 ◽  
Vol 382 ◽  
pp. 418-421
Author(s):  
Dong Yan Cui ◽  
Zai Xing Xie

This paper presents an automatic method for tracking moving objects: a segmentation algorithm quickly and efficiently extracts a moving object, and inter-frame vectors established in the follow-up frames are used to track the moving objects of interest. Experimental results show that the algorithm can accurately and effectively track the moving objects of interest; because the algorithm is simple and its computational complexity is low, it is well positioned to meet the needs of real-time monitoring systems for extracting and tracking moving objects of interest.
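The inter-frame vector idea can be sketched as nearest-centroid association: each segmented object in one frame is linked to the next frame's segment with the smallest centroid displacement, and that displacement is the tracking vector. This is only a hedged sketch of the idea; the paper's segmentation and vector construction are not specified here, and `max_shift` is an illustrative gating threshold:

```python
import numpy as np

def link_segments(prev_centroids, curr_centroids, max_shift=20.0):
    """Associate each object from the previous frame with the current-frame
    segment whose centroid yields the smallest inter-frame displacement
    vector; reject matches that jump farther than max_shift pixels."""
    links = {}
    for i, p in enumerate(prev_centroids):
        d = [np.linalg.norm(np.asarray(c) - np.asarray(p)) for c in curr_centroids]
        j = int(np.argmin(d))
        if d[j] <= max_shift:
            # store the matched segment index and the displacement vector
            links[i] = (j, tuple(np.asarray(curr_centroids[j]) - np.asarray(p)))
    return links
```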


Electronics ◽  
2021 ◽  
Vol 11 (1) ◽  
pp. 11
Author(s):  
Xing Xie ◽  
Lin Bai ◽  
Xinming Huang

LiDAR has been widely used in autonomous driving systems to provide high-precision 3D geometric information about the vehicle’s surroundings for perception, localization, and path planning. LiDAR-based point cloud semantic segmentation is an important task with a critical real-time requirement. However, most existing convolutional neural network (CNN) models for 3D point cloud semantic segmentation are very complex and can hardly be processed in real time on an embedded platform. In this study, a lightweight CNN structure was proposed for projection-based LiDAR point cloud semantic segmentation with only 1.9 M parameters, an 87% reduction compared with state-of-the-art networks. When evaluated on a GPU, the processing time was 38.5 ms per frame, and it achieved a 47.9% mIoU score on the SemanticKITTI dataset. In addition, the proposed CNN was mapped onto an FPGA using the NVDLA architecture, which results in a 2.74x speedup over the GPU implementation and a 46 times improvement in power efficiency.
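As a back-of-the-envelope illustration (not taken from the paper) of how lightweight segmentation backbones shed parameters, replacing a standard 3×3 convolution with a depthwise-separable one cuts the per-layer parameter count by roughly the same order as the reduction reported above:

```python
def conv_params(c_in, c_out, k):
    """Parameter count of a standard k x k convolution (bias omitted)."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise k x k convolution followed by a 1 x 1 pointwise convolution,
    a typical way lightweight backbones shrink their parameter count."""
    return c_in * k * k + c_in * c_out

std = conv_params(64, 64, 3)                    # 36864 parameters
light = depthwise_separable_params(64, 64, 3)   # 576 + 4096 = 4672 parameters
reduction = 1 - light / std                     # ~0.87, i.e. roughly 87% fewer
```

Whether this particular factorization is the mechanism behind the paper's 87% figure is not stated in the abstract; the arithmetic only shows the order of saving such substitutions buy.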


2022 ◽  
Author(s):  
Yuehua Zhao ◽  
Ma Jie ◽  
Chong Nannan ◽  
Wen Junjie

Abstract Real-time large-scale point cloud segmentation is an important but challenging task for practical applications like autonomous driving. Existing real-time methods have achieved acceptable performance by aggregating local information. However, most of them exploit only local spatial information or local semantic information independently, and few consider the complementarity of both. In this paper, we propose a model named Spatial-Semantic Incorporation Network (SSI-Net) for real-time large-scale point cloud segmentation. A Spatial-Semantic Cross-correction (SSC) module is introduced in SSI-Net as a basic unit. High-quality contextual features can be learned through SSC by correcting and updating semantic features using spatial cues, and vice versa. Adopting the plug-and-play SSC module, we design SSI-Net as an encoder-decoder architecture. To ensure efficiency, it also adopts a random-sampling-based hierarchical network structure. Extensive experiments on several prevalent datasets demonstrate that our method achieves state-of-the-art performance.


2018 ◽  
Vol 3 (4) ◽  
pp. 3434-3440 ◽  
Author(s):  
Yiming Zeng ◽  
Yu Hu ◽  
Shice Liu ◽  
Jing Ye ◽  
Yinhe Han ◽  
...  

Electronics ◽  
2018 ◽  
Vol 7 (11) ◽  
pp. 276 ◽  
Author(s):  
Jiyoung Jung ◽  
Sung-Ho Bae

The generation of digital maps with lane-level resolution is rapidly becoming a necessity, as semi- or fully-autonomous driving vehicles are now commercially available. In this paper, we present a practical real-time working prototype for road lane detection using LiDAR data, which can be further extended to automatic lane-level map generation. Conventional lane detection methods are limited to simple road conditions and are not suitable for complex urban roads with various road signs on the ground. Given a 3D point cloud scanned by a 3D LiDAR sensor, we categorized the points of the drivable region and distinguished the points of the road signs on the ground. Then, we developed an expectation-maximization method to detect parallel lines and update the 3D line parameters in real time, as the probe vehicle equipped with the LiDAR sensor moved forward. The detected and recorded line parameters were integrated to build a lane-level digital map with the help of a GPS/INS sensor. The proposed system was tested to generate accurate lane-level maps of two complex urban routes. The experimental results showed that the proposed system was fast and practical in terms of effectively detecting road lines and generating lane-level maps.
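The expectation-maximization step for parallel lines can be sketched in its simplest hard-assignment form: with the shared lane direction taken as known, the algorithm alternates between assigning points to the nearest line and re-estimating each line's lateral offset. The paper additionally updates full 3D line parameters online as the vehicle moves; `direction`, `k`, and `iters` here are illustrative assumptions:

```python
import numpy as np

def fit_parallel_lines(points, direction, k=2, iters=20):
    """EM-style fitting of k parallel lines to 2D points, with hard
    assignments (equivalent to k-means on the lateral offset). `direction`
    is a unit 2-vector along the lanes, assumed known in this sketch."""
    direction = np.asarray(direction, float)
    normal = np.array([-direction[1], direction[0]])
    t = np.asarray(points, float) @ normal      # lateral offset of each point
    offsets = np.linspace(t.min(), t.max(), k)  # initial line offsets
    for _ in range(iters):
        # E-step: assign each point to the nearest line
        assign = np.argmin(np.abs(t[:, None] - offsets[None, :]), axis=1)
        # M-step: re-estimate each line's offset from its assigned points
        for j in range(k):
            if np.any(assign == j):
                offsets[j] = t[assign == j].mean()
    return np.sort(offsets), assign
```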


2020 ◽  
Vol 12 (22) ◽  
pp. 3830
Author(s):  
Hui Liu ◽  
Ciyun Lin ◽  
Dayong Wu ◽  
Bowen Gong

More and more researchers are turning to light detection and ranging (LiDAR) as a roadside sensor to obtain traffic flow data. Filtering and clustering are common methods to extract pedestrians and vehicles from point clouds, but this kind of method ignores the impact of environmental information on traffic. The segmentation process is a crucial part of detailed scene understanding, which can be especially helpful for locating, recognizing, and classifying objects in certain scenarios. However, there are few studies on the segmentation of low-channel (16 channels in this paper) roadside 3D LiDAR. This paper presents a novel slice-based segmentation method for point clouds of roadside LiDAR. The proposed method can be divided into two parts: instance segmentation and semantic segmentation. Instance segmentation of the point cloud is based on the region-growing method, and we propose a seed-point generation method for low-channel LiDAR data; furthermore, we optimize the instance segmentation under occlusion. Semantic segmentation of the point cloud is realized by classifying and labeling the objects obtained from instance segmentation. For labeling static objects, we represent and classify an object through features derived from its slices. For labeling moving objects, we propose a recurrent neural network (RNN)-based model, whose accuracy reaches 98.7%. The results imply that the slice-based method achieves a good segmentation effect and that slices have good potential for point cloud segmentation.
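The region-growing backbone of the instance-segmentation stage can be sketched as a breadth-first flood fill over a distance threshold. The paper's actual contributions (seed-point generation for low-channel LiDAR and occlusion handling) are specific to its slicing scheme and are not reproduced here; seeds and `radius` are simply given as illustrative inputs:

```python
import numpy as np
from collections import deque

def region_grow(points, seeds, radius=0.5):
    """Generic region growing: starting from each seed index, repeatedly
    absorb unvisited points within `radius` of the growing cluster.
    Returns one cluster label per point (-1 if never reached)."""
    points = np.asarray(points, float)
    labels = np.full(len(points), -1)
    for cluster_id, seed in enumerate(seeds):
        if labels[seed] != -1:
            continue                      # seed already swallowed by a cluster
        queue = deque([seed])
        labels[seed] = cluster_id
        while queue:
            i = queue.popleft()
            d = np.linalg.norm(points - points[i], axis=1)
            for j in np.where((d <= radius) & (labels == -1))[0]:
                labels[j] = cluster_id
                queue.append(j)
    return labels
```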


Author(s):  
T. Wu ◽  
B. Vallet ◽  
C. Demonceaux ◽  
J. Liu

Abstract. Indoor mapping attracts more attention with the development of 2D and 3D camera and Lidar sensor. Lidar systems can provide a very high resolution and accurate point cloud. When aiming to reconstruct the static part of the scene, moving objects should be detected and removed which can prove challenging. This paper proposes a generic method to merge meshes produced from Lidar data that allows to tackle the issues of moving objects removal and static scene reconstruction at once. The method is adapted to a platform collecting point cloud from two Lidar sensors with different scan direction, which will result in different quality. Firstly, a mesh is efficiently produced from each sensor by exploiting its natural topology. Secondly, a visibility analysis is performed to handle occlusions (due to varying viewpoints) and remove moving objects. Then, a boolean optimization allows to select which triangles should be removed from each mesh. Finally, a stitching method is used to connect the selected mesh pieces. Our method is demonstrated on a Navvis M3 (2D laser ranger system) dataset and compared with Poisson and Delaunay based reconstruction methods.


2021 ◽  
Vol 257 ◽  
pp. 02055
Author(s):  
Sijia Liu ◽  
Jie Luo ◽  
Jinmin Hu ◽  
Haoru Luo ◽  
Yu Liang

Autonomous driving technology is one of the most popular technologies today, and positioning is a basic problem of autonomous navigation for autonomous vehicles. GPS is widely used as a relatively mature solution in open outdoor road environments. However, GPS signals are greatly affected in complex environments with obstruction and electromagnetic interference, and in severe cases the signal may be lost entirely, which has a great impact on the accuracy, stability, and reliability of positioning. For the time being, L4 and most L3 autonomous driving modules still perform registration and positioning against a pre-built high-precision map. Based on this, this paper elaborates on the reconstruction of the experimental scene environment, using the SLAM (simultaneous localization and mapping) method to construct a high-precision point cloud map. On the constructed prior map, 3D laser point cloud NDT (normal distributions transform) matching is used for real-time positioning, which is tested and verified on the “JAC Electric Vehicle” platform. The experimental results show that the algorithm has high positioning accuracy and that its real-time performance meets the requirements, so it can replace GPS to position autonomous vehicles when the GPS signal is weak or absent while providing positioning accuracy that meets the requirements.
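NDT matching scores a candidate pose by how well the scan falls into per-voxel Gaussians fitted to the prior map. A minimal 2D, translation-only sketch, not the paper's implementation; cell size, covariance regularization, and the evaluation below are illustrative assumptions:

```python
import numpy as np

def build_ndt(map_points, cell=2.0):
    """Fit a Gaussian (mean, inverse covariance) to the map points falling
    in each voxel cell; skip cells with too few points."""
    map_points = np.asarray(map_points, float)
    keys = np.floor(map_points / cell).astype(int)
    cells = {}
    for key in set(map(tuple, keys)):
        pts = map_points[(keys == key).all(axis=1)]
        if len(pts) >= 3:
            cov = np.cov(pts.T) + 1e-3 * np.eye(pts.shape[1])  # regularize
            cells[key] = (pts.mean(axis=0), np.linalg.inv(cov))
    return cells

def ndt_score(scan, offset, cells, cell=2.0):
    """Sum of Gaussian likelihoods of the shifted scan under the NDT map;
    higher means the candidate translation aligns the scan better."""
    score = 0.0
    for p in np.asarray(scan, float) + offset:
        key = tuple(np.floor(p / cell).astype(int))
        if key in cells:
            mu, icov = cells[key]
            d = p - mu
            score += np.exp(-0.5 * d @ icov @ d)
    return score
```

A real system optimizes the full pose with Newton iterations over this score rather than evaluating fixed offsets, and does so in 3D.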


Sensors ◽  
2020 ◽  
Vol 20 (21) ◽  
pp. 6264
Author(s):  
Xinyuan Tu ◽  
Jian Zhang ◽  
Runhao Luo ◽  
Kai Wang ◽  
Qingji Zeng ◽  
...  

We present a real-time Truncated Signed Distance Field (TSDF)-based three-dimensional (3D) semantic reconstruction for LiDAR point clouds, which achieves incremental surface reconstruction and highly accurate semantic segmentation. High-precision 3D semantic reconstruction in real time on LiDAR data is important but challenging, as high-accuracy Light Detection and Ranging (LiDAR) data are massive for 3D reconstruction. We therefore propose a line-of-sight algorithm to update the implicit surface incrementally. Meanwhile, in order to use more semantic information effectively, an online attention-based spatial and temporal feature fusion method is proposed, which is well integrated into the reconstruction system. We implement parallel computation in the reconstruction and semantic fusion processes, which achieves real-time performance. We demonstrate our approach on the CARLA dataset, the Apollo dataset, and our own dataset. Compared with state-of-the-art mapping methods, our method has a great advantage in terms of both quality and speed, meeting the needs of robotic mapping and navigation.
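A line-of-sight TSDF update can be sketched as follows: each LiDAR ray is sampled from the sensor to just past the measured hit, and every voxel the ray touches folds its truncated signed distance to the surface into a running weighted average. This is a generic sketch of the technique, not the paper's parallel implementation; voxel size, truncation band, and sampling step are illustrative assumptions:

```python
import numpy as np

def integrate_ray(tsdf, weights, origin, hit, voxel=0.1, trunc=0.3):
    """Update sparse TSDF and weight dictionaries (voxel index -> value)
    along one ray, sampling from the sensor to `trunc` behind the hit."""
    depth = np.linalg.norm(hit - origin)
    direction = (hit - origin) / depth
    for t in np.arange(0.0, depth + trunc, voxel / 2):
        p = origin + t * direction
        # signed distance: positive in front of the surface, negative behind
        sdf = np.clip(depth - t, -trunc, trunc)
        key = tuple(np.floor(p / voxel).astype(int))
        w = weights.get(key, 0.0)
        tsdf[key] = (tsdf.get(key, 0.0) * w + sdf) / (w + 1.0)  # running mean
        weights[key] = w + 1.0
    return tsdf, weights
```

The surface is then extracted where the field crosses zero (e.g. by marching cubes).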


Electronics ◽  
2021 ◽  
Vol 10 (16) ◽  
pp. 1960
Author(s):  
Dongwan Kang ◽  
Anthony Wong ◽  
Banghyon Lee ◽  
Jungha Kim

Autonomous vehicles perceive objects through various sensors. Cameras, radar, and LiDAR are generally used as vehicle sensors, each of which has its own characteristics. For example, cameras are used for high-level scene understanding, radar for weather-resistant distance perception, and LiDAR for accurate distance recognition. The ability of a camera to understand a scene has increased dramatically with the recent development of deep learning. In addition, technologies that emulate other sensors using a single sensor are being developed. Therefore, in this study, a LiDAR data-based scene understanding method was developed through deep learning. Deep learning approaches to LiDAR data are mainly divided into point, projection, and voxel methods. The purpose of this study is to apply a projection method to secure real-time performance. The convolutional neural network methods used with conventional cameras can be easily applied to the projection method. In addition, an adaptive breakpoint detector method used for conventional 2D LiDAR information is utilized to resolve the misclassification caused by the conversion from 2D into 3D. The results of this study are evaluated through a comparison with other technologies.
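The adaptive breakpoint detector mentioned above is a classic 2D-scan technique (after Borges and Aldon): the allowed gap between consecutive returns scales with range and beam geometry, so sparse far-field points are not split into spurious segments. A minimal sketch, with `lam` (worst-case incidence angle) and the noise term as illustrative parameters:

```python
import numpy as np

def adaptive_breakpoints(ranges, dphi, lam=np.radians(10), sigma=0.01):
    """Flag index i as a breakpoint when the Euclidean gap between
    consecutive returns i-1 and i exceeds a range-adaptive threshold.
    dphi is the angular resolution; sigma models range noise."""
    ranges = np.asarray(ranges, float)
    flags = np.zeros(len(ranges), dtype=bool)
    for i in range(1, len(ranges)):
        r0, r1 = ranges[i - 1], ranges[i]
        # Euclidean distance between consecutive returns (law of cosines)
        gap = np.sqrt(r0**2 + r1**2 - 2.0 * r0 * r1 * np.cos(dphi))
        # threshold grows with range, so far sparse returns stay connected
        d_max = r0 * np.sin(dphi) / np.sin(lam - dphi) + 3.0 * sigma
        flags[i] = gap > d_max
    return flags
```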

