Pseudo-LiDAR Point Cloud Interpolation Based on 3D Motion Representation and Spatial Supervision

Author(s):  
Haojie Liu ◽  
Kang Liao ◽  
Chunyu Lin ◽  
Yao Zhao ◽  
Yulan Guo

2017 ◽
Vol 25 (19) ◽  
pp. 23451 ◽  
Author(s):  
Florian Willomitzer ◽  
Gerd Häusler

Author(s):  
Guangming Wang ◽  
Chaokang Jiang ◽  
Zehang Shen ◽  
Yanzi Miao ◽  
Hesheng Wang

3D scene flow represents the 3D motion of each point in 3D space and thus forms the foundation of 3D motion perception for autonomous driving and service robots. Although RGBD cameras and LiDAR capture discrete 3D points in space, objects and their motions are usually continuous in the macroscopic world; that is, objects remain consistent with themselves as they flow from the current frame to the next. Based on this insight, a Generative Adversarial Network (GAN) is utilized to self-learn 3D scene flow with no need for ground truth. A fake point cloud of the second frame is synthesized from the predicted scene flow and the point cloud of the first frame. The adversarial training of the generator and discriminator is realized by synthesizing an indistinguishable fake point cloud and discriminating between the real point cloud and the synthesized fake one. Experiments on the KITTI scene flow dataset show that the method achieves promising results without ground truth. Just like a human observing a real-world scene, the proposed approach can determine the consistency of the scene at different moments even though the exact flow value of each point is unknown in advance.
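The adversarial setup described in this abstract can be made concrete with a short training-loop sketch. The PyTorch code below is illustrative only: FlowGenerator and PointDiscriminator are hypothetical stand-ins (simple point-wise MLPs) for whatever backbones the authors actually use, and only the warping step fake_pc2 = pc1 + flow follows the synthesis described above.

```python
import torch
import torch.nn as nn

class FlowGenerator(nn.Module):
    """Hypothetical generator: predicts per-point 3D scene flow for frame 1
    from the two concatenated point cloud frames (a stand-in for a real
    scene-flow backbone)."""
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(6, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 3),
        )

    def forward(self, pc1, pc2):
        # pc1, pc2: (B, N, 3) -> flow: (B, N, 3)
        return self.mlp(torch.cat([pc1, pc2], dim=-1))

class PointDiscriminator(nn.Module):
    """Hypothetical discriminator: scores whether a point cloud looks real."""
    def __init__(self):
        super().__init__()
        self.point_mlp = nn.Sequential(
            nn.Linear(3, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
        )
        self.head = nn.Linear(128, 1)

    def forward(self, pc):
        # Max-pool point features into a global descriptor, then score it.
        feat = self.point_mlp(pc).max(dim=1).values
        return self.head(feat)

gen, disc = FlowGenerator(), PointDiscriminator()
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(pc1, pc2):
    # Warp frame 1 by the predicted flow to synthesize a "fake" frame 2.
    flow = gen(pc1, pc2)
    fake_pc2 = pc1 + flow

    # Discriminator step: real frame 2 vs. detached synthesized frame 2.
    d_real = disc(pc2)
    d_fake = disc(fake_pc2.detach())
    loss_d = (bce(d_real, torch.ones_like(d_real)) +
              bce(d_fake, torch.zeros_like(d_fake)))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Generator step: make the synthesized frame indistinguishable from real.
    d_fake = disc(fake_pc2)
    loss_g = bce(d_fake, torch.ones_like(d_fake))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
```

Note that no ground-truth flow appears anywhere in the loop: the only supervision signal is the discriminator's real/fake decision, which is what lets the method self-learn scene flow.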


2020 ◽  
Vol 29 ◽  
pp. 289-302 ◽  
Author(s):  
Li Li ◽  
Zhu Li ◽  
Vladyslav Zakharchenko ◽  
Jianle Chen ◽  
Houqiang Li

IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 83538-83547
Author(s):  
Junsik Kim ◽  
Jiheon Im ◽  
Sungryeul Rhyu ◽  
Kyuheon Kim

2016 ◽  
Vol 136 (8) ◽  
pp. 1078-1084
Author(s):  
Shoichi Takei ◽  
Shuichi Akizuki ◽  
Manabu Hashimoto

Author(s):  
Jiayong Yu ◽  
Longchen Ma ◽  
Maoyi Tian ◽  
Xiushan Lu

The unmanned aerial vehicle (UAV)-mounted mobile LiDAR system (ULS) is widely used in geomatics owing to its efficient data acquisition and convenient operation. However, due to the limited carrying capacity of a UAV, the sensors integrated into the ULS must be small and lightweight, which results in a decrease in the density of the collected scanning points. This affects the registration between image data and point cloud data. To address this issue, the authors propose a method for registering and fusing ULS sequence images and laser point clouds, wherein the problem of registering point cloud data and image data is converted into a problem of matching feature points between two images. First, a point cloud is selected to produce an intensity image. Subsequently, the corresponding feature points of the intensity image and the optical image are matched, and the exterior orientation parameters are solved using a collinearity equation based on image position and orientation. Finally, the sequence images are fused with the laser point cloud, based on the Global Navigation Satellite System (GNSS) time index of the optical image, to generate a true-color point cloud. The experimental results show that the proposed method achieves higher registration accuracy and fusion speed, demonstrating its effectiveness.
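The pipeline in this abstract (intensity image, feature matching, exterior-orientation solve) can be sketched in a few steps. The Python code below is a minimal illustration under assumed details: a top-down orthographic rasterization, SIFT matching via OpenCV, and cv2.solvePnPRansac as a stand-in for the paper's collinearity-equation solution. The RES value and the mean-height lifting of 2D keypoints back to 3D are simplifying assumptions, not the authors' method.

```python
import numpy as np
import cv2

RES = 0.05  # assumed ground sampling distance (m/pixel), not from the paper

def intensity_image(points, intensities):
    """Rasterize a LiDAR point cloud into a top-down intensity image.
    points: (N, 3) XYZ coordinates; intensities: (N,) return intensities."""
    origin = points[:, :2].min(axis=0)
    pix = ((points[:, :2] - origin) / RES).astype(int)
    img = np.zeros((pix[:, 1].max() + 1, pix[:, 0].max() + 1), dtype=np.uint8)
    scaled = cv2.normalize(intensities, None, 0, 255, cv2.NORM_MINMAX)
    img[pix[:, 1], pix[:, 0]] = scaled.astype(np.uint8).ravel()
    return img, origin

def register(optical_img, points, intensities, K):
    """Match intensity-image features against the optical image, then solve
    the exterior orientation with RANSAC PnP (a stand-in for the
    collinearity-equation solve). K is the 3x3 camera intrinsic matrix."""
    inten_img, origin = intensity_image(points, intensities)

    # SIFT keypoints and descriptors in both image domains.
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(inten_img, None)
    kp2, des2 = sift.detectAndCompute(optical_img, None)

    # Brute-force matching with Lowe's ratio test.
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.7 * n.distance]

    # Lift matched intensity-image keypoints back to 3D. Using the mean
    # point height is a simplification; a real pipeline would look up the
    # Z value of the points that fell into each pixel.
    z_mean = float(points[:, 2].mean())
    obj_pts = np.float32([[kp1[m.queryIdx].pt[0] * RES + origin[0],
                           kp1[m.queryIdx].pt[1] * RES + origin[1],
                           z_mean] for m in good])
    img_pts = np.float32([kp2[m.trainIdx].pt for m in good])

    ok, rvec, tvec, inliers = cv2.solvePnPRansac(obj_pts, img_pts, K, None)
    return rvec, tvec  # exterior orientation of the optical image
```

The final fusion step described in the abstract, indexing optical frames by GNSS time and assigning their RGB values to the laser points, is omitted here for brevity.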

