Efficient 3D Point Cloud Feature Learning for Large-Scale Place Recognition

Author(s):  
Le Hui ◽  
Mingmei Cheng ◽  
Jin Xie ◽  
Jian Yang ◽  
Ming-Ming Cheng

Sensors ◽  
2020 ◽  
Vol 20 (10) ◽  
pp. 2870 ◽  
Author(s):  
Shaorong Xie ◽  
Chao Pan ◽  
Yaxin Peng ◽  
Ke Liu ◽  
Shihui Ying

In the field of autonomous driving, vehicles are equipped with a variety of sensors, including cameras and LiDARs. However, cameras suffer from illumination changes and occlusion, while LiDARs encounter motion distortion, degenerate environments, and limited ranging distance. Fusing the information from these two sensors is therefore worth exploring. In this paper, we propose a fusion network that robustly captures both image and point cloud descriptors to solve the place recognition problem. Our contributions can be summarized as follows: (1) applying a trimmed strategy in point cloud global feature aggregation to improve recognition performance, (2) building a compact fusion framework that captures robust representations of both the image and the 3D point cloud, and (3) learning a proper metric to describe the similarity of the fused global features. Experiments on the KITTI and KAIST datasets show that the proposed fused descriptor is more robust and discriminative than single-sensor descriptors.
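The trimmed aggregation idea — discarding outlying local features before pooling them into a global descriptor — can be sketched as follows. This is a simplified NumPy illustration; the function name, the trim ratio, and the use of max-pooling are assumptions for the sketch, not the paper's actual implementation:

```python
import numpy as np

def trimmed_aggregate(point_features, trim_ratio=0.1):
    """Aggregate per-point features into one global descriptor,
    dropping the features farthest from the mean before pooling.

    point_features: (N, D) array of per-point local features.
    trim_ratio: fraction of outlier features to discard (assumed value).
    """
    mean = point_features.mean(axis=0)
    # Distance of each local feature from the feature-space mean.
    dists = np.linalg.norm(point_features - mean, axis=1)
    # Keep only the closest (1 - trim_ratio) fraction of features.
    keep = int(np.ceil((1.0 - trim_ratio) * len(point_features)))
    trimmed = point_features[np.argsort(dists)[:keep]]
    # Max-pool the surviving features into a single global descriptor.
    global_desc = trimmed.max(axis=0)
    # L2-normalize so descriptors are comparable under a learned metric.
    return global_desc / (np.linalg.norm(global_desc) + 1e-12)
```

Trimming before pooling keeps a few noisy or dynamic-object features from dominating the max-pooled global descriptor, which is the intuition behind the recognition improvement the abstract reports.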


Author(s):  
M. Corsia ◽  
T. Chabardès ◽  
H. Bouchiba ◽  
A. Serna

Abstract. In this paper, we present a method for building Computer Aided Design (CAD) representations of dense 3D point cloud scenes via queries in a large CAD model database. The method is applied to real-world industrial scenes for infrastructure modeling. It first relies on a region growing algorithm based on a novel edge detection method, which produces geometrically coherent regions that can be agglomerated to extract the objects of interest in an industrial environment. Each segment is then processed to compute relevant keypoints and multi-scale features, which are compared against all CAD models in the database. The best-fitting model is estimated together with the rigid six-degree-of-freedom (6 DOF) transformation that positions the CAD model in the 3D scene. The proposed keypoint extractor achieves robust and repeatable results, capturing both the thin geometric details and the global shape of objects. Our new multi-scale descriptor stacks geometric information around each keypoint at short and long range, allowing unambiguous matching for object recognition and positioning. We illustrate the efficiency of our method in a real-world application: 3D segmentation and modeling of electrical substations.
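Stacking geometric information at short and long range around a keypoint can be sketched with covariance eigenvalue statistics computed at several radii. The radii and the choice of statistic are illustrative assumptions, not the authors' descriptor:

```python
import numpy as np

def multiscale_descriptor(points, keypoint, radii=(0.1, 0.5, 2.0)):
    """Stack simple shape statistics of the neighbourhoods around a
    keypoint at several radii into one descriptor vector.

    points: (N, 3) scene points; keypoint: (3,) query location.
    radii: neighbourhood scales, short to long range (assumed values).
    """
    desc = []
    offsets = points - keypoint
    dists = np.linalg.norm(offsets, axis=1)
    for r in radii:
        nb = offsets[dists <= r]
        if len(nb) < 3:
            # Too few neighbours at this scale: pad with zeros.
            desc.extend([0.0, 0.0, 0.0])
            continue
        # Covariance eigenvalues summarise local shape
        # (linear / planar / volumetric), sorted largest first.
        cov = np.cov(nb.T)
        evals = np.sort(np.linalg.eigvalsh(cov))[::-1]
        evals = evals / (evals.sum() + 1e-12)
        desc.extend(evals.tolist())
    return np.asarray(desc)
```

Concatenating the per-scale statistics is what makes the matching less ambiguous: two objects with similar fine detail but different global shape (or vice versa) differ in at least one scale of the stacked vector.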


Author(s):  
T. Shinohara ◽  
H. Xiu ◽  
M. Matsuoka

Abstract. This study introduces a novel image-to-3D-point-cloud translation method based on a conditional generative adversarial network that creates large-scale 3D point clouds. It can generate point clouds, supervised by airborne LiDAR observations, from aerial images. The network is composed of an encoder that produces latent features from input images, a generator that translates latent features into fake point clouds, and a discriminator that classifies point clouds as real or fake. The encoder is a pre-trained ResNet; to overcome the difficulty of generating 3D point clouds of an outdoor scene, we use a FoldingNet conditioned on the ResNet features. After a fixed number of iterations, our generator can produce fake point clouds that correspond to the input image. Experimental results show that our network can learn and generate plausible point clouds using data from the 2018 IEEE GRSS Data Fusion Contest.
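The FoldingNet idea used by the generator — deforming a fixed 2D grid into a 3D point cloud conditioned on a latent code — can be sketched with a tiny NumPy MLP. The weights here are random and untrained, purely to show the data flow; in the actual method the folding network is trained adversarially:

```python
import numpy as np

def fold_grid(latent, grid_size=32, seed=0):
    """Deform a fixed 2D grid into a 3D point cloud conditioned on a
    latent code, in the spirit of FoldingNet (random untrained weights,
    illustration only).

    latent: (D,) latent feature, e.g. from an image encoder.
    Returns: (grid_size**2, 3) generated points.
    """
    rng = np.random.default_rng(seed)
    # Fixed 2D grid of folding seed points in [-1, 1]^2.
    lin = np.linspace(-1.0, 1.0, grid_size)
    u, v = np.meshgrid(lin, lin)
    grid = np.stack([u.ravel(), v.ravel()], axis=1)     # (M, 2)
    # Concatenate the latent code to every grid point.
    codes = np.tile(latent, (grid.shape[0], 1))          # (M, D)
    x = np.concatenate([grid, codes], axis=1)            # (M, 2 + D)
    # One hidden layer "folds" the conditioned grid into 3D.
    W1 = rng.standard_normal((x.shape[1], 64)) * 0.1
    W2 = rng.standard_normal((64, 3)) * 0.1
    return np.tanh(x @ W1) @ W2                          # (M, 3)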


2018 ◽  
Vol 84 (5) ◽  
pp. 297-308 ◽  
Author(s):  
Timo Hackel ◽  
Jan D. Wegner ◽  
Nikolay Savinov ◽  
Lubor Ladicky ◽  
Konrad Schindler ◽  
...  
