Local k-NNs pattern in Omni-Direction graph convolution neural network for 3D point clouds

2020 ◽  
Vol 413 ◽  
pp. 487-498
Author(s):  
Wenjing Zhang ◽  
Songzhi Su ◽  
Beizhan Wang ◽  
Qingqi Hong ◽  
Li Sun
IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 87857-87869
Author(s):  
Jue Hou ◽  
Wenbin Ouyang ◽  
Bugao Xu ◽  
Rongwu Wang

2022 ◽  
Vol 41 (1) ◽  
pp. 1-21
Author(s):  
Chems-Eddine Himeur ◽  
Thibault Lejemble ◽  
Thomas Pellegrini ◽  
Mathias Paulin ◽  
Loic Barthe ◽  
...  

In recent years, Convolutional Neural Networks (CNNs) have proven to be efficient analysis tools for processing point clouds, e.g., for reconstruction, segmentation, and classification. In this article, we focus on the classification of edges in point clouds, where both edges and their surroundings are described. We propose a new parameterization that adds to each point a set of differential information on its surrounding shape, reconstructed at different scales. These parameters, stored in a Scale-Space Matrix (SSM), provide well-suited information from which an adequate neural network can learn a description of edges and use it to efficiently detect them in acquired point clouds. After successfully applying a multi-scale CNN on SSMs for the efficient classification of edges and their neighborhoods, we propose a new lightweight neural network architecture that outperforms the CNN in learning time, processing time, and classification capability. Our architecture is compact, requires small learning sets, is very fast to train, and classifies millions of points in seconds.
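As a rough illustration of the multi-scale idea, the sketch below computes, for each point, covariance-based shape features on neighborhoods of increasing radius and stacks them into a per-point matrix. The specific differential quantities, the scale values, and the function name are assumptions for illustration, not the paper's actual SSM definition.

```python
# Minimal sketch of a per-point multi-scale descriptor in the spirit of
# the Scale-Space Matrix (SSM): geometric features are computed on
# neighborhoods of increasing radius and stacked row by row.
# The covariance-eigenvalue features here are an assumption; the paper
# uses its own differential parameterization.
import numpy as np
from scipy.spatial import cKDTree

def scale_space_matrix(points, scales=(0.05, 0.1, 0.2, 0.4)):
    """points: (N, 3) array; returns an (N, len(scales), 3) feature tensor."""
    tree = cKDTree(points)
    ssm = np.zeros((len(points), len(scales), 3))
    for s, radius in enumerate(scales):
        for i, p in enumerate(points):
            idx = tree.query_ball_point(p, radius)
            if len(idx) < 3:
                continue  # too few neighbors to estimate a local shape
            nbrs = points[idx] - points[idx].mean(axis=0)
            # Eigenvalues of the local covariance summarize the shape:
            # one dominant eigenvalue -> edge/line, two -> plane, three -> volume.
            ev = np.linalg.eigvalsh(nbrs.T @ nbrs / len(idx))[::-1]
            ssm[i, s] = ev / max(ev.sum(), 1e-12)
    return ssm
```

A small CNN, or the lighter architecture described above, can then consume each scales-by-features matrix as an image-like input.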


2021 ◽  
Vol 7 (2) ◽  
pp. 57-74
Author(s):  
Lamyaa Gamal EL-Deen Taha ◽  
A. I. Ramzi ◽  
A. Syarawi ◽  
A. Bekheet

Until recently, the most accurate digital surface models were obtained from airborne lidar. With the development of a new generation of large-format digital photogrammetric aerial cameras, a fully digital photogrammetric workflow became possible. Digital airborne images are sources for elevation extraction and orthophoto generation. This research is concerned with the generation of digital surface models and orthophotos from high-resolution images. The following steps were performed. Benchmark data from LIDAR and a digital aerial camera were used. Firstly, image orientation and aerial triangulation (AT) were performed. Then, a digital surface model (DSM) was generated automatically from the digital aerial camera data. Thirdly, a true digital orthoimage was generated from the digital aerial camera data, and another orthoimage was generated using the LIDAR digital surface model (DSM). The Leica Photogrammetry Suite (LPS) module of Erdas Imagine 2014 software was used for processing. The resulting orthoimages from both techniques were then mosaicked. The results show that the automatic DSM produced by the digital aerial camera method yields much denser photogrammetric 3D point clouds than the LIDAR 3D point clouds. The true orthoimage produced by the second approach was found to be better than that produced by the first approach. Five cue-combination approaches were tested for classification of the best orthorectified image mosaic, using a subpixel-based classifier (neural network) and pixel-based classifiers (minimum distance and maximum likelihood). Multiple cues were extracted, such as texture (entropy and mean), the digital elevation model, the digital surface model, the normalized digital surface model (nDSM), and the intensity image. The contributions of the individual cues to the classification were evaluated. The best cue integration was found to be intensity (pan) + nDSM + entropy, followed by intensity (pan) + nDSM + mean, then intensity image + mean + entropy, then the DSM image with two texture measures (mean and entropy), followed by the colour image. Integration with height data increases the accuracy, as does integration with entropy texture. Across the fifteen resulting classification cases, the maximum likelihood classifier performed best, followed by minimum distance and then the neural network classifier. We attribute this to the fine resolution of the digital camera image. The subpixel classifier (neural network) is not suitable for classifying digital aerial camera images.
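To make the cue-integration step concrete, here is a minimal sketch of the best-performing combination (intensity + nDSM + entropy): the nDSM is the DSM minus the DEM, an entropy texture is computed from the intensity image, and the stacked cues are classified with a Gaussian maximum likelihood classifier, for which scikit-learn's QuadraticDiscriminantAnalysis is the usual equivalent. Array names, the window size, and shapes are assumptions.

```python
# Sketch of the reported best cue combination (intensity + nDSM + entropy),
# classified with a Gaussian maximum likelihood classifier.
# Assumes intensity, dsm, dem are co-registered (H, W) float arrays and
# intensity is nonnegative; train_labels aligns with the pixels selected
# by train_mask.
import numpy as np
from skimage.filters.rank import entropy
from skimage.morphology import disk
from skimage.util import img_as_ubyte
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

def classify(intensity, dsm, dem, train_mask, train_labels):
    ndsm = dsm - dem                    # normalized DSM: height above terrain
    # Entropy texture over a 5-pixel-radius window (window size is an assumption).
    tex = entropy(img_as_ubyte(intensity / intensity.max()), disk(5))
    cues = np.dstack([intensity, ndsm, tex]).reshape(-1, 3)
    clf = QuadraticDiscriminantAnalysis()   # Gaussian maximum likelihood
    clf.fit(cues[train_mask.ravel()], train_labels)
    return clf.predict(cues).reshape(intensity.shape)
```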


Author(s):  
Sergei Voronin ◽  
Artyom Makovetskii ◽  
Aleksei Voronin ◽  
Dmitrii Zhernov

Sensors ◽  
2021 ◽  
Vol 21 (24) ◽  
pp. 8382
Author(s):  
Hongjae Lee ◽  
Jiyoung Jung

Urban scene modeling is a challenging but essential task for various applications, such as 3D map generation, city digitization, and AR/VR/metaverse applications. To model man-made structures, such as roads and buildings, which are the major components in general urban scenes, we present a clustering-based plane segmentation neural network using 3D point clouds, called hybrid K-means plane segmentation (HKPS). The proposed method segments unorganized 3D point clouds into planes by training the neural network to estimate the appropriate number of planes in the point cloud based on hybrid K-means clustering. We consider both the Euclidean distance and cosine distance to cluster nearby points in the same direction for better plane segmentation results. Our network does not require any labeled information for training. We evaluated the proposed method using the Virtual KITTI dataset and showed that our method outperforms conventional methods in plane segmentation. Our code is publicly available.
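The hybrid distance at the core of this idea can be sketched as a K-means variant whose assignment step mixes the Euclidean distance between point positions with the cosine distance between their normals. The network that estimates the number of planes is not shown here, and the weighting term lam and update rule are assumptions.

```python
# Illustrative hybrid K-means step combining Euclidean distance on
# positions with cosine distance on normals, as described above.
# Assumes normals are unit length; lam balances the two distances.
import numpy as np

def hybrid_kmeans(points, normals, k, lam=0.5, iters=20, seed=0):
    """points, normals: (N, 3) arrays; returns per-point cluster indices."""
    rng = np.random.default_rng(seed)
    pick = rng.choice(len(points), k, replace=False)
    centers = points[pick].astype(float)
    c_normals = normals[pick].astype(float)
    for _ in range(iters):
        d_euc = np.linalg.norm(points[:, None] - centers[None], axis=2)
        d_cos = 1.0 - np.abs(normals @ c_normals.T)  # plane normals are sign-ambiguous
        assign = np.argmin(d_euc + lam * d_cos, axis=1)
        for j in range(k):
            m = assign == j
            if m.any():
                centers[j] = points[m].mean(axis=0)
                n = normals[m].mean(axis=0)
                c_normals[j] = n / max(np.linalg.norm(n), 1e-12)
    return assign
```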


Sensors ◽  
2018 ◽  
Vol 18 (11) ◽  
pp. 3681 ◽  
Author(s):  
Le Zhang ◽  
Jian Sun ◽  
Qiang Zheng

The recognition of three-dimensional (3D) lidar (light detection and ranging) point clouds remains a significant issue in point cloud processing. Traditional point cloud recognition employs 3D point clouds of the whole object. However, lidar data is a collection of two-and-a-half-dimensional (2.5D) point clouds (each 2.5D point cloud comes from a single view) obtained by scanning the object over a certain field angle. To deal with this problem, we first propose a novel representation that expresses 3D point clouds using 2.5D point clouds from multiple views, and we generate multi-view 2.5D point cloud data based on the Point Cloud Library (PCL). Subsequently, we design an effective recognition model based on a multi-view convolutional neural network. The model acts directly on the raw 2.5D point clouds from all views and learns a global feature descriptor by fusing the features from all views through a view fusion network. Experiments show that our approach achieves excellent recognition performance without requiring three-dimensional reconstruction or any preprocessing of the point clouds. In conclusion, this work effectively addresses the recognition problem for lidar point clouds and offers significant practical value.
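A common way to realize such view fusion is a shared per-view encoder followed by symmetric pooling across views. The toy PyTorch sketch below uses a PointNet-style shared MLP and element-wise max pooling; it illustrates the fusion idea only and is not the paper's actual architecture.

```python
# Toy multi-view fusion: a shared encoder produces a descriptor for each
# 2.5D view, and element-wise max pooling merges the views into one
# global descriptor. The encoder and dimensions are assumptions.
import torch
import torch.nn as nn

class MultiViewFusion(nn.Module):
    def __init__(self, n_classes, feat_dim=256):
        super().__init__()
        self.encoder = nn.Sequential(      # shared across all views
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        self.head = nn.Linear(feat_dim, n_classes)

    def forward(self, views):
        # views: (B, V, N, 3) -- B objects, V views, N points per view
        f = self.encoder(views)            # (B, V, N, feat_dim) per-point features
        f = f.max(dim=2).values            # (B, V, feat_dim) per-view descriptor
        g = f.max(dim=1).values            # (B, feat_dim) fused global descriptor
        return self.head(g)
```

The max-pooling fusion is order-invariant, so the prediction does not depend on the order in which the views are scanned.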


Author(s):  
SU YAN ◽  
Lei Yu

Simultaneous Localization and Mapping (SLAM) is one of the key technologies used in sweeping robots, autonomous vehicles, virtual reality, and other fields. This paper presents a dense RGB-D SLAM reconstruction algorithm based on a convolutional neural network with multi-layer image invariant feature transformation. The main contribution of the system lies in the construction of a convolutional neural network based on multi-layer image invariant features, which optimizes the extraction of ORB (Oriented FAST and Rotated BRIEF) feature points and the reconstruction quality. After feature point matching, pose estimation, loop closure detection, and other steps, the 3D point clouds are finally stitched together to construct a complete and smooth spatial model. The system improves accuracy and robustness in feature point processing and pose estimation. Comparative experiments show that the optimized algorithm saves 0.093 s compared to the ordinary extraction algorithm while maintaining a high accuracy rate. The reconstruction experiments show that the resulting spatial models have clearer details and smoother connections, with no fault layers, compared to the original ones. The reconstruction results are generally better than those of other common algorithms, such as Kintinuous, ElasticFusion, and ORB-SLAM2 dense reconstruction.
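For context, the baseline pipeline that such a system optimizes, i.e., ORB extraction, matching, and pose estimation from RGB-D frames, can be sketched with standard OpenCV calls. The camera intrinsics K, the depth scale, and the function name are assumptions.

```python
# Baseline ORB matching and pose estimation between two RGB-D frames:
# extract ORB features, match them, lift matches to 3D with the depth
# image, and recover relative pose with PnP + RANSAC.
# gray1, gray2: 8-bit grayscale frames; depth1: aligned depth (mm assumed).
import cv2
import numpy as np

def estimate_pose(gray1, gray2, depth1, K, depth_scale=1e-3):
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(gray1, None)
    kp2, des2 = orb.detectAndCompute(gray2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

    obj, img = [], []
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    for m in matches:
        u, v = kp1[m.queryIdx].pt
        z = depth1[int(v), int(u)] * depth_scale
        if z <= 0:
            continue  # skip pixels with no valid depth
        # Back-project the frame-1 pixel to a 3D point with the pinhole model.
        obj.append([(u - cx) * z / fx, (v - cy) * z / fy, z])
        img.append(kp2[m.trainIdx].pt)

    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        np.float32(obj), np.float32(img), K, None)
    return rvec, tvec  # rotation (Rodrigues vector) and translation of frame 2
```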

