CLASSIFICATION OF MLS POINT CLOUDS IN URBAN SCENES USING DETRENDED GEOMETRIC FEATURES FROM SUPERVOXEL-BASED LOCAL CONTEXTS

Author(s):  
Z. Sun ◽  
Y. Xu ◽  
L. Hoegner ◽  
U. Stilla

In this work, we propose a classification method for labeling MLS point clouds, using detrended geometric features extracted from the points of a supervoxel-based local context. To analyze complex 3D urban scenes, the acquired points must be tagged with labels of their respective classes; assigning a unique label to the points of an object belonging to the same category therefore plays an essential role in the entire 3D scene analysis workflow. Although plenty of studies in this field have been reported, the task remains challenging. Specifically, in this work: 1) A novel geometric feature extraction method is proposed that detrends the redundant and non-salient information in the local context, and it proves effective for extracting local geometric features from the 3D scene. 2) Instead of using individual points as basic elements, the supervoxel-based local context is designed to encapsulate the geometric characteristics of points, providing a flexible and robust solution for feature extraction. 3) Experiments on a complex urban scene with manually labeled ground truth are conducted, and the performance of the proposed method is compared with that of other methods. On the testing dataset, we obtained an overall accuracy of 0.92 for assigning eight semantic classes.
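Local geometric features of the kind this abstract describes are commonly derived from the eigenvalues of a neighborhood's covariance matrix. The sketch below is a minimal, generic illustration of that idea, with centroid removal standing in as a simple "detrending" step; it is not the authors' exact method.

```python
import numpy as np

def local_geometric_features(points):
    """Eigenvalue-based shape features of a local point neighborhood.

    A simple detrending step removes the centroid before the covariance
    analysis, so the features describe local shape rather than absolute
    position. `points` is an (N, 3) array of a supervoxel's members.
    """
    centered = points - points.mean(axis=0)          # detrend: remove centroid
    cov = centered.T @ centered / len(points)        # 3x3 covariance matrix
    evals = np.sort(np.linalg.eigvalsh(cov))[::-1]   # l1 >= l2 >= l3
    l1, l2, l3 = evals / evals.sum()                 # normalized eigenvalues
    return {
        "linearity":  (l1 - l2) / l1,                # high for pole-like points
        "planarity":  (l2 - l3) / l1,                # high for facade/ground
        "sphericity": l3 / l1,                       # high for scatter (vegetation)
    }
```

For a perfectly planar neighborhood the planarity approaches 1 while the sphericity vanishes, which is why such features help separate facades and ground from vegetation.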

2022 ◽  
Author(s):  
Yu Xiang ◽  
Liwei Hu ◽  
Jun Zhang ◽  
Wenyong Wang

Abstract The perception of the geometric features of airfoils is fundamental in aerodynamics for performance prediction, parameterization, aircraft inverse design, etc. There are three approaches to perceiving the geometric shape of airfoils: manual design of airfoil geometry parameters, polynomial definition, and deep learning. The first two methods directly define geometric features or polynomials of airfoil curves, but the number of extracted features is limited. Deep learning algorithms can extract a large number of potential features (called latent features); however, features extracted by deep learning lack explicit geometric meaning. Motivated by the advantages of polynomial definition and deep learning, we propose a geometric feature extraction method for airfoils (named Bézier-based feature extraction, BFE), which consists of two parts: manifold metric feature extraction and a geometric feature fusion encoder (GF encoder). Manifold metric feature extraction, with the help of the Bézier curve, captures manifold metrics (a sort of geometric feature) from the tangent space of airfoil curves, and the GF encoder combines airfoil coordinate data and manifold metrics to form novel fused geometric features. To validate the feasibility of the fused geometric features, two experiments based on the public UIUC airfoil dataset are conducted. Experiment I extracts the manifold metrics of airfoils and exports the fused geometric features. Experiment II, based on multi-task learning (MTL), fuses the discrepant data (i.e., the fused geometric features and the flight conditions) to predict the aerodynamic performance of airfoils. The results show that BFE can generate smoother and more realistic airfoils than an Auto-Encoder, and that the fused geometric features extracted by BFE reduce the prediction errors of C_L and C_D.
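The Bézier representation this abstract builds on reduces to a linear least-squares problem once the samples are parameterized. The sketch below fits a Bézier curve of a chosen degree to 2D airfoil coordinates via the Bernstein basis; it illustrates the curve-fitting ingredient only, not the manifold metrics or the GF encoder.

```python
import numpy as np
from math import comb

def fit_bezier(points, degree=5):
    """Least-squares fit of a Bezier curve to 2D airfoil coordinates.

    Samples are parameterized by normalized chord length (t in [0, 1]),
    and the control points are solved from the Bernstein basis matrix.
    `points` is an (N, 2) array ordered along the airfoil curve.
    """
    d = np.r_[0, np.cumsum(np.linalg.norm(np.diff(points, axis=0), axis=1))]
    t = d / d[-1]                                    # chord-length parameter
    n = degree
    # Bernstein basis: B[i, k] = C(n, k) * t_i^k * (1 - t_i)^(n - k)
    B = np.stack([comb(n, k) * t**k * (1 - t)**(n - k)
                  for k in range(n + 1)], axis=1)
    ctrl, *_ = np.linalg.lstsq(B, points, rcond=None)
    return ctrl, B @ ctrl                            # control points, fitted curve
```

Because the basis is polynomial, any curve the degree can represent (e.g. a straight segment) is recovered up to numerical precision, which is the smoothness property the abstract contrasts with Auto-Encoder reconstructions.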


Cobot ◽  
2022 ◽  
Vol 1 ◽  
pp. 2
Author(s):  
Hao Peng ◽  
Guofeng Tong ◽  
Zheng Li ◽  
Yaqi Wang ◽  
Yuyuan Shao

Background: 3D object detection based on point clouds in road scenes has attracted much attention recently. Voxel-based methods voxelize the scene into regular grids, which can be processed by advanced convolutional feature learning frameworks for semantic feature learning. Point-based methods can extract the geometric features of points because the coordinates are preserved. The combination of the two is effective for 3D object detection. However, current methods use a voxel-based detection head with anchors for classification and localization. Although the preset anchors cover the entire scene, they are not suitable for detection tasks with larger scenes and multiple categories of objects, due to the limitation of the voxel size. Additionally, the misalignment between the predicted confidence and the proposals during Region of Interest (RoI) selection hinders 3D object detection. Methods: We investigate the combination of voxel-based and point-based methods for 3D object detection, and propose a voxel-to-point module that captures semantic and geometric features. The voxel-to-point module is conducive to the detection of small objects and avoids preset anchors in the inference stage. Moreover, a confidence adjustment module with center-boundary-aware confidence attention is proposed to resolve the misalignment between the predicted confidence and the proposals during RoI selection. Results: The proposed method achieves state-of-the-art results for 3D object detection on the Karlsruhe Institute of Technology and Toyota Technological Institute (KITTI) object detection dataset. As of September 19, 2021, our method ranked 1st in the 3D and Bird's Eye View (BEV) detection of cyclists tagged with difficulty level 'easy', and 2nd in the 3D detection of cyclists tagged 'moderate'.
Conclusions: We propose an end-to-end two-stage 3D object detector with a voxel-to-point module and a confidence adjustment module.
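The bookkeeping behind any voxel-to-point scheme is the mapping between points and the voxels that contain them. The sketch below shows that mapping in its simplest form, with voxel-level features broadcast back to member points; it is a generic illustration, not the paper's module.

```python
import numpy as np

def voxelize(points, voxel_size):
    """Map each point to a voxel index.

    Returns the unique voxel coordinates and, for every point, the index
    of its voxel. The inverse mapping is what lets voxel-level semantic
    features be broadcast back to individual points.
    """
    grid = np.floor(points / voxel_size).astype(np.int64)
    voxels, point_to_voxel = np.unique(grid, axis=0, return_inverse=True)
    return voxels, point_to_voxel

def voxel_to_point(voxel_features, point_to_voxel):
    """Broadcast voxel-level features back to the points inside each voxel."""
    return voxel_features[point_to_voxel]
```

In a real detector the broadcast features would be concatenated with per-point geometric features before the detection head, so small objects that occupy only a few voxels still keep their point-level detail.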


2021 ◽  
Vol 13 (17) ◽  
pp. 3484
Author(s):  
Jie Wan ◽  
Zhong Xie ◽  
Yongyang Xu ◽  
Ziyin Zeng ◽  
Ding Yuan ◽  
...  

Feature extraction on point clouds is an essential task when analyzing and processing point clouds of 3D scenes. However, adequately exploiting local fine-grained features on point cloud data remains a challenge due to its irregular and unordered structure in 3D space. To alleviate this problem, a Dilated Graph Attention-based Network (DGANet) with strong local feature learning ability is proposed. Specifically, we first build a local dilated graph-like region for each input point to establish long-range spatial correlations with its neighbors, which allows the proposed network to access a wider range of geometric information about local points along with their long-range dependencies. Moreover, by integrating a dilated graph attention module (DGAM) implemented with a novel offset-attention mechanism, the network highlights the differing contribution of each edge of the constructed local graph, learning the discrepancy of geometric attributes between connected point pairs. Finally, all the learned edge attention features are aggregated by graph-attention pooling into the most significant geometric feature representation of each local region, fully extracting local detailed features for each point. Validation experiments on two challenging benchmark datasets demonstrate the effectiveness and strong generalization ability of the proposed DGANet in both 3D object classification and segmentation tasks.
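A dilated neighbor graph of the kind the abstract describes is usually built by taking the k·d nearest neighbors and keeping every d-th one, widening the receptive field without increasing k. The sketch below shows that construction in its simplest brute-force form; the attention mechanism itself is out of scope here.

```python
import numpy as np

def dilated_knn(points, k=4, dilation=2):
    """Dilated k-NN graph construction.

    Among the k * dilation nearest neighbors of each point, keep every
    `dilation`-th one, so each point connects to a wider spatial range
    with the same number of edges. Brute force; fine for small N.
    """
    diff = points[:, None, :] - points[None, :, :]   # (N, N, D) offsets
    dist2 = (diff ** 2).sum(-1)                      # pairwise squared distances
    np.fill_diagonal(dist2, np.inf)                  # exclude self-loops
    order = np.argsort(dist2, axis=1)                # neighbors sorted by distance
    return order[:, : k * dilation : dilation]       # every dilation-th neighbor
```

With dilation = 1 this reduces to ordinary k-NN; larger dilation factors trade neighborhood density for spatial reach, which is the long-range dependency the abstract refers to.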


Author(s):  
Y. Xu ◽  
Z. Sun ◽  
R. Boerner ◽  
T. Koch ◽  
L. Hoegner ◽  
...  

In this work, we report a novel way of generating ground truth datasets for analyzing point clouds from different sensors and for validating algorithms. Instead of directly labeling large amounts of 3D points, which requires time-consuming manual work, a multi-resolution 3D voxel grid of the testing site is generated. Then, with the help of a set of labeled points from the reference dataset, we can generate a 3D labeled space of the entire testing site at different resolutions. Specifically, an octree-based voxel structure is applied to voxelize the annotated reference point cloud, organizing all the points in 3D grids of multiple resolutions. When automatically annotating new testing point clouds, a voting-based approach is applied to the labeled points within voxels of multiple resolutions, in order to assign a semantic label to the 3D space each voxel represents. Lastly, robust line- and plane-based fast registration methods are developed for aligning point clouds obtained by various sensors. Benefiting from the labeled 3D spatial information, we can easily create new annotated 3D point clouds of the same scene from different sensors by looking up the labels of the 3D spaces in which the points are located, which is convenient for validating and evaluating algorithms for point cloud interpretation and semantic segmentation.
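The voting step can be sketched at a single resolution: each voxel takes the majority label of the reference points it contains, and new points are then annotated by voxel lookup. This is a minimal single-level illustration, not the paper's octree-based multi-resolution scheme.

```python
import numpy as np
from collections import Counter

def label_voxels(points, labels, voxel_size):
    """Assign each voxel the majority label of the reference points inside it.

    `points` is an (N, 3) array of labeled reference points; the returned
    dict maps integer voxel coordinates to the winning semantic label.
    A new, unlabeled point cloud of the same (registered) scene can then
    be annotated by looking up the voxel each point falls into.
    """
    grid = np.floor(points / voxel_size).astype(np.int64)
    votes = {}
    for cell, lab in zip(map(tuple, grid), labels):
        votes.setdefault(cell, Counter())[lab] += 1  # tally label votes per voxel
    return {cell: c.most_common(1)[0][0] for cell, c in votes.items()}
```

An octree generalizes this by repeating the vote at coarser levels, so a query can fall back to a parent voxel when a fine voxel holds too few reference points.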


2016 ◽  
Vol 9 (2) ◽  
pp. 140 ◽  
Author(s):  
Eman Fares Al Mashagba

Biometric technology has attracted much attention in biometric recognition. Significant online and offline applications rely on this technology for security and human identification. Biometric technology identifies a human based on unique features possessed by a person. Biometric features may be physiological or behavioral. A physiological feature is based on the direct measurement of a part of the human body, such as a fingerprint, face, iris, blood vessel pattern at the back of the eye, vascular pattern, DNA, or hand or palm scan. A behavioral feature is based on data derived from an action performed by the user, and thus measures characteristics such as signature/handwriting, gait, voice, gesture, and keystroke dynamics. A biometric system proceeds as follows: acquisition, feature extraction, comparison, and matching. The most important step is feature extraction, which determines the performance of human identification. Different methods are used for extraction, namely appearance-based and geometry-based methods. This paper reviews human identification based on geometric feature extraction across the several biometric systems available. We compare the different biometrics in biometric technology based on the geometric features extracted in different studies. Several biometric approaches offer more geometric features, such as hand, gait, face, fingerprint, and signature features, than other biometric technologies. Thus, geometry-based methods can be applied simply and efficiently with different biometrics. The eye region extracted from the face is mainly used in face recognition; in addition, the extracted eye region provides more details, such as the iris features.


Author(s):  
D. Frommholz

Abstract. This paper describes the construction and composition of a synthetic test world for the validation of photogrammetric algorithms. Since its 3D objects are entirely generated by software, the geometric accuracy of the scene does not suffer from the measurement errors with which existing real-world ground truth is inherently afflicted. The resulting data set covers an area of 13188 by 6144 length units and exposes positional residuals as small as the machine epsilon of the double-precision floating-point numbers used exclusively for the coordinates. It is colored with high-resolution textures to accommodate the simulation of virtual flight campaigns with large optical sensors and laser scanners in both aerial and close-range scenarios. To specifically support the derivation of image samples and point clouds, the synthetic scene is stored in the human-readable Alias/Wavefront OBJ and POV-Ray data formats. While conventional rasterization remains possible, using the open-source ray tracer as a render tool facilitates the creation of ideal pinhole bitmaps, consistent digital surface models (DSMs), true ortho-mosaics (TOMs), and orientation metadata without programming knowledge. To demonstrate the application of the constructed 3D scene, example validation recipes are discussed in detail for a state-of-the-art implementation of semi-global matching and a perspective-correct multi-source texture mapper. For the latter, beyond the visual assessment, a statistical evaluation of the achieved texture quality is given.
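Validating renders of such a synthetic world typically comes down to checking the ideal pinhole model: a world point projected through known intrinsics and orientation must land on the expected pixel. The sketch below shows that projection in its textbook form; the matrices are illustrative, not taken from the paper's data set.

```python
import numpy as np

def project_pinhole(X, K, R, t):
    """Project world points X (N, 3) through an ideal pinhole camera.

    K is the 3x3 intrinsic matrix, R and t the world-to-camera rotation
    and translation. With synthetic ground truth, the residual between
    these predictions and rendered image measurements isolates algorithm
    error from measurement error.
    """
    Xc = X @ R.T + t                  # world -> camera coordinates
    x = Xc @ K.T                      # apply intrinsics
    return x[:, :2] / x[:, 2:3]       # perspective divide -> pixel coordinates
```

Because the scene coordinates are exact to machine epsilon, any residual beyond floating-point noise in such a check can be attributed to the algorithm under test.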


2021 ◽  
Vol 10 (7) ◽  
pp. 482
Author(s):  
Zhizhong Xing ◽  
Shuanfeng Zhao ◽  
Wei Guo ◽  
Xiaojun Guo ◽  
Yuan Wang

Point cloud data can accurately and intuitively reflect the spatial relationship between the coal wall and underground fully mechanized mining equipment. However, indirect point cloud feature extraction based on deep neural networks loses some of the spatial information of the point cloud, while direct methods lose some of its local information. Therefore, we propose using a dynamic graph convolutional neural network (DGCNN) to extract the geometric features of the spheres in the point cloud of the fully mechanized mining face (FMMF), in order to obtain the positions of the spheres (markers) in the FMMF point cloud, thus providing a direct basis for the subsequent transformation of FMMF coordinates to national geodetic coordinates with the spheres as the intermediate medium. Firstly, we produced a diverse sphere point cloud (training set) and an FMMF point cloud (test set). Secondly, we further improved the DGCNN to enhance the extraction of the spheres' geometric features in the FMMF. Finally, we compared the improved DGCNN with PointNet and PointNet++. The results show the correctness and feasibility of using DGCNN to extract geometric features of point clouds in the FMMF and provide a new method for point cloud feature extraction in this setting. At the same time, the results provide an early guarantee for analyzing FMMF point cloud data under the national geodetic coordinate system in the future. This can provide an effective basis for the straightening and inclination adjustment of scraper conveyors, and it is of great significance for the transparent, unmanned, and intelligent mining of the FMMF.
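The core operation of DGCNN is the EdgeConv input: for each edge (i, j) of a k-NN graph, the center point x_i is concatenated with the offset x_j - x_i before a shared MLP. The sketch below shows only this feature construction, not the network or the paper's improvements to it.

```python
import numpy as np

def edge_features(points, neighbors):
    """EdgeConv-style input features as used in DGCNN.

    For each edge (i, j), concatenate the center point x_i with the
    offset x_j - x_i, capturing both global position and local shape.
    `points` is (N, D); `neighbors` is an (N, k) index array of each
    point's k nearest neighbors. Returns an (N, k, 2 * D) array that a
    shared MLP would consume.
    """
    k = neighbors.shape[1]
    x_i = points[:, None, :].repeat(k, axis=1)       # (N, k, D) center points
    x_j = points[neighbors]                          # (N, k, D) neighbor points
    return np.concatenate([x_i, x_j - x_i], axis=-1) # (N, k, 2D) edge features
```

Because the k-NN graph is recomputed in feature space at every layer, the "dynamic" graph lets points on the same sphere group together even when they are not spatially adjacent in the raw scan.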

