A Low-Dimensional Feature Transform for Keypoint Matching and Classification of Point Clouds without Normal Computation

Author(s):  
Viktor Seib ◽  
Dietrich Paulus
2020 ◽  
Vol 1 (1) ◽  
pp. 21-27
Author(s):  
Daniel Dos Santos ◽  
Leonardo Filho ◽  
Paulo De Oliveira Jr ◽  
Henrique De Oliveira

In traditional attitude mounting misalignment estimation methods for the calibration of unmanned aerial vehicle (UAV) based light detection and ranging (LiDAR) systems, signalized targets and iterative correspondence models are required, which makes them costly and computationally time-consuming. This paper presents an attitude mounting misalignment estimation (AMME) method for the calibration of UAV LiDAR systems. The proposed method is divided into the coarse registration of LiDAR strips and the estimation of the attitude mounting misalignment. Firstly, 3D keypoints are extracted from the point clouds using the scale-invariant feature transform (SIFT) algorithm. Afterwards, the point feature histogram (PFH) descriptor is used for 3D keypoint matching. Then, the coarse registration is executed. In the second part of the contribution, the systematic errors in the attitude mounting misalignment are estimated by incorporating the proposed triangular irregular network (TIN) correspondence model into the calibration modelling. The TIN-based correspondence model saves both time and cost for the AMME method, and it offers practical as well as computational benefits, since no designed calibration boards, segmentation, or iterative matching are needed. The performance of the proposed method is demonstrated on UAV LiDAR data collected with lightweight onboard navigation sensors. The experimental results show the efficacy of the method in comparison with a state-of-the-art method.
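The abstract does not detail the estimator behind the misalignment step; as a minimal illustrative sketch (not the paper's method), the attitude (boresight) rotation can be recovered from matched keypoint pairs with the SVD-based Kabsch algorithm:

```python
import numpy as np

def estimate_attitude_misalignment(src, dst):
    """Estimate the rotation aligning matched keypoints src -> dst
    using the SVD-based Kabsch algorithm."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflection
    D = np.diag([1.0, 1.0, d])
    return Vt.T @ D @ U.T                    # rotation with R @ src_i ~ dst_i

# Synthetic check: recover a known small boresight rotation about z.
rng = np.random.default_rng(0)
pts = rng.normal(size=(50, 3))
angle = np.deg2rad(2.0)                      # illustrative misalignment angle
Rz = np.array([[np.cos(angle), -np.sin(angle), 0.0],
               [np.sin(angle),  np.cos(angle), 0.0],
               [0.0, 0.0, 1.0]])
R_est = estimate_attitude_misalignment(pts, pts @ Rz.T)
```

The SVD-based solution is closed-form, so no iterative matching model is needed once keypoint correspondences are established.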


Author(s):  
Benson Farb ◽  
Dan Margalit

The study of the mapping class group Mod(S) is a classical topic that is experiencing a renaissance. It lies at the juncture of geometry, topology, and group theory. This book explains as many important theorems, examples, and techniques as possible, quickly and directly, while at the same time giving full details and keeping the text nearly self-contained. The book is suitable for graduate students. It begins by explaining the main group-theoretical properties of Mod(S), from finite generation by Dehn twists and low-dimensional homology to the Dehn–Nielsen–Baer theorem. Along the way, central objects and tools are introduced, such as the Birman exact sequence, the complex of curves, the braid group, the symplectic representation, and the Torelli group. The book then introduces Teichmüller space and its geometry, and uses the action of Mod(S) on it to prove the Nielsen–Thurston classification of surface homeomorphisms. Topics include the topology of the moduli space of Riemann surfaces, the connection with surface bundles, pseudo-Anosov theory, and Thurston's approach to the classification.
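The symplectic representation mentioned above can be made concrete in the simplest case of the torus, where Mod(T²) is isomorphic to SL(2, Z) and the Dehn twists about the two core curves act on first homology by elementary transvections; a small NumPy check of two standard relations:

```python
import numpy as np

# Dehn twists about the (1,0) and (0,1) curves on the torus, acting on H_1.
Ta = np.array([[1, 1], [0, 1]])
Tb = np.array([[1, 0], [-1, 1]])

# The braid relation Ta Tb Ta = Tb Ta Tb holds.
lhs = Ta @ Tb @ Ta
rhs = Tb @ Ta @ Tb

# (Ta Tb Ta)^2 = -I, the hyperelliptic involution in SL(2, Z).
S = Ta @ Tb @ Ta
square = S @ S
```

These two matrices generate SL(2, Z), mirroring the fact that Mod(S) is generated by finitely many Dehn twists.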


2020 ◽  
Vol 10 (5) ◽  
pp. 1797 ◽  
Author(s):  
Mera Kartika Delimayanti ◽  
Bedy Purnama ◽  
Ngoc Giang Nguyen ◽  
Mohammad Reza Faisal ◽  
Kunti Robiatul Mahmudah ◽  
...  

Manual classification of sleep stages is a time-consuming but necessary step in the diagnosis and treatment of sleep disorders, and its automation has been an area of active study. Previous work has applied low-dimensional fast Fourier transform (FFT) features with a variety of machine learning algorithms. In this paper, we demonstrate the utilization of features extracted from EEG signals via FFT to improve the performance of automated sleep stage classification through machine learning methods. Unlike previous works using FFT, we incorporated thousands of FFT features in order to classify the sleep stages into 2–6 classes. Using the expanded version of the Sleep-EDF dataset with 61 recordings, our method outperformed other state-of-the-art methods. This result indicates that high-dimensional FFT features in combination with simple feature selection are effective for the improvement of automated sleep stage classification.
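The high-dimensional FFT feature extraction can be sketched as follows; the magnitude-spectrum features and the mean-separation filter are illustrative assumptions, since the abstract does not specify the exact selection method:

```python
import numpy as np

def fft_features(epochs):
    """One row of FFT magnitude features per EEG epoch: a 30-s epoch at
    100 Hz (3000 samples) yields 1501 spectral features."""
    return np.abs(np.fft.rfft(epochs, axis=1))

def select_top_k(features, labels, k):
    """Simple filter-style feature selection: keep the k features whose
    class-conditional means are most separated (a stand-in for the
    paper's unspecified selection step)."""
    classes = np.unique(labels)
    means = np.stack([features[labels == c].mean(axis=0) for c in classes])
    score = means.max(axis=0) - means.min(axis=0)
    idx = np.argsort(score)[-k:]
    return features[:, idx]

# Ten fake 30-s EEG epochs sampled at 100 Hz, with binary stage labels.
rng = np.random.default_rng(1)
epochs = rng.normal(size=(10, 3000))
labels = np.array([0, 1] * 5)
feats = fft_features(epochs)
reduced = select_top_k(feats, labels, k=256)
```

The reduced matrix would then feed any standard classifier; the point of the paper is that keeping thousands of spectral bins before selection outperforms hand-picked low-dimensional features.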


2021 ◽  
Vol 13 (11) ◽  
pp. 2135
Author(s):  
Jesús Balado ◽  
Pedro Arias ◽  
Henrique Lorenzo ◽  
Adrián Meijide-Rodríguez

Mobile Laser Scanning (MLS) systems have proven their usefulness in the rapid and accurate acquisition of the urban environment. From the generated point clouds, street furniture can be extracted and classified without manual intervention. However, this acquisition and classification process is not error-free, mainly because of disturbances. This paper analyses the effect of three disturbances (point density variation, ambient noise, and occlusions) on the classification of urban objects in point clouds. From point clouds acquired in real case studies, synthetic disturbances are generated and added. The point density reduction is generated by voxel-wise downsampling. The ambient noise is generated as random points within the bounding box of the object, and the occlusion is generated by eliminating points contained in a sphere. Samples with disturbances are classified by a pre-trained Convolutional Neural Network (CNN). The results showed different behaviours for each disturbance: the effect of density reduction depended on object shape and dimensions, that of ambient noise on object volume, and that of occlusions on their size and location. Finally, the CNN was re-trained with a percentage of synthetic samples with disturbances. An improvement in performance of 10–40% was reported, except for occlusions with a radius larger than 1 m.
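The three synthetic disturbances described above can be sketched directly in NumPy; the voxel size, noise count, and occlusion radius below are illustrative choices, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(42)
cloud = rng.uniform(0.0, 2.0, size=(5000, 3))   # stand-in object points

def voxel_downsample(pts, voxel=0.25):
    """Density reduction: keep one point per occupied voxel."""
    keys = np.floor(pts / voxel).astype(int)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return pts[np.sort(idx)]

def add_ambient_noise(pts, n_noise, rng):
    """Ambient noise: random points inside the object's bounding box."""
    lo, hi = pts.min(axis=0), pts.max(axis=0)
    noise = rng.uniform(lo, hi, size=(n_noise, 3))
    return np.vstack([pts, noise])

def occlude(pts, center, radius):
    """Occlusion: remove all points contained in a sphere."""
    keep = np.linalg.norm(pts - center, axis=1) > radius
    return pts[keep]

sparse = voxel_downsample(cloud)
noisy = add_ambient_noise(cloud, 500, rng)
occluded = occlude(cloud, center=cloud.mean(axis=0), radius=0.5)
```

Each disturbed copy of a sample can then be fed to the pre-trained CNN, or mixed into the training set for the re-training experiment.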


2021 ◽  
Vol 13 (15) ◽  
pp. 3021
Author(s):  
Bufan Zhao ◽  
Xianghong Hua ◽  
Kegen Yu ◽  
Xiaoxing He ◽  
Weixing Xue ◽  
...  

Urban object segmentation and classification tasks are critical data processing steps in scene understanding, intelligent vehicles, and 3D high-precision maps. Semantic segmentation of 3D point clouds is the foundational step in object recognition. To identify intersecting objects and improve the accuracy of classification, this paper proposes a segment-based classification method for 3D point clouds. This method first divides points into multi-scale supervoxels and groups them via the proposed inverse node graph (IN-Graph) construction, which requires no prior information about the nodes and instead divides supervoxels by judging the connection state of the edges between them. The method reaches the global energy minimum by graph cutting, obtaining structural segments as completely as possible while retaining boundaries. Then, the random forest classifier is utilized for supervised classification. To deal with the mislabeling of scattered fragments, a higher-order CRF with small-label-cluster optimization is proposed to refine the classification results. Experiments were carried out on a mobile laser scanning (MLS) point dataset and a terrestrial laser scanning (TLS) point dataset, and the results show that overall accuracies of 97.57% and 96.39% were obtained on the two datasets. The boundaries of objects were retained well, and the method achieved a good result in the classification of cars and motorcycles. Further experimental analyses verified the advantages of the proposed method and proved its practicability and versatility.
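Grouping supervoxels by the connection state of their edges can be illustrated with a simple union-find pass; this is a simplified stand-in for the paper's IN-Graph construction, whose actual segmentation uses graph-cut energy minimization rather than plain merging:

```python
import numpy as np

def group_supervoxels(n_nodes, edges, connected):
    """Merge supervoxels whose shared edge is judged 'connected'
    (union-find), yielding one segment label per supervoxel."""
    parent = list(range(n_nodes))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    for (a, b), ok in zip(edges, connected):
        if ok:                              # edge judged connected: merge
            parent[find(a)] = find(b)
    roots = np.array([find(i) for i in range(n_nodes)])
    _, labels = np.unique(roots, return_inverse=True)
    return labels

# Six supervoxels: 0-1-2 form one segment, 3-4 another, 5 is isolated.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]
connected = [True, True, False, True, False]
labels = group_supervoxels(6, edges, connected)
```

The edges judged disconnected become the retained object boundaries; in the paper this judgment falls out of the graph-cut energy rather than a hard boolean per edge.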


Sensors ◽  
2018 ◽  
Vol 18 (10) ◽  
pp. 3347 ◽  
Author(s):  
Zhishuang Yang ◽  
Bo Tan ◽  
Huikun Pei ◽  
Wanshou Jiang

The classification of point clouds is a basic task in airborne laser scanning (ALS) point cloud processing. It remains quite a challenge when facing complex observed scenes and irregular point distributions. In order to reduce the computational burden of the point-based classification method and improve the classification accuracy, we present a segmentation and multi-scale convolutional neural network-based classification method. Firstly, a three-step region-growing segmentation method was proposed to reduce both under-segmentation and over-segmentation. Then, a feature image generation method was used to transform the 3D neighborhood features of a point into a 2D image. Finally, feature images were treated as the input of a multi-scale convolutional neural network for training and testing tasks. In order to obtain performance comparisons with existing approaches, we evaluated our framework using the International Society for Photogrammetry and Remote Sensing Working Groups II/4 (ISPRS WG II/4) 3D labeling benchmark tests. The experiment achieved an overall accuracy of 84.9% and an average F1 score of 69.2%, a satisfactory performance compared with all participating approaches analyzed.
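The feature-image generation step can be sketched as follows; the 16×16 grid and the maximum-relative-height channel are assumptions for illustration, not the paper's exact design:

```python
import numpy as np

def feature_image(point, neighbors, size=16, radius=1.0):
    """Project a point's 3D neighborhood onto a 2D grid: each cell keeps
    the maximum relative height of the neighbors that fall into it,
    producing a small image a CNN can consume."""
    rel = neighbors - point
    img = np.zeros((size, size))
    # Map x, y in [-radius, radius] to pixel indices.
    ij = np.floor((rel[:, :2] + radius) / (2 * radius) * size).astype(int)
    inside = (ij >= 0).all(axis=1) & (ij < size).all(axis=1)
    for (i, j), z in zip(ij[inside], rel[inside, 2]):
        img[i, j] = max(img[i, j], z)
    return img

rng = np.random.default_rng(7)
p = np.zeros(3)
nbrs = rng.uniform(-1.0, 1.0, size=(200, 3))
img = feature_image(p, nbrs)
```

Rendering the same neighborhood at several radii would give the multi-scale inputs the network consumes.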


Author(s):  
Y. Xu ◽  
S. Tuttas ◽  
L. Heogner ◽  
U. Stilla

This paper presents an approach for the classification of photogrammetric point clouds of scaffolding components on a construction site, as a preparation for the automatic monitoring of construction sites through the reconstruction of an as-built Building Information Model (as-built BIM). Points belonging to the tubes and toeboards of scaffolds are distinguished via a subspace clustering process and the principal component analysis (PCA) algorithm. The overall workflow includes four essential processing steps. Initially, the spherical support region of each point is selected. In the second step, the normalized cut algorithm based on spectral clustering theory is introduced for the subspace clustering, so as to select suitable subspace clusters of points and avoid outliers. Then, in the third step, the feature of each point is calculated by measuring the distance between the point and the plane of the local reference frame defined by PCA within its cluster. Finally, the types of points are distinguished and labelled through a supervised classification method using the random forest algorithm. The effectiveness and applicability of the proposed steps are investigated on both simulated test data and a real scenario. The results obtained in the two experiments reveal that the proposed approaches are suitable for classifying points belonging to linear-shaped objects with differently shaped cross sections. For the tests using a synthetic point cloud, the classification accuracy reaches 80%, even when the data are contaminated by noise and outliers. For the application in the real scenario, our method achieves a classification accuracy of better than 63%, without using any information about the normal vector of the local surface.
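The third step, computing a point feature as the distance to the PCA-defined local plane, can be sketched in a few lines; the cluster below is a toy example, not the paper's data:

```python
import numpy as np

def plane_distance_feature(points):
    """Fit a local plane to a cluster with PCA and return each point's
    distance to it. The direction of least variance serves as the plane
    normal, so no separate normal estimation is required."""
    centered = points - points.mean(axis=0)
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    normal = Vt[-1]                     # principal axis of least variance
    return np.abs(centered @ normal)

# Four points near the z = 0 plane plus one point 0.5 above it.
pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0],
                [1.0, 1.0, 0.0], [0.5, 0.5, 0.5]])
d = plane_distance_feature(pts)
```

Points on a tube wall or a toeboard face produce distinctly different distance distributions, which is what the random forest later exploits.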


Author(s):  
M. Lemmens

Abstract. A knowledge-based system exploits the knowledge that a human expert uses for completing a complex task through a database containing decision rules and an inference engine. Knowledge-based systems were already proposed for automated image classification in the early nineties. Lack of success caused the initial interest and enthusiasm to fade, the same fate that struck neural networks at that time; today the latter enjoy a steady revival. This paper aims at demonstrating that a knowledge-based approach to the automated classification of mobile laser scanning point clouds has promising prospects. An initial experiment exploiting only two features, height and reflectance value, resulted in an overall accuracy of 79% for the Paris-rue-Madame point cloud benchmark dataset.
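A knowledge-based classifier over just height and reflectance amounts to a small rule base; the thresholds and class names below are illustrative, not the paper's actual rules:

```python
def classify_point(height, reflectance):
    """Toy decision rules over the two features used in the experiment
    (height above ground and reflectance); an inference engine would
    evaluate such rules from a rule database rather than hard-code them."""
    if height < 0.2:
        return "road" if reflectance > 0.5 else "ground"
    if height < 3.0:
        return "facade" if reflectance > 0.4 else "street furniture"
    return "building"

labels = [classify_point(h, r) for h, r in
          [(0.05, 0.7), (0.05, 0.2), (1.5, 0.1), (10.0, 0.3)]]
```

The appeal of the approach is that each rule is inspectable and editable by a domain expert, in contrast to the opaque weights of a neural network.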

