DETECTION OF GEOMETRIC KEYPOINTS AND ITS APPLICATION TO POINT CLOUD COARSE REGISTRATION

Author(s):  
M. Bueno ◽  
J. Martínez-Sánchez ◽  
H. González-Jorge ◽  
H. Lorenzo

Acquisition of large-scale scenes frequently involves storing large amounts of data and placing several scan positions to capture a complete object. This leaves each scan position with its own coordinate system, so the data usually need to be preprocessed into a common reference frame before analysis. Automatic point cloud registration without placing artificial markers is a challenging field of study, and registering millions or billions of points is a demanding task. Subsampling the original data usually solves the situation, at the cost of reducing the precision of the final registration. In this work, we study subsampling via keypoint detection and its applicability to coarse alignment. The keypoints are based on geometric features of each individual point and are extracted using the Difference of Gaussians approach over 3D data. The descriptors include features such as eigenentropy, change of curvature and planarity. Experiments demonstrate that the coarse alignment obtained from these keypoints improves on the root mean squared error of an operator's manual coarse registration by 3-5 cm. The applicability of these keypoints is tested and verified in five different case studies.
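As a minimal sketch (not the authors' implementation) of how per-point geometric features such as planarity, change of curvature and eigenentropy can be derived from the eigenvalues of a local neighbourhood's covariance matrix; the neighbourhood radius and helper name are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def geometric_features(points, radius=0.1):
    """Per-point planarity, change of curvature and eigenentropy from the
    eigenvalues of each local covariance matrix (illustrative sketch only)."""
    tree = cKDTree(points)
    feats = np.zeros((len(points), 3))
    for i, p in enumerate(points):
        idx = tree.query_ball_point(p, radius)
        if len(idx) < 3:
            continue
        cov = np.cov(points[idx].T)
        # Eigenvalues sorted in descending order, normalised to sum to 1
        lam = np.sort(np.linalg.eigvalsh(cov))[::-1]
        lam = np.clip(lam, 1e-12, None)
        lam = lam / lam.sum()
        planarity = (lam[1] - lam[2]) / lam[0]
        change_of_curvature = lam[2]              # surface variation
        eigenentropy = -np.sum(lam * np.log(lam))
        feats[i] = (planarity, change_of_curvature, eigenentropy)
    return feats
```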


2013 ◽  
Vol 791-793 ◽  
pp. 1941-1944
Author(s):  
Ya Dan Zheng ◽  
Ming Ke Dong ◽  
Jian Jun Wu

CQI (Channel Quality Indicator) is an essential input for the AMC (Adaptive Modulation and Coding) technique in LTE. Due to the long delay of the GEO satellite channel, CQI prediction is necessary to keep AMC effective. This paper proposes approximating real CQI data containing small-scale fading by data containing only large-scale fading for prediction. The relevant correlation features and the difference between the approximation and the original data are analysed, and simulations confirm the approach. The results show that the approximated large-scale CQI data are feasible and reasonable for prediction and for maintaining AMC efficiency.
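As a rough illustration of the idea (not the paper's procedure), the sketch below smooths a CQI trace with a moving average so that only the slowly varying, large-scale component remains for prediction; the window length and the synthetic trace are assumptions.

```python
import numpy as np

def large_scale_cqi(cqi, window=50):
    """Approximate a CQI trace containing small-scale fading by its
    large-scale component via a simple moving average (illustrative only)."""
    kernel = np.ones(window) / window
    return np.convolve(cqi, kernel, mode="same")

# Illustrative trace: slow shadowing trend plus fast fading noise
t = np.arange(2000)
cqi = 10 + 3 * np.sin(2 * np.pi * t / 500) + np.random.randn(2000)
smooth = large_scale_cqi(cqi)
print(np.abs(cqi - smooth).mean())   # average deviation of the approximation
```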


Author(s):  
Guowei Cao ◽  
Zhiping Chen ◽  
Wenjing Guo

Large-scale oil tanks have long been studied because they offer a series of advantages: they reduce the cost of manufacturing and managing the facilities and save land, so tank volumes have grown larger and larger. However, without on-site heat treatment, the allowable shell thickness restricts traditional oil tanks to about 200,000 m3. In this paper, a new structure named the Ultra-large Hydraulic-Balance double-shell oil tank is put forward. Using the hydraulic-balance method, tanks of this structure can exceed 200,000 m3. Besides expounding the working principle in detail, a 200,000 m3 double-shell oil tank is also designed according to API 650, and a finite element model is used to analyse the stress intensity and distribution in both shells in order to verify its safety. Furthermore, its economy is analysed by comparison with traditional oil tanks. Finally, the problem caused by the difference in liquid level is also discussed. Results show that the Ultra-large Hydraulic-Balance double-shell oil tank offers advantages including a rational structure, economy and ease of manufacturing.
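One plausible reading of the hydraulic-balance principle, as a hedged back-of-the-envelope sketch: a liquid held between the two shells exerts a counter-pressure on the inner shell, so the inner shell only resists the difference between the two hydrostatic columns. All densities, depths and the helper name below are assumptions, not design values from the paper.

```python
RHO_OIL = 850.0      # kg/m^3, assumed crude-oil density
RHO_BAL = 1000.0     # kg/m^3, assumed balancing liquid (e.g. water)
G = 9.81             # m/s^2

def net_pressure_on_inner_shell(depth_oil, depth_balance):
    """Net outward pressure on the inner shell, in Pa, at given depths below
    each liquid surface (illustrative hydrostatic balance only)."""
    p_inside = RHO_OIL * G * depth_oil          # oil pushing outward
    p_annulus = RHO_BAL * G * depth_balance     # balancing liquid pushing inward
    return p_inside - p_annulus

# Example: 20 m below the oil surface, 18 m below the annular liquid surface
print(net_pressure_on_inner_shell(20.0, 18.0))  # about -9.8 kPa (net inward)
```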


Sensors ◽  
2021 ◽  
Vol 21 (14) ◽  
pp. 4860
Author(s):  
Zichao Shu ◽  
Songxiao Cao ◽  
Qing Jiang ◽  
Zhipeng Xu ◽  
Jianbin Tang ◽  
...  

In this paper, an optimized three-dimensional (3D) pairwise point cloud registration algorithm is proposed, which is used for flatness measurement based on a laser profilometer. The objective is to achieve a fast and accurate six-degrees-of-freedom (6-DoF) pose estimation of a large-scale planar point cloud to ensure that the flatness measurement is precise. To that end, the proposed algorithm extracts the boundary of the point cloud to obtain more effective feature descriptors of the keypoints. Then, it eliminates the invalid keypoints by neighborhood evaluation to obtain the initial matching point pairs. Thereafter, clustering combined with the geometric consistency constraints of correspondences is conducted to realize coarse registration. Finally, the iterative closest point (ICP) algorithm is used to complete fine registration based on the boundary point cloud. The experimental results demonstrate that the proposed algorithm is superior to the current algorithms in terms of boundary extraction and registration performance.
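As a rough sketch of the fine-registration stage, the snippet below implements a bare-bones point-to-point ICP on boundary points with NumPy/SciPy; it is not the authors' optimized pipeline, and the iteration count and convergence threshold are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_point_to_point(src, dst, max_iter=50, tol=1e-6):
    """Minimal point-to-point ICP returning a 4x4 transform aligning src
    (N, 3) to dst (M, 3); illustrative sketch only."""
    T = np.eye(4)
    src_t = src.copy()
    tree = cKDTree(dst)
    prev_err = np.inf
    for _ in range(max_iter):
        dist, idx = tree.query(src_t)          # nearest-neighbour correspondences
        matched = dst[idx]
        # Closed-form rigid transform (Kabsch) between matched sets
        mu_s, mu_d = src_t.mean(0), matched.mean(0)
        H = (src_t - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:               # avoid reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_d - R @ mu_s
        src_t = src_t @ R.T + t
        step = np.eye(4)
        step[:3, :3], step[:3, 3] = R, t
        T = step @ T
        err = dist.mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return T
```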


Author(s):  
F. Matrone ◽  
A. Lingua ◽  
R. Pierdicca ◽  
E. S. Malinverni ◽  
M. Paolanti ◽  
...  

Abstract. The lack of benchmarking data for the semantic segmentation of digital heritage scenarios is hampering the development of automatic classification solutions in this field. Heritage 3D data feature complex structures and uncommon classes that prevent the simple deployment of available methods developed in other fields and for other types of data. The semantic classification of heritage 3D data would support the community in better understanding and analysing digital twins, facilitate restoration and conservation work, etc. In this paper, we present the first benchmark with millions of manually labelled 3D points belonging to heritage scenarios, realised to facilitate the development, training, testing and evaluation of machine and deep learning methods and algorithms in the heritage field. The proposed benchmark, available at http://archdataset.polito.it/, comprises datasets and classification results for better comparisons and insights into the strengths and weaknesses of different machine and deep learning approaches for heritage point cloud semantic segmentation, in addition to promoting a form of crowdsourcing to enrich the already annotated database.


Author(s):  
S. A. M. Ariff ◽  
S. Azri ◽  
U. Ujang ◽  
A. A. M. Nasir ◽  
N. Ahmad Fuad ◽  
...  

Abstract. Current trends in 3D scanning technologies allow us to acquire accurate 3D data of large-scale environments efficiently. Such data are essential when generating 3D models for the visualization of smart cities. Seamless visualization of the 3D model requires large data volumes during acquisition; however, processing such large datasets is time consuming and requires a suitable hardware specification. In this study, the capability of different hardware in processing large 3D point cloud data for mesh generation is investigated. Airborne Light Detection and Ranging (LiDAR) and Mobile Mapping System (MMS) data are used as input and processed using Bentley ContextCapture software. The study is conducted in Malaysia, specifically in Wilayah Persekutuan Kuala Lumpur and Selangor, covering an area of 49 km2. Several analyses are performed to assess the software and hardware specification based on the 3D mesh models generated. From the findings, we suggest the most suitable hardware specification for 3D mesh model generation.


Author(s):  
Y. Xu ◽  
R. Boerner ◽  
W. Yao ◽  
L. Hoegner ◽  
U. Stilla

For obtaining full coverage of 3D scans in a large-scale urban area, registration between point clouds acquired via terrestrial laser scanning (TLS) is normally mandatory. However, due to the complex urban environment, automatic registration of different scans is still a challenging problem. In this work, we propose an automatic, marker-free method for fast coarse registration between point clouds using the geometric constraints of planar patches under a voxel structure. Our proposed method consists of four major steps: voxelization of the point cloud, approximation of planar patches, matching of corresponding patches, and estimation of transformation parameters. In the voxelization step, the point cloud of each scan is organized in a 3D voxel structure, by which the entire point cloud is partitioned into small individual patches. In the following step, we represent the points of each voxel with an approximated plane function and select those patches resembling planar surfaces. Afterwards, a RANSAC-based strategy is applied to match the corresponding patches. Among all the planar patches of a scan, we randomly select a set of three planar surfaces and build a coordinate frame from their normal vectors and intersection points; the transformation parameters between scans are calculated from these two coordinate frames. The set whose transformation parameters yield the largest number of coplanar patches is identified as the optimal candidate for estimating the correct transformation. Experimental results on TLS datasets of different scenes reveal that our proposed method is both effective and efficient for the coarse registration task. In particular, for fast orientation between scans, the proposed method achieves a registration error of less than around 2 degrees on the test datasets and is much more efficient than the classical baseline methods.
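A hedged sketch of one step described above: given three matched, non-parallel planar patches in two scans, an orthonormal frame can be built from their normals and the rotation between the scans estimated from the two frames; function and variable names are illustrative, not from the paper.

```python
import numpy as np

def frame_from_normals(n1, n2, n3):
    """Build an orthonormal coordinate frame from three non-parallel plane
    normals via Gram-Schmidt (illustrative helper)."""
    e1 = n1 / np.linalg.norm(n1)
    e2 = n2 - (n2 @ e1) * e1
    e2 /= np.linalg.norm(e2)
    e3 = np.cross(e1, e2)
    return np.column_stack((e1, e2, e3))       # columns form the frame

def rotation_between_scans(normals_a, normals_b):
    """Rotation mapping scan A into scan B from three matched plane normals."""
    Fa = frame_from_normals(*normals_a)
    Fb = frame_from_normals(*normals_b)
    return Fb @ Fa.T

# The translation could then be estimated from the intersection points of the
# three planes in each scan, as described in the abstract.
```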


2019 ◽  
Vol 3 (1) ◽  
pp. 46-59
Author(s):  
Kevin James Garstki ◽  
Chris Larkee ◽  
John LaDisa

As archaeologists continue to utilize digital 3D visualization technologies, instruction can also benefit from purpose-driven uses of these data. This paper outlines a pilot project that used previously captured 3D data in a large-scale immersive environment to supplement the instruction of basic archaeological concepts in an undergraduate introductory anthropology class. The flexibility of the platform allowed excavation trenches to be investigated in three dimensions, enhancing the understanding of excavation methods and providing additional insight into the choices of the excavators. Additionally, virtual investigation of the artifacts provided a way for students to interact with objects on the other side of the world in a more complete way. Instructor-led immersive virtual experiences have significant potential to widen interest in archaeology and enhance the instruction of archaeological concepts. They allow students to interact with the content, guided by an expert, and in the presence of each other. While the facilities are not available at every university at the current time, the cost effectiveness and ability to deliver these experiences via head-mounted displays represent an exciting potential extension for complementary self-paced, yet guided, exploration.


2020 ◽  
Vol 12 (16) ◽  
pp. 2598
Author(s):  
Simone Teruggi ◽  
Eleonora Grilli ◽  
Michele Russo ◽  
Francesco Fassi ◽  
Fabio Remondino

Recent years have seen extensive use of 3D point cloud data for heritage documentation, valorisation and visualisation. Although rich in metric quality, these 3D data lack structured information such as semantics and hierarchy between parts. In this context, the introduction of point cloud classification methods can play an essential role in better data usage, model definition, analysis and conservation. The paper extends a machine learning (ML) classification method with a multi-level and multi-resolution (MLMR) approach. The proposed MLMR approach improves the learning process and optimises 3D classification results through a hierarchical concept. The MLMR procedure is tested and evaluated on two large-scale and complex datasets: the Pomposa Abbey (Italy) and the Milan Cathedral (Italy). Classification results show the reliability and replicability of the developed method, allowing the identification of the necessary architectural classes at each geometric resolution.
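To illustrate the hierarchical (multi-level) idea in code, the sketch below trains a coarse classifier first and feeds its prediction as an extra feature to a finer-level classifier; the RandomForest choice, integer-encoded label hierarchy and function names are assumptions, not the authors' exact setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_two_level(features, coarse_labels, fine_labels):
    """Two-level hierarchical classification: the coarse prediction becomes an
    additional feature for the fine-level model (labels assumed integer-encoded)."""
    coarse_clf = RandomForestClassifier(n_estimators=100).fit(features, coarse_labels)
    coarse_pred = coarse_clf.predict(features).reshape(-1, 1)
    fine_features = np.hstack([features, coarse_pred])
    fine_clf = RandomForestClassifier(n_estimators=100).fit(fine_features, fine_labels)
    return coarse_clf, fine_clf

def predict_two_level(coarse_clf, fine_clf, features):
    """Predict coarse classes, then refine them with the fine-level model."""
    coarse_pred = coarse_clf.predict(features).reshape(-1, 1)
    fine_pred = fine_clf.predict(np.hstack([features, coarse_pred]))
    return coarse_pred.ravel(), fine_pred
```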


Author(s):  
K. L. Navaneet ◽  
Priyanka Mandikal ◽  
Mayank Agarwal ◽  
R. Venkatesh Babu

Knowledge of the 3D properties of objects is a necessity for building effective computer vision systems. However, the lack of large-scale 3D datasets can be a major constraint for data-driven approaches to learning such properties. We consider the task of single-image 3D point cloud reconstruction and aim to utilize multiple foreground masks as our supervisory data to alleviate the need for large-scale 3D datasets. A novel differentiable projection module, called ‘CAPNet’, is introduced to obtain such 2D masks from a predicted 3D point cloud. The key idea is to model the projections as a continuous approximation of the points in the point cloud. To overcome the challenges of sparse projection maps, we propose a loss formulation termed ‘affinity loss’ to generate outlier-free reconstructions. We significantly outperform existing projection-based approaches on a large-scale synthetic dataset. We show the utility and generalizability of such a 2D-supervised approach through experiments on a real-world dataset, where lack of 3D data can be a serious concern. To further enhance the reconstructions, we also propose a test-stage optimization procedure to obtain reconstructions that display high correspondence with the observed input image.
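As a hedged illustration of the continuous-projection idea (not the CAPNet module itself), the sketch below splats projected point coordinates onto a 2D grid with a Gaussian kernel so the resulting mask is differentiable with respect to the point positions; the kernel width and grid size are assumptions.

```python
import torch

def soft_projection(points_2d, grid_size=64, sigma=1.0):
    """Differentiable 2D occupancy mask from projected point coordinates
    (N, 2) in [0, grid_size): each pixel accumulates Gaussian contributions
    from all points (illustrative continuous approximation only)."""
    ys = torch.arange(grid_size, dtype=points_2d.dtype)
    xs = torch.arange(grid_size, dtype=points_2d.dtype)
    gy, gx = torch.meshgrid(ys, xs, indexing="ij")
    grid = torch.stack([gx, gy], dim=-1)                      # (H, W, 2)
    diff = grid[None] - points_2d[:, None, None, :]           # (N, H, W, 2)
    weights = torch.exp(-(diff ** 2).sum(-1) / (2 * sigma ** 2))
    mask = weights.sum(0)                                     # accumulate over points
    return torch.clamp(mask, max=1.0)                         # soft mask in [0, 1]
```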

