SPARSE POINT CLOUD FILTERING BASED ON COVARIANCE FEATURES

Author(s):  
E. M. Farella ◽  
A. Torresani ◽  
F. Remondino

Abstract. This work presents an extended photogrammetric pipeline aimed at improving 3D reconstruction results. Standard photogrammetric pipelines can produce noisy 3D data, especially when images are acquired with various sensors featuring different properties. In this paper, we propose an automatic filtering procedure based on geometric features computed on the sparse point cloud created within the bundle adjustment phase. Bad 3D tie points and outliers are detected and removed through micro- and macro-cluster analyses. Clusters are built according to the prevalent dimensionality class (1D, 2D, 3D) assigned to low-entropy points, corresponding to the predominantly linear, planar or scattered local behaviour of the point cloud. While the macro-cluster analysis removes small-sized clusters and high-entropy points, the micro-cluster investigation uses covariance features to verify the inner coherence of each point with its assigned class. Results on heritage scenarios are presented and discussed.
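The dimensionality classes and the entropy criterion mentioned above can be derived from the eigenvalues of the local covariance (structure) tensor. A minimal sketch of that computation is given below; the neighbourhood radius, the library calls and the helper names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def dimensionality_and_entropy(points, radius=0.5):
    """Per-point dimensionality class (0=linear, 1=planar, 2=scattered) and
    Shannon entropy of the dimensionality probabilities.
    Radius and class encoding are illustrative assumptions."""
    tree = cKDTree(points)
    labels = np.zeros(len(points), dtype=int)
    entropy = np.zeros(len(points))
    for i, p in enumerate(points):
        idx = tree.query_ball_point(p, r=radius)
        if len(idx) < 4:                      # too few neighbours for a 3D covariance
            labels[i], entropy[i] = 2, np.log(3)
            continue
        nbrs = points[idx] - points[idx].mean(axis=0)
        ev = np.linalg.eigvalsh(nbrs.T @ nbrs / len(idx))[::-1]   # l1 >= l2 >= l3
        s = np.sqrt(np.clip(ev, 0.0, None))
        denom = s[0] + 1e-12
        probs = np.array([(s[0] - s[1]) / denom,   # linearity  (1D)
                          (s[1] - s[2]) / denom,   # planarity  (2D)
                          s[2] / denom])           # scattering (3D)
        probs = probs / (probs.sum() + 1e-12)
        labels[i] = int(np.argmax(probs))
        entropy[i] = -np.sum(probs * np.log(probs + 1e-12))
    return labels, entropy
```

Points whose entropy stays above a chosen threshold, i.e. with no dominant dimensionality, would be the natural candidates for removal in the cluster-analysis steps.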

2018 ◽  
Vol 12 (2) ◽  
pp. 169-185 ◽  
Author(s):  
Christoph Holst ◽  
Tomislav Medić ◽  
Heiner Kuhlmann

Abstract The ability to acquire rapid, dense and high-quality 3D data has made terrestrial laser scanners (TLS) a desirable instrument for tasks demanding high geometrical accuracy, such as geodetic deformation analyses. However, TLS measurements are influenced by systematic errors due to internal misalignments of the instrument. The resulting errors in the point cloud might exceed the magnitude of random errors. Hence, it is important to ensure that the deformation analysis is not biased by these influences. In this study, we propose and evaluate several strategies for reducing the effect of TLS misalignments on deformation analyses. The strategies are based on the bundled in-situ self-calibration and on the exploitation of two-face measurements. The strategies are verified by analyzing the deformation of the main reflector of the Onsala Space Observatory's radio telescope. It is demonstrated that both two-face measurements and the in-situ calibration of the laser scanner in a bundle adjustment improve the results of the deformation analysis. The best solution is gained by a combination of both strategies.
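The benefit of two-face measurements comes from the fact that several axis-misalignment errors reverse sign between the two faces, so averaging the reduced readings cancels them. A minimal textbook-style sketch is shown below; it is a simplified reduction, not the calibration model evaluated in the paper.

```python
import numpy as np

def circular_mean(a, b):
    """Mean of two angles in radians, robust to wrap-around at 0 / 2*pi."""
    return np.angle(np.exp(1j * a) + np.exp(1j * b)) % (2 * np.pi)

def two_face_mean(hz1, v1, hz2, v2):
    """Combine face-I and face-II readings (radians) of the same target.
    Averaging the two faces cancels sign-reversing errors such as collimation
    and trunnion-axis errors; random noise is also reduced by sqrt(2)."""
    hz2_reduced = (hz2 - np.pi) % (2 * np.pi)   # back face is offset by pi (200 gon)
    v2_reduced = 2 * np.pi - v2                 # zenith angle is mirrored in face II
    return circular_mean(hz1, hz2_reduced), 0.5 * (v1 + v2_reduced)
```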


2014 ◽  
Vol 2014 ◽  
pp. 1-9 ◽  
Author(s):  
Jianying Yuan ◽  
Qiong Wang ◽  
Xiaoliang Jiang ◽  
Bailin Li

When measuring a large-scale object with structured-light scanning, the precision of multiview 3D data registration decreases as the number of registrations increases. In this paper, we propose a high-precision registration method based on multiple view geometry theory in order to solve this problem. First, a multiview network is constructed during the scanning process. The bundle adjustment method from digital close-range photogrammetry is used to optimize the multiview network and obtain high-precision global control points. After that, the 3D data in the local coordinate system of each scan are registered with the global control points. The method avoids the error accumulation of the traditional sequential registration process and reduces the time needed for the subsequent global optimization of the 3D data, increasing both the precision and the efficiency of multiview 3D scan registration. Experiments verify the effectiveness of the proposed algorithm.
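Once bundle adjustment has produced global control points, aligning each scan to them reduces to estimating a rigid transform between corresponding points. A minimal sketch of the standard SVD-based (Kabsch) solution follows; it is a generic illustration that assumes known point correspondences, not the authors' exact procedure.

```python
import numpy as np

def rigid_transform(local_pts, global_pts):
    """Least-squares rigid transform (R, t) mapping local_pts onto global_pts,
    both (N, 3) arrays of corresponding control points (Kabsch/SVD solution)."""
    c_l = local_pts.mean(axis=0)
    c_g = global_pts.mean(axis=0)
    H = (local_pts - c_l).T @ (global_pts - c_g)      # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                          # avoid a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = c_g - R @ c_l
    return R, t

# Usage: apply the estimated transform to the whole scan
# aligned = (R @ scan_points.T).T + t
```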


2021 ◽  
Author(s):  
David Dellong ◽  
Florent Le Courtois ◽  
Jean-Michel Boutonnier ◽  
Bazile G. Kinda

Maps of underwater noise generated by shipping activity have become a useful tool to support international regulations on marine environments. They are used to infer the risk of impact on biodiversity. Maps are produced by 1) computing the noise levels emitted by ships, 2) propagating the acoustic signal in the environment and 3) using localized measurements to validate the results. Because of mismatches in environmental data and a limited number of measurements, noise maps remain highly uncertain.

In this work, the uncertainty of the noise maps is investigated through the potential complexity of the soundscape. The acoustic signal at each receiving cell is computed from the convolution of the ship sources with the transmission losses of the environment. Complexity is mapped by computing Shannon's entropy of the transmission losses for each receiver. High-entropy areas reflect both high shipping densities and favorable acoustic propagation properties of the local environment. Low-entropy areas reflect low shipping density and/or poor acoustic propagation properties: an area with high shipping densities but poor acoustic propagation properties will still have low entropy values.

Entropy maps allow classifying areas depending on their environmental features. Thus, scenarios of uncertainty are defined. The results highlight the necessity of considering the diversity of the environmental properties when producing noise maps. The methodology could help in optimizing the spatial and temporal resolution of map computations, as well as acoustic monitoring strategies.
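The per-receiver complexity metric described above is a Shannon entropy over the distribution of transmission losses reaching that cell. A minimal sketch of such a computation is given below; the histogram binning and array shapes are illustrative assumptions, not the authors' definition.

```python
import numpy as np

def transmission_loss_entropy(tl_db, n_bins=32):
    """Shannon entropy (nats) of the transmission-loss values (dB) seen by one
    receiving cell, e.g. the losses from every contributing ship or source.
    Histogram-based estimate; the bin count is an arbitrary illustrative choice."""
    hist, _ = np.histogram(tl_db, bins=n_bins)
    p = hist / hist.sum()
    p = p[p > 0]                      # ignore empty bins
    return float(-np.sum(p * np.log(p)))
```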


Author(s):  
E. Grilli ◽  
E. M. Farella ◽  
A. Torresani ◽  
F. Remondino

Abstract. In recent years, the application of artificial intelligence (Machine Learning and Deep Learning methods) to the classification of 3D point clouds has become an important task in modern 3D documentation and modelling applications. The identification of proper geometric and radiometric features is fundamental to classify 2D/3D data correctly. While many studies have been conducted in the geospatial field, the cultural heritage sector is still partly unexplored. In this paper we analyse the efficacy of geometric covariance features as a support for the classification of Cultural Heritage point clouds. To analyse the impact of the different features calculated on spherical neighbourhoods at various radius sizes, we present results obtained on four different heritage case studies using different feature configurations.
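In practice, covariance features computed at several neighbourhood radii are stacked into one feature vector per point and fed to a supervised classifier. A minimal sketch follows, assuming a Random Forest classifier and eigenvalue-based features; the radii, feature choice and classifier settings are illustrative, not the configurations tested in the paper.

```python
import numpy as np
from scipy.spatial import cKDTree
from sklearn.ensemble import RandomForestClassifier

def covariance_features(points, radii=(0.1, 0.5, 1.0)):
    """Stack linearity/planarity/sphericity computed on spherical
    neighbourhoods of several radii into one feature vector per point."""
    tree = cKDTree(points)
    feats = []
    for r in radii:
        f_r = np.zeros((len(points), 3))
        for i, p in enumerate(points):
            idx = tree.query_ball_point(p, r=r)
            if len(idx) < 4:
                continue
            nbrs = points[idx] - points[idx].mean(axis=0)
            l = np.linalg.eigvalsh(nbrs.T @ nbrs / len(idx))[::-1]  # l1 >= l2 >= l3
            l = np.clip(l, 1e-12, None)
            f_r[i] = [(l[0] - l[1]) / l[0],   # linearity
                      (l[1] - l[2]) / l[0],   # planarity
                      l[2] / l[0]]            # sphericity
        feats.append(f_r)
    return np.hstack(feats)

# Supervised classification on a manually labelled subset:
# clf = RandomForestClassifier(n_estimators=100)
# clf.fit(covariance_features(train_pts), train_labels)
# predictions = clf.predict(covariance_features(test_pts))
```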


2021 ◽  
Author(s):  
Massoud Kaviany

Abstract Heat is stored in quanta of kinetic and potential energy in matter. The temperature represents the equilibrium occupation (boson statistics) of these energy states. Temporal and spatial temperature variations and heat transfer are associated with the kinetics of these equilibrium excitations. During energy conversion (between the electron and phonon systems), the occupancies deviate from equilibrium while the atomic-scale, inelastic spectral energy-transfer kinetics still hold. Heat transfer physics thus addresses nonequilibrium energy excitations and kinetics among the principal carriers: phonons, electrons (and holes and ions), fluid particles, and photons. This allows atomic-level tailoring of energetic materials and of energy-conversion processes and their efficiencies. For example, modern thermoelectric harvesters transform broad-spectrum, high-entropy heat into a narrow spectrum of low-entropy emissions to generate thermal electricity efficiently. Phonoelectricity, in contrast, intervenes before a low-entropy population of nonequilibrium optical phonons becomes high-entropy heat. In particular, the suggested phonovoltaic cell generates phonoelectricity by employing the nonequilibrium, low-entropy, elevated-temperature optical-phonon population produced, for example, by relaxing electrons excited by an electric field. A phonovoltaic material has an ultra-narrow electronic bandgap, such that the hot optical-phonon population can relax by producing electron-hole pairs (and power) instead of multiple acoustic phonons (and entropy). Examples of these quanta and of spectral heat transfer are reviewed, contemplating a prospect for education and research in this field.
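The "ultra-narrow bandgap" requirement can be read as an energy-conservation condition on the single-phonon pair-generation process. A hedged sketch in standard notation (not the article's own equations) is:

```latex
% A hot optical phonon of energy \hbar\omega_{\mathrm{O}} can generate an
% electron-hole pair only if it supplies at least the electronic bandgap E_g,
% and each generated pair carries at most E_g of that phonon energy as
% extractable electrical work, so the harvested fraction is bounded by
% E_g / (\hbar\omega_{\mathrm{O}}):
\[
  \hbar\omega_{\mathrm{O}} \;\ge\; E_g ,
  \qquad
  \eta_{\mathrm{pV}} \;\lesssim\; \frac{E_g}{\hbar\omega_{\mathrm{O}}} .
\]
```

This is why a bandgap close to, but below, the optical-phonon energy favors power generation over entropy generation.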


2019 ◽  
Vol 11 (16) ◽  
pp. 1955 ◽  
Author(s):  
Markus Hillemann ◽  
Martin Weinmann ◽  
Markus S. Mueller ◽  
Boris Jutzi

Mobile Mapping is an efficient technology for acquiring spatial data of the environment. Such spatial data is fundamental for applications in crisis management, civil engineering or autonomous driving. The extrinsic calibration of the Mobile Mapping System is a decisive factor that affects the quality of the spatial data. Many existing extrinsic calibration approaches require the use of artificial targets in a time-consuming calibration procedure. Moreover, they are usually designed for a specific combination of sensors and are thus not universally applicable. We introduce a novel extrinsic self-calibration algorithm which is fully automatic and completely data-driven. The fundamental assumption of the self-calibration is that the calibration parameters are best estimated when the derived point cloud best represents the real physical circumstances. The cost function we use to evaluate this is based on geometric features which rely on the 3D structure tensor derived from the local neighborhood of each point. We compare different cost functions based on geometric features and a cost function based on the Rényi quadratic entropy to evaluate their suitability for the self-calibration. Furthermore, we test the self-calibration on a synthetic dataset and two different real datasets. The real datasets differ in terms of the environment, the scale and the utilized sensors. We show that the self-calibration is able to extrinsically calibrate Mobile Mapping Systems with different combinations of mapping and pose estimation sensors, such as a 2D laser scanner with a Motion Capture System and a 3D laser scanner with a stereo camera and ORB-SLAM2. For the first dataset, the parameters estimated by our self-calibration lead to a more accurate point cloud than two comparative approaches. For the second dataset, which was acquired via vehicle-based mobile mapping, our self-calibration achieves results comparable to a manually refined reference calibration, while being universally applicable and fully automated.
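Cost functions of the kind described above score a candidate extrinsic calibration by how "crisp" the resulting point cloud is, using eigenvalue features of the local 3D structure tensor. A minimal, generic sketch follows; the eigenentropy criterion, the neighbourhood size and the brute-force search are illustrative assumptions, not the authors' cost functions or optimizer.

```python
import numpy as np
from scipy.spatial import cKDTree

def eigenentropy_cost(points, k=20):
    """Mean eigenentropy of the local 3D structure tensor: lower values mean
    locally more structured (crisper) geometry, i.e. a better calibration."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    cost = 0.0
    for nb in idx:
        nbrs = points[nb] - points[nb].mean(axis=0)
        ev = np.linalg.eigvalsh(nbrs.T @ nbrs / k)
        p = np.clip(ev / (ev.sum() + 1e-12), 1e-12, None)   # normalized eigenvalues
        cost += -np.sum(p * np.log(p))                       # eigenentropy
    return cost / len(points)

# Self-calibration idea: evaluate the cost for candidate extrinsic parameters
# and keep the set that minimizes it, e.g.
# best = min(candidates, key=lambda T: eigenentropy_cost(build_cloud(scans, poses, T)))
```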


Author(s):  
D. Graziosi ◽  
O. Nakagami ◽  
S. Kuma ◽  
A. Zaghetto ◽  
T. Suzuki ◽  
...  

Abstract This article presents an overview of the recent standardization activities for point cloud compression (PCC). A point cloud is a 3D data representation used in diverse applications associated with immersive media, including virtual/augmented reality, immersive telepresence, autonomous driving and cultural heritage archival. The international standard body for media compression, also known as the Moving Picture Experts Group (MPEG), is planning to release two PCC standard specifications in 2020: video-based PCC (V-PCC) and geometry-based PCC (G-PCC). V-PCC and G-PCC will be part of the ISO/IEC 23090 series on the coded representation of immersive media content. In this paper, we provide a detailed description of both codec algorithms and their coding performance. Moreover, we also discuss certain unique aspects of point cloud compression.
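The core idea behind video-based coding of point clouds is to turn the 3D geometry into 2D maps that an ordinary video codec can compress: points are grouped into patches, orthogonally projected onto planes, and packed into depth (geometry) and attribute images. A heavily simplified sketch of that projection idea is below; the single fixed projection plane and array layout are illustrative assumptions and nothing like the full patch-generation algorithm in the standard.

```python
import numpy as np

def project_to_depth_map(points, resolution=1.0, size=256):
    """Toy orthographic projection of a point cloud onto the XY plane,
    keeping the nearest depth per pixel, to illustrate how video-based
    coding converts 3D geometry into an image a video codec can handle."""
    depth = np.full((size, size), np.inf)
    mins = points.min(axis=0)
    for x, y, z in points:
        u = int((x - mins[0]) / resolution)
        v = int((y - mins[1]) / resolution)
        if 0 <= u < size and 0 <= v < size:
            depth[v, u] = min(depth[v, u], z - mins[2])   # nearest surface wins
    depth[np.isinf(depth)] = 0.0                          # empty pixels
    return depth
```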

