The GeoHashTree: a multi-resolution data structure for the management of point clouds

2014 ◽  
Author(s):  
N Sabo ◽  
A Beaulieu ◽  
D Bélanger ◽  
Y Belzile ◽  
B Piché


2011 ◽  
Vol 48-49 ◽  
pp. 21-24 ◽  
Author(s):  
Xian Ping Fu ◽  
Sheng Long Liao

As the electronics industry advances rapidly toward automatically manufacturing smaller, faster, and cheaper products, computer vision plays a more important role in IC packaging technology than before. One of the important tasks of computer vision is finding a target position through similarity matching. Similarity matching requires computing the distance between feature vectors for each target image. In this paper we propose a projection transform of wavelet coefficients based on a multi-resolution data-structure algorithm for faster template matching; a position sequence of local sharp variation points in such signals is recorded as features. The proposed approach reduces the number of computations by around 70% compared with the multi-resolution data structure algorithm. We use the proposed approach to match similarity between wavelet-parameter histograms for image matching. Notably, the proposed fast algorithm provides not only the same retrieval results as an exhaustive search but also faster searching than existing fast algorithms. The proposed approach can easily be combined with existing algorithms for further performance enhancement.
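The general coarse-to-fine idea behind such multi-resolution template matching can be sketched in Python. The sketch below builds a Haar-approximation pyramid, does a full sum-of-squared-differences search only at the coarsest level, and refines the position in a small window at each finer level. All function names are our own, and this illustrates the generic technique rather than the authors' specific projection-transform algorithm.

```python
import numpy as np

def haar_approx(img):
    """One level of Haar wavelet approximation (2x2 block averages)."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    a = img[:h, :w]
    return (a[0::2, 0::2] + a[1::2, 0::2] + a[0::2, 1::2] + a[1::2, 1::2]) / 4.0

def ssd_map(image, tmpl):
    """Sum-of-squared-differences cost for every template placement."""
    H, W = image.shape
    h, w = tmpl.shape
    out = np.empty((H - h + 1, W - w + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum((image[y:y + h, x:x + w] - tmpl) ** 2)
    return out

def coarse_to_fine_match(image, tmpl, levels=2, radius=2):
    """Full search at the coarsest wavelet level, then local refinement
    of the best position at each finer level."""
    imgs, tmps = [image.astype(float)], [tmpl.astype(float)]
    for _ in range(levels):
        imgs.append(haar_approx(imgs[-1]))
        tmps.append(haar_approx(tmps[-1]))
    cost = ssd_map(imgs[-1], tmps[-1])          # exhaustive only when tiny
    y, x = np.unravel_index(np.argmin(cost), cost.shape)
    for lvl in range(levels - 1, -1, -1):
        y, x = 2 * y, 2 * x                     # scale position up one level
        im, tm = imgs[lvl], tmps[lvl]
        best, by, bx = np.inf, y, x
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                yy, xx = y + dy, x + dx
                if 0 <= yy <= im.shape[0] - tm.shape[0] and \
                   0 <= xx <= im.shape[1] - tm.shape[1]:
                    c = np.sum((im[yy:yy + tm.shape[0],
                                   xx:xx + tm.shape[1]] - tm) ** 2)
                    if c < best:
                        best, by, bx = c, yy, xx
        y, x = by, bx
    return y, x
```

Because the exhaustive search runs only on the small coarse images, the distance computations at full resolution are confined to a few candidate positions, which is where the bulk of the saving comes from.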


2020 ◽  
Author(s):  
Johannes Leinauer ◽  
Benjamin Jacobs ◽  
Michael Krautblatter

Costs for the (re)installation and maintenance of protective structures are increasing while alpine hazards progressively threaten alpine communities, infrastructure and economies. With climatic changes, the anticipation and clever early warning of rock slope failures based on process dynamics become more and more important. The imminent rock slope failure at the Hochvogel summit (2592 m a.s.l., Allgäu Alps) offers a rare possibility to study a cliff fall at a high alpine carbonate peak during its preparation and until failure. In this real-case scenario, we can develop and test an operative and effective early warning system.

The main cleft is two to six metres wide at the summit and at least 60 metres deep at the sides. Several lateral cracks are opening at a faster pace and separate different unstable blocks. 3D UAV point clouds reveal a potentially failing mass of 260,000 m³ in six subunits. However, the pre-deformation is not yet pronounced enough to decide on the expected volume. Analysis of historical ortho- and aerial images yields an elongation of the main crack from 10 to 35 m between 1960 and today. Discontinuous tape extensometer measurements show 35 cm of opening of the main cleft between 2014 and 2020, with movement rates of up to 1 cm/month. Since July 2018, automatic vibrating wire gauges have delivered high-resolution data to an online server. In October 2019, we transferred the system to LoRa, with data transmission every 10 min. Automatic warnings via SMS and email are triggered when specific thresholds are crossed.

Here we demonstrate the long-term process dynamics and two years of high-resolution data of a preparing alpine rock slope failure. Corresponding geodetic, photogrammetric, seismic and gravimetric measurements complete the comprehensive measurement design at the Hochvogel. This will help to decipher anticipative signals of initiating alpine rock slope failures and improve future event predictions.
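The kind of threshold-based alerting described (warnings triggered when crack-movement rates cross set values) can be illustrated with a minimal sketch. The 24 h window and the 0.5 mm/day threshold below are invented for illustration only, not the values used at the Hochvogel.

```python
import numpy as np

def crack_warning(times_h, widths_mm, rate_threshold_mm_per_day=0.5):
    """Return (warn, rate): warn is True when the crack-opening rate,
    estimated by a least-squares fit over the most recent 24 h of
    gauge readings, exceeds an illustrative threshold."""
    t = np.asarray(times_h, dtype=float)
    w = np.asarray(widths_mm, dtype=float)
    recent = t >= t[-1] - 24.0                 # last 24 h of readings
    slope_per_h = np.polyfit(t[recent], w[recent], 1)[0]
    rate_per_day = slope_per_h * 24.0
    return rate_per_day > rate_threshold_mm_per_day, rate_per_day
```

Fitting a slope over a window rather than differencing the last two readings makes the trigger robust to single-sample noise, which matters when an SMS alert has real-world consequences.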


2016 ◽  
Vol 4 (3) ◽  
pp. 627-653 ◽  
Author(s):  
Stuart W. D. Grieve ◽  
Simon M. Mudd ◽  
David T. Milodowski ◽  
Fiona J. Clubb ◽  
David J. Furbish

Abstract. In many locations, our ability to study the processes which shape the Earth is greatly enhanced through the use of high-resolution digital topographic data. However, although the availability of such datasets has markedly increased in recent years, many locations of significant geomorphic interest still do not have high-resolution topographic data available. Here, we aim to constrain how well we can understand surface processes through topographic analysis performed on lower-resolution data. We generate digital elevation models from point clouds at a range of grid resolutions from 1 to 30 m, which covers the range of widely used data resolutions available globally, at three locations in the United States. Using these data, the relationship between curvature and grid resolution is explored, alongside the estimation of the hillslope sediment transport coefficient (D, in m² yr⁻¹) for each landscape. Curvature, and consequently D, values are shown to be generally insensitive to grid resolution, particularly in landscapes with broad hilltops and valleys. Curvature distributions, however, become increasingly condensed around the mean, and theoretical considerations suggest caution should be used when extracting curvature from landscapes with sharp ridges. The sensitivity of curvature and topographic gradient to grid resolution is also explored through analysis of one-dimensional approximations of curvature and gradient, providing a theoretical basis for the results generated using two-dimensional topographic data. Two methods of extracting channels from topographic data are tested. A geometric method of channel extraction that finds channels by detecting threshold values of planform curvature is shown to perform well at resolutions up to 30 m in all three landscapes. The landscape parameters of hillslope length and relief are both successfully extracted at the same range of resolutions. These parameters can be used to detect landscape transience, and our results suggest that such work need not be confined to high-resolution topographic data. A synthesis of the results presented in this work indicates that although high-resolution (e.g., 1 m) topographic data do yield exciting possibilities for geomorphic research, many key parameters can be understood in lower-resolution data, given careful consideration of how analyses are performed.
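The resolution-(in)sensitivity of curvature on smooth landscapes can be illustrated with a minimal finite-difference sketch. The 5-point Laplacian below is a simple stand-in for the polynomial-surface curvature used in this kind of analysis, and the function names are ours.

```python
import numpy as np

def laplacian_curvature(dem, dx):
    """Total curvature of a gridded DEM via a 5-point finite-difference
    Laplacian; NaN on the boundary where the stencil is incomplete."""
    c = np.full_like(dem, np.nan, dtype=float)
    c[1:-1, 1:-1] = (dem[:-2, 1:-1] + dem[2:, 1:-1] +
                     dem[1:-1, :-2] + dem[1:-1, 2:] -
                     4.0 * dem[1:-1, 1:-1]) / dx ** 2
    return c

def block_resample(dem, factor):
    """Coarsen a DEM by block-averaging, mimicking a lower grid resolution."""
    h = dem.shape[0] // factor * factor
    w = dem.shape[1] // factor * factor
    return dem[:h, :w].reshape(h // factor, factor,
                               w // factor, factor).mean(axis=(1, 3))
```

For a smooth quadratic surface, block-averaging to a coarser grid leaves the computed curvature unchanged, consistent with the insensitivity reported for landscapes with broad hilltops; on sharp ridges the averaging would smooth the crest and the coarse estimate would diverge from the fine one.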




Author(s):  
E. Guilbert ◽  
S. Jutras ◽  
T. Badard

Abstract. This paper addresses the problem of extracting the drainage network in forested areas. A precise description of the drainage network, including intermittent streams, is required for the planning of logging operations and environmental conservation. LiDAR now provides high-resolution point clouds from which the terrain is modelled and the drainage extracted, but it also brings some challenges for traditional approaches. First, the raster DTM is interpolated from LiDAR ground points and has to be split into tiles for processing, adding approximations. Second, drainage enforcement techniques alter the terrain and rely on parameters that are difficult to set, limiting the optimisation of the process. In this context, we discuss a new approach aiming at: (1) designing a data structure to model the terrain with a Triangulated Irregular Network in order to avoid interpolation. This data structure must enable the distribution of data and processes across several nodes in Big Data architectures and, ultimately, the processing of complete watersheds with no tiling. (2) Modelling the river network through thalwegs and avoiding the filling and breaching operations. Thalweg detection is more robust, removing the need for filling and breaching. However, it yields a very dense network requiring a simplification step. Combining this model and the architecture will enable the design and modelling of a new tool for river network computation directly from LiDAR ground points. In this paper, we mainly discuss the second point and propose to model the drainage by a network of thalwegs computed from the terrain. Thalwegs are extracted from the surface network, a topological structure with peaks, pits and saddles as vertices and ridges and thalwegs as edges. We present preliminary results comparing the thalweg network and the drainage network.
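The surface-network vertices mentioned above (peaks, pits and saddles) can be detected on a simple gridded terrain by inspecting each cell's 8-neighbourhood. This is a gridded illustration of the concept only, not the authors' TIN-based extraction; the function name is ours.

```python
import numpy as np

def classify_critical_points(z):
    """Classify interior grid cells as peaks, pits or saddles.
    A peak is higher than all 8 neighbours, a pit lower than all of
    them, and a saddle has at least four sign changes of
    (neighbour - centre) when walking once around the ring."""
    peaks, pits, saddles = [], [], []
    for i in range(1, z.shape[0] - 1):
        for j in range(1, z.shape[1] - 1):
            ring = [z[i-1, j-1], z[i-1, j], z[i-1, j+1], z[i, j+1],
                    z[i+1, j+1], z[i+1, j], z[i+1, j-1], z[i, j-1]]
            diffs = [v - z[i, j] for v in ring]
            if all(d < 0 for d in diffs):
                peaks.append((i, j))
            elif all(d > 0 for d in diffs):
                pits.append((i, j))
            else:
                signs = [d > 0 for d in diffs]
                changes = sum(signs[k] != signs[k - 1] for k in range(8))
                if changes >= 4:
                    saddles.append((i, j))
    return peaks, pits, saddles
```

Once these vertices are known, thalwegs can be traced downslope from each saddle toward a pit, and ridges upslope toward a peak, which is what makes the surface network a natural starting point for a dense drainage network.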


Author(s):  
K. Siangchaew ◽  
J. Bentley ◽  
M. Libera

Energy-filtered electron-spectroscopic TEM imaging provides a new way to study the microstructure of polymers without heavy-element stains. Since spectroscopic imaging exploits the signal generated directly by the electron-specimen interaction, it can produce richer and higher-resolution data than is possible with most staining methods. There are basically two ways to collect filtered images (Fig. 1). Spectrum imaging uses a focused probe that is digitally rastered across a specimen, with an entire energy-loss spectrum collected at each x-y pixel to produce a 3-D data set. Alternatively, filtering schemes such as the Zeiss Omega filter and the Gatan Imaging Filter (GIF) acquire individual 2-D images with electrons of a defined range of energy loss (δE), typically 5-20 eV.
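The relationship between the two acquisition modes can be made concrete: given a spectrum-image data cube, the equivalent of an energy-filtered image is the integral of each pixel's spectrum over the energy-selecting window. A minimal numpy sketch (function name and array layout are our own convention):

```python
import numpy as np

def energy_filtered_image(cube, energies_ev, e_lo, e_hi):
    """Collapse a spectrum image (rows, cols, energy-loss channels)
    into a single energy-filtered image by summing the counts inside
    the energy window [e_lo, e_hi] eV, analogous to what an imaging
    filter does optically with its energy-selecting slit."""
    e = np.asarray(energies_ev, dtype=float)
    window = (e >= e_lo) & (e <= e_hi)
    return cube[:, :, window].sum(axis=2)
```

The trade-off the text describes follows directly: the cube retains the full spectrum at every pixel and can be re-windowed after acquisition, while an imaging filter commits to one window per exposure but records all pixels in parallel.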

