Complementary Part Detection and Reassembly of 3D Fragments

2013 ◽  
pp. 703-725
Author(s):  
Vandana Dixit Kaushik ◽  
Phalguni Gupta

This chapter presents an algorithm for identifying the complementary sites of objects broken into two parts. For a given 3D scanned image of a broken object, the algorithm identifies the rough sites of the broken object, transforms the object into a suitable alignment, registers it with the complementary part belonging to the same object, and finds the local correspondence between the fragmented parts. The presented algorithm uses multiple granularity descriptors and edge extraction to detect the exact locations of multiple cleavage sites in the object. This greatly reduces the amount of information to be matched and also helps in identifying the parts; as a result, it reduces the computational time of processing. The algorithm is applicable to all triangulated surface data, even in the presence of noise.
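
As a rough illustration of the registration step described above (a sketch only, not the chapter's actual method, which relies on multiple granularity descriptors and edge extraction to locate the cleavage sites first), the following Python snippet rigidly aligns two sets of presumed-corresponding fracture-surface points with the standard Kabsch/SVD procedure; the point arrays and their correspondence are assumed inputs, and all names are illustrative.

```python
import numpy as np

def rigid_align(source, target):
    """Kabsch/SVD rigid alignment: find R, t minimising ||R @ p + t - q|| over point pairs.

    source, target: (N, 3) arrays of corresponding fracture-surface points
    (correspondence is assumed to have been established already, e.g. by
    descriptor matching on the detected cleavage sites).
    """
    src_c = source - source.mean(axis=0)           # centre both point sets
    tgt_c = target - target.mean(axis=0)
    H = src_c.T @ tgt_c                            # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T        # optimal rotation
    t = target.mean(axis=0) - R @ source.mean(axis=0)
    return R, t

# Usage: align one fragment's cleavage-site points to its complement's.
# fragment_a, fragment_b are hypothetical (N, 3) arrays of matched points.
fragment_a = np.random.rand(100, 3)
rot_90_z = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
fragment_b = fragment_a @ rot_90_z.T + np.array([0.5, 0.2, 0.0])
R, t = rigid_align(fragment_a, fragment_b)
print("alignment residual:", float(np.linalg.norm(fragment_a @ R.T + t - fragment_b)))
```

In practice the correspondences would come from matching descriptors on the detected cleavage sites, and the resulting transform is what brings the two fragments into registration.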


2012 ◽  
Vol 58 (212) ◽  
pp. 1151-1164 ◽  
Author(s):  
R.W. Mcnabb ◽  
R. Hock ◽  
S. O’Neel ◽  
L.A. Rasmussen ◽  
Y. Ahn ◽  
...  

Information about glacier volume and ice thickness distribution is essential for many glaciological applications, but direct measurements of ice thickness can be difficult and costly. We present a new method that calculates ice thickness via an estimate of ice flux. We solve the familiar continuity equation between adjacent flowlines, which decreases the computational time required compared to a solution on the whole grid. We test the method on Columbia Glacier, a large tidewater glacier in Alaska, USA, and compare calculated and measured ice thicknesses, with favorable results. This shows the potential of this method for estimating the ice thickness distribution of glaciers for which only surface data are available. We find that both the mean thickness and the volume of Columbia Glacier were approximately halved over the period 1957–2007, from 281 m to 143 m and from 294 km³ to 134 km³, respectively. Using bedrock slope and considering how waves of thickness change propagate through the glacier, we conduct a brief analysis of the instability of Columbia Glacier, which leads us to conclude that the rapid portion of the retreat may be nearing an end.
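
A minimal sketch of the flux-based idea summarized above, assuming a single flowline discretized into bins; the inputs (surface mass balance, observed surface-elevation change, flow-band width, bin length, column-averaged speed) and their magnitudes are made up for illustration, and the actual method solves the continuity equation between adjacent flowlines rather than along a single one.

```python
import numpy as np

# Hypothetical per-bin inputs along one flowline (upstream -> downstream);
# magnitudes are illustrative, not Columbia Glacier values.
b_dot  = np.array([ 2.0,  1.5,  0.5, -1.0, -3.0])       # surface mass balance [m ice / yr]
dh_dt  = np.array([-0.5, -0.8, -1.0, -1.5, -2.0])       # observed elevation change [m / yr]
width  = np.array([1200., 1100., 1000.,  900.,  800.])  # flow-band width [m]
length = np.array([ 500.,  500.,  500.,  500.,  500.])  # bin length along flow [m]
u_bar  = np.array([ 150.,  250.,  400.,  600.,  900.])  # column-averaged speed [m / yr]

# Continuity: the flux leaving each bin is the accumulated (b_dot - dh/dt) * area upstream.
area = width * length
flux = np.cumsum((b_dot - dh_dt) * area)                 # ice flux through each gate [m^3 / yr]

# Thickness then follows from flux = u_bar * width * H  =>  H = flux / (u_bar * width)
thickness = flux / (u_bar * width)
print(np.round(thickness, 1))                            # estimated ice thickness [m] per gate
```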


2015 ◽  
Author(s):  
Harald G Klimach ◽  
Jens Zudrop ◽  
Sabine P Roller

We propose a robust method to convert triangulated surface data into polynomial volume data. Such polynomial representations are required for high-order partial differential equation solvers, as low-order surface representations would diminish the accuracy of their solution. Our proposed method deploys a first-order spatial bisection algorithm to robustly find an approximation of given geometries. The resulting voxelization is then used to generate Legendre polynomials of arbitrary degree. By embedding the locally defined polynomials in cubical elements of a coarser mesh, this method can reliably approximate even complex structures, like porous media. It is thereby possible to provide appropriate material definitions for high-order discontinuous Galerkin schemes. We describe the method to construct the polynomials and how it fits into the overall mesh generation. Our discussion includes numerical properties of the method, and we show some results from applying it to various geometries. We have implemented the described method in our mesh generator Seeder, which is publicly available under a permissive open-source license.
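
To illustrate the final step (representing a voxel-derived material indicator by Legendre polynomials inside a coarse element), the following one-dimensional numpy sketch projects an indicator function onto Legendre modes by Gauss–Legendre quadrature; it is a simplified stand-in for Seeder's actual 3D procedure, and all names are illustrative.

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_coefficients(indicator, degree, n_quad=64):
    """Project a material-indicator function on [-1, 1] onto Legendre modes.

    indicator: callable returning 1.0 inside the material and 0.0 outside
               (in Seeder this information would come from the voxelization).
    Returns coefficients c_k of sum_k c_k * P_k(x).
    """
    x, w = legendre.leggauss(n_quad)                 # Gauss-Legendre nodes and weights
    f = np.array([indicator(xi) for xi in x])
    coeffs = np.empty(degree + 1)
    for k in range(degree + 1):
        Pk = legendre.legval(x, [0.0] * k + [1.0])   # evaluate P_k at the nodes
        # L2 projection: c_k = (2k+1)/2 * integral of f(x) * P_k(x) over [-1, 1]
        coeffs[k] = (2 * k + 1) / 2.0 * np.sum(w * f * Pk)
    return coeffs

# Usage: a step-like geometry boundary at x = 0.3 inside one cubical element (1D slice).
step = lambda x: 1.0 if x < 0.3 else 0.0
c = legendre_coefficients(step, degree=7)
# Reconstruct the polynomial approximation and sample it:
print(np.round(legendre.legval(np.linspace(-1, 1, 5), c), 3))
```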


Author(s):  
L. FENG ◽  
C. Y. SUEN ◽  
Y. Y. TANG ◽  
L. H. YANG

This paper describes a novel method for edge feature detection in document images based on wavelet decomposition and reconstruction. By applying the wavelet decomposition technique, a document image becomes a wavelet representation, i.e. the image is decomposed into a set of wavelet approximation coefficients and wavelet detail coefficients. By discarding the wavelet approximation, edge extraction is implemented by means of the wavelet reconstruction technique. Since frequency overlap occurs between the wavelet approximation and the wavelet details, a multiresolution edge extraction based on an iterative reconstruction procedure is developed to improve the quality of the reconstructed edges in this case. A novel combination of these multiresolution edges yields clear final edges of the document images. This multiresolution reconstruction procedure follows a coarse-to-fine searching strategy. The edge feature extraction is accompanied by an energy distribution estimation, from which the levels of wavelet decomposition are adaptively controlled. Compared with the standard wavelet transform scheme, our method does not incur any redundant operations; therefore, its computational time and memory requirements are lower.
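
A minimal sketch of the basic idea (reconstruction with the approximation coefficients zeroed out keeps only the detail, i.e. edge-like, content), using the PyWavelets package for illustration; the paper's iterative multiresolution reconstruction and adaptive level selection via the energy distribution are not reproduced here.

```python
import numpy as np
import pywt

def wavelet_edges(image, wavelet="haar", level=2):
    """Reconstruct an image from its wavelet detail coefficients only.

    Zeroing the coarsest approximation before reconstruction discards the
    smooth background, so the result is dominated by edge features.
    """
    coeffs = pywt.wavedec2(image, wavelet, level=level)   # [cA_n, (cH, cV, cD), ...]
    coeffs[0] = np.zeros_like(coeffs[0])                   # discard the approximation
    edges = pywt.waverec2(coeffs, wavelet)
    return edges[:image.shape[0], :image.shape[1]]         # trim possible padding

# Usage on a toy "document": a bright block of text-like pixels on a dark page.
page = np.zeros((64, 64))
page[20:44, 16:48] = 1.0
edge_map = wavelet_edges(page)
print(edge_map.shape, float(np.abs(edge_map).max()))
```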


2015 ◽  
Vol 1 ◽  
pp. e35
Author(s):  
Harald G. Klimach ◽  
Jens Zudrop ◽  
Sabine P. Roller

We propose a robust method to convert triangulated surface data into polynomial volume data. Such polynomial representations are required for high-order partial differential equation solvers, as low-order surface representations would diminish the accuracy of their solution. Our proposed method deploys a first-order spatial bisection algorithm to robustly find an approximation of given geometries. The resulting voxelization is then used to generate Legendre polynomials of arbitrary degree. By embedding the locally defined polynomials in cubical elements of a coarser mesh, this method can reliably approximate even complex structures, like porous media. It is thereby possible to provide appropriate material definitions for high-order discontinuous Galerkin schemes. We describe the method to construct the polynomials and how it fits into the overall mesh generation. Our discussion includes numerical properties of the method, and we show some results from applying it to various geometries. We have implemented the described method in our mesh generator Seeder, which is publicly available under a permissive open-source license.


1966 ◽  
Vol 24 ◽  
pp. 188-189
Author(s):  
T. J. Deeming

If we make a set of measurements, such as narrow-band or multicolour photo-electric measurements, which are designed to improve a scheme of classification, and in particular if they are designed to extend the number of dimensions of classification, i.e. the number of classification parameters, then some important problems of analytical procedure arise. First, it is important not to reproduce the errors of the classification scheme which we are trying to improve. Second, when trying to extend the number of dimensions of classification we have little or nothing with which to test the validity of the new parameters. Problems similar to these have occurred in other areas of scientific research (notably psychology and education), and the branch of Statistics called Multivariate Analysis has been developed to deal with them. The techniques of this subject are largely unknown to astronomers, but, if carefully applied, they should at the very least ensure that the astronomer gets the maximum amount of information out of his data and does not waste his time looking for information which is not there. More optimistically, these techniques are potentially capable of indicating the number of classification parameters necessary and giving specific formulas for computing them, as well as pinpointing those particular measurements which are most crucial for determining the classification parameters.
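
Multivariate analysis of this kind is commonly illustrated with principal component analysis, which does what the abstract describes: it suggests how many independent classification parameters the measurements support and gives explicit linear formulas for computing them. The sketch below is a generic illustration on made-up narrow-band magnitudes, not an analysis drawn from the paper.

```python
import numpy as np

# Hypothetical table: rows are stars, columns are narrow-band magnitudes driven
# by two underlying physical parameters plus measurement noise.
rng = np.random.default_rng(0)
temperature = rng.normal(size=200)
gravity = rng.normal(size=200)
measurements = np.column_stack([
    1.0 * temperature + 0.1 * rng.normal(size=200),
    0.8 * temperature + 0.3 * gravity + 0.1 * rng.normal(size=200),
    0.2 * gravity + 0.1 * rng.normal(size=200),
    0.9 * temperature + 0.1 * rng.normal(size=200),
])

# Principal component analysis via the eigendecomposition of the correlation matrix.
standardized = (measurements - measurements.mean(0)) / measurements.std(0)
corr = np.corrcoef(standardized, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# The number of eigenvalues well above the noise floor suggests how many
# classification parameters the data actually support; the eigenvectors give
# the explicit formulas (linear combinations of the measurements).
print("variance explained:", np.round(eigvals / eigvals.sum(), 2))
components = standardized @ eigvecs[:, :2]        # the derived classification parameters
print(components.shape)
```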


1999 ◽  
Vol 173 ◽  
pp. 309-314 ◽  
Author(s):  
T. Fukushima

By using the stability condition and general formulas developed by Fukushima (1998 = Paper I), we discovered that, just as in the case of the explicit symmetric multistep methods (Quinlan and Tremaine, 1990), when integrating orbital motions of celestial bodies the implicit symmetric multistep methods used in the predictor-corrector manner lead to integration errors in position which grow linearly with the integration time, provided the stepsizes adopted are sufficiently small and the number of corrections is sufficiently large, say two or three. We also confirmed that the symmetric methods (explicit or implicit) would produce the stepsize-dependent instabilities/resonances discovered by A. Toomre in 1991 and confirmed by G.D. Quinlan for some high-order explicit methods. Although the implicit methods require twice or more the computational time of the explicit symmetric methods for the same stepsize, they seem preferable since they reduce these undesirable features significantly.
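
As a toy illustration of an implicit symmetric multistep method used in predictor-corrector mode (the paper's methods are of much higher order), the sketch below integrates x'' = f(x) with the two-step implicit Störmer–Cowell (Numerov) corrector, predicted by the explicit two-step Störmer formula and corrected a fixed small number of times; the test problem and step count are made up.

```python
import numpy as np

def numerov_pec(f, x0, x1, h, n_steps, n_corrections=2):
    """Two-step symmetric multistep integration of x'' = f(x) in P(EC)^k mode.

    Predictor (explicit Stormer):  x_{n+1} = 2 x_n - x_{n-1} + h^2 f_n
    Corrector (implicit Numerov):  x_{n+1} = 2 x_n - x_{n-1}
                                           + h^2/12 (f_{n+1} + 10 f_n + f_{n-1})
    """
    xs = [x0, x1]
    for _ in range(n_steps):
        f_nm1, f_n = f(xs[-2]), f(xs[-1])
        x_new = 2 * xs[-1] - xs[-2] + h * h * f_n                 # predict
        for _ in range(n_corrections):                            # evaluate + correct
            x_new = (2 * xs[-1] - xs[-2]
                     + h * h / 12.0 * (f(x_new) + 10 * f_n + f_nm1))
        xs.append(x_new)
    return np.array(xs)

# Usage: harmonic oscillator x'' = -x, exact solution cos(t).
h = 0.05
x = numerov_pec(lambda x: -x, 1.0, np.cos(h), h, n_steps=400)
t = h * np.arange(len(x))
print("max position error:", float(np.max(np.abs(x - np.cos(t)))))
```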


Author(s):  
Hilton H. Mollenhauer

Many factors (e.g., resolution of the microscope, type of tissue, and preparation of the sample) affect electron microscopical images and alter the amount of information that can be retrieved from a specimen. Of interest in this report are those factors associated with the evaluation of epoxy-embedded tissues. In this context, information retrieval is dependent, in part, on the ability to “see” sample detail (e.g., contrast) and, in part, on the quality of sample preservation. Two aspects of this problem will be discussed: 1) epoxy resins and their effect on image contrast, information retrieval, and sample preservation; and 2) the interaction between some stains commonly used for enhancing contrast and information retrieval.

