Surface Meshes Incremental Decimation Framework

2008 ◽  
Author(s):  
Arnaud Gelas ◽  
Alexandre Gouaillard ◽  
Sean Megason

When dealing with meshes, it is often preferable to work with a lower-resolution mesh, both to reduce computational time and to speed up display. The process of reducing a given mesh, known as mesh decimation, is thus an important step in most pipelines that deal with meshes. Incremental decimation algorithms, the most popular family, iteratively remove one point of the mesh at a time by Euler operations such as vertex removal or edge collapse. Here we focus on edge-collapse-based decimation approaches and propose a general framework built on a surface mesh data structure (itk::QuadEdgeMesh [3]). Our implementation is intended to be as general and as flexible as possible: it can theoretically be applied to any polygonal mesh, and the measure (the functional to be optimized at each iteration), the objective to be reached, and optional methods such as point relocation to enhance the geometry of the resulting mesh are all supplied by the user. We provide two specific implementations, itk::QuadEdgeMeshSquaredEdgeLengthDecimation and itk::QuadEdgeMeshQuadricDecimation, that can be used as examples for implementing additional algorithms.
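The edge-collapse loop the abstract describes can be sketched in a few lines. Below is a minimal, illustrative Python version of the squared-edge-length criterion: repeatedly collapse the shortest edge to its midpoint until a face budget is reached. This is a toy sketch, not the itk::QuadEdgeMeshSquaredEdgeLengthDecimation implementation, and it omits the topology and geometry safeguards a real filter performs.

```python
def decimate(points, faces, target_faces):
    # Greedy edge-collapse decimation sketch: pick the edge with the
    # smallest squared length, collapse it to its midpoint, drop faces
    # that become degenerate, and repeat until the face budget is met.
    pts = {i: list(p) for i, p in enumerate(points)}   # live vertices
    fcs = {tuple(f) for f in faces}                    # live triangles

    def sq(a, b):  # squared edge length (the criterion being minimized)
        return sum((pts[a][k] - pts[b][k]) ** 2 for k in range(3))

    while len(fcs) > target_faces:
        edges = {tuple(sorted((f[i], f[(i + 1) % 3])))
                 for f in fcs for i in range(3)}
        a, b = min(edges, key=lambda e: sq(*e))        # cheapest edge
        pts[a] = [(pts[a][k] + pts[b][k]) / 2 for k in range(3)]  # midpoint
        del pts[b]
        fcs = {tuple(a if v == b else v for v in f)
               for f in fcs
               if len({a if v == b else v for v in f}) == 3}  # drop degenerates
    return pts, fcs
```

Each collapse removes at least one face (the faces incident to the collapsed edge degenerate), so the loop terminates whenever the target is reachable.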

2005 ◽  
Vol 21 (2) ◽  
pp. 91-100 ◽  
Author(s):  
Irina Semenova ◽  
Nikita Kozhekin ◽  
Vladimir Savchenko ◽  
Ichiro Hagiwara

2009 ◽  
Author(s):  
Arnaud Gelas ◽  
Alexandre Gouaillard ◽  
Sean Megason

This paper describes the implementation of a surface smoothing filter in ITK, based on the Quad Edge Mesh surface data structure.


Author(s):  
Thomas Backhaus ◽  
Thomas Maywald ◽  
Sven Schrape ◽  
Matthias Voigt ◽  
Ronald Mailach

This paper presents a way to capture the geometric blade-by-blade variations of a milled-from-solid blisk as well as the manufacturing scatter. Within this approach, it is essential to digitize the relevant airfoil surface as accurately as possible in order to create a valid surface mesh as the basis for the subsequent evaluation tasks. Since such huge surface meshes are not easy to handle, and even harder to distill into quantified, easily interpretable results, an easily accessible way of presenting the geometric variation should be sought. The presented idea uses a section-based airfoil parametrization, built on an extended NACA-airfoil structure, to ensure that all occurring characteristic geometry variations are captured. This paper shows how the adapted parametrization method is suitable for outlining all the geometric blade-by-blade variation and, moreover, for relating those airfoil design parameters to modal analysis results such as the natural frequencies of the main mode shapes. In this way, the dependencies between the modal and airfoil parameters are demonstrated.
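For context, the classical NACA 4-digit half-thickness distribution that section-based parametrizations of this kind extend can be written directly. This is the textbook formula, not the authors' extended parametrization:

```python
def naca4_half_thickness(x, t):
    # Standard NACA 4-digit half-thickness distribution, chord-normalized.
    # x: chordwise position in [0, 1]; t: maximum thickness as a fraction
    # of chord. Maximum half-thickness t/2 occurs at x = 0.3.
    return 5.0 * t * (0.2969 * x ** 0.5 - 0.1260 * x - 0.3516 * x ** 2
                      + 0.2843 * x ** 3 - 0.1015 * x ** 4)
```

For a NACA 0012-style section (t = 0.12), the half-thickness peaks at about 0.06 chord at the 30% chord station.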


Clustering is one of the relevant knowledge-engineering methods of data analysis, and the chosen clustering method directly affects the resulting dataset. The proposed work aims at developing an Extended Advanced Method of Clustering (EAMC) to address numerous types of issues associated with large and high-dimensional datasets. The proposed method repeatedly avoids recomputing distances between each data object and the cluster that contains it, which saves execution time. For each iteration, EAMC uses a data structure to store results that can be reused in the next iteration. The outcomes obtained from the proposed method demonstrate an improvement in the effectiveness, speed, and precision of clustering, reducing the computational complexity relative to older algorithms such as SOM, HAC, and K-means. This paper presents EAMC together with experimental results on academic datasets.
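The caching idea described above, reusing per-object results between iterations to skip full distance scans, can be illustrated on K-means. This is a hypothetical sketch of that style of optimization, not the authors' EAMC algorithm; first-k centroid seeding is used only to keep the example deterministic.

```python
def kmeans_cached(data, k, iters=50):
    # K-means with a per-point cache of (cluster label, distance): if a
    # point is at least as close to its previous centroid as before, the
    # full scan over all k centroids is skipped for that point.
    def d2(p, c):
        return sum((a - b) ** 2 for a, b in zip(p, c))

    centroids = [list(data[i]) for i in range(k)]  # deterministic seeding
    label = [-1] * len(data)
    dist = [float("inf")] * len(data)
    for _ in range(iters):
        moved = False
        for i, p in enumerate(data):
            if label[i] >= 0:
                nd = d2(p, centroids[label[i]])
                if nd <= dist[i]:            # cache hit: skip the full scan
                    dist[i] = nd
                    continue
            j = min(range(k), key=lambda c: d2(p, centroids[c]))
            if j != label[i]:
                moved = True
            label[i], dist[i] = j, d2(p, centroids[j])
        for c in range(k):                   # recompute centroids
            members = [data[i] for i in range(len(data)) if label[i] == c]
            if members:
                centroids[c] = [sum(col) / len(members) for col in zip(*members)]
        if not moved:
            break
    return label, centroids
```

The shortcut is approximate (a point whose cached distance improved is never rechecked against the other centroids that iteration), which is the usual trade-off such caching schemes make for speed.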


2008 ◽  
Author(s):  
Arnaud Gelas ◽  
Alexandre Gouaillard ◽  
Sean Megason

Computing local curvatures of a given surface is important for applications such as shape analysis, surface segmentation, meshing, and surface evolution. For a smooth surface with a sufficiently differentiable analytical expression, curvatures can be computed analytically and directly. In real applications, however, one often deals with a surface mesh, which is an insufficiently differentiable approximation, and curvatures must therefore be estimated. Based on a surface mesh data structure (itk::QuadEdgeMesh), we introduce and implement curvature estimators following the approach of Meyer et al. We show on a sphere that this method yields more stable curvature approximations than the commonly used discrete estimators (as implemented in VTK).
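The simplest of the commonly used discrete estimators the abstract refers to is the angle deficit for Gaussian curvature: 2π minus the sum of the incident triangle angles at a vertex. Meyer et al. additionally normalize by a per-vertex "mixed area" to obtain a pointwise value; the sketch below computes only the raw deficit.

```python
import math

def angle_deficits(points, faces):
    # Discrete Gaussian curvature via angle deficit: at each vertex,
    # subtract every incident triangle angle from 2*pi. By Gauss-Bonnet,
    # the deficits over a closed genus-0 mesh sum to 4*pi.
    deficits = [2.0 * math.pi] * len(points)
    for f in faces:
        for i in range(3):
            o, a, b = f[i], f[(i + 1) % 3], f[(i - 1) % 3]
            u = [points[a][k] - points[o][k] for k in range(3)]
            v = [points[b][k] - points[o][k] for k in range(3)]
            dot = sum(x * y for x, y in zip(u, v))
            nu = math.sqrt(sum(x * x for x in u))
            nv = math.sqrt(sum(x * x for x in v))
            # clamp guards against floating-point drift outside [-1, 1]
            deficits[o] -= math.acos(max(-1.0, min(1.0, dot / (nu * nv))))
    return deficits
```

On a regular octahedron, every vertex sees four 60-degree angles, so each deficit is 2π/3 and the six deficits sum to 4π, as Gauss-Bonnet requires.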


2004 ◽  
Vol 14 (06) ◽  
pp. 379-402 ◽  
Author(s):  
XIANGMIN JIAO ◽  
MICHAEL T. HEATH

We describe an efficient and robust algorithm for computing a common refinement of two meshes modeling the same surface of arbitrary shape by overlaying them on top of each other. A common refinement is an important data structure for transferring data between meshes that have different combinatorial structures. Our algorithm is optimal in time and space, with linear complexity, and is robust even with inexact computations, through the techniques of error analysis, detection of topological inconsistencies, and automatic resolution of such inconsistencies. We present the verification and some further enhancement of robustness in Part II.


2007 ◽  
Vol 07 (02) ◽  
pp. 273-290
Author(s):  
YOTAM LIVNY ◽  
NETA SOKOLOVSKY ◽  
JIHAD EL-SANA

The recent increase in generated polygonal dataset sizes has outpaced the performance of graphics hardware. Several solutions, such as multiresolution hierarchies and level-of-detail rendering, have been developed to bridge the widening gap. However, discrete levels of detail generate annoying popping effects, earlier multiresolution schemes cannot perform drastic changes on the selected level of detail within the span of a small number of frames, and current cluster-based hierarchies suffer from overly detailed representation of the boundaries between clusters. In this paper, we present a novel multiresolution hierarchy that supports two run-time adaptive simplification paths: a fine one and a coarse one. The proposed hierarchy is based on the fan-merge operator and its inverse, fan-split. The coarse simplification path is achieved by directly applying fan-merge/split, while the fine path executes edge-collapse/vertex-split operations one at a time. The sequence of edge-collapses/vertex-splits is encoded implicitly by the order of the children participating in the fan-merge/split operator. We refer to this multiresolution hierarchy as the fan-hierarchy. The fan-hierarchy provides a compact data structure, since it stores 7/6 pointers per node on average, instead of 3 pointers for each node. In addition, its depth is usually smaller than that of hierarchies generated by edge-collapse-based multiresolution schemes. It is also important to note that the fan-hierarchy inherently utilizes fan representation to further accelerate rendering.


2021 ◽  
Vol 12 (2) ◽  
pp. 207-219
Author(s):  
Sergei Sergeevich Shumilin

In numerical modeling tasks that use surface meshes, remeshing is often required. Remeshing, however, can introduce distortions, and their accumulation can lead to the collapse of the solution. Smoothing algorithms are used to maintain mesh quality during the calculation. When smoothing with methods that shift mesh nodes, the boundary nodes are usually fixed to avoid distortion; simply fixing them, however, can lead to even more severe distortion. This paper presents methods for controlling boundary nodes during the smoothing process. Algorithms for working with pseudo-3D surface meshes, which are of particular interest, are also considered.
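A minimal example of node-shifting smoothing with fixed boundary nodes, as described above, is plain Laplacian smoothing. This is an illustrative sketch, not the paper's boundary-control algorithms:

```python
def laplacian_smooth(points, faces, boundary, iters=10, lam=0.5):
    # Laplacian smoothing: each interior node moves a fraction `lam`
    # toward the centroid of its mesh neighbours; nodes listed in
    # `boundary` are held fixed, the simple strategy the paper improves on.
    pts = [list(p) for p in points]
    nbrs = [set() for _ in pts]
    for f in faces:  # adjacency from triangle connectivity
        for i in range(3):
            nbrs[f[i]].update((f[(i + 1) % 3], f[(i - 1) % 3]))
    fixed = set(boundary)
    for _ in range(iters):
        new = [p[:] for p in pts]
        for v in range(len(pts)):
            if v in fixed or not nbrs[v]:
                continue
            cen = [sum(pts[n][k] for n in nbrs[v]) / len(nbrs[v])
                   for k in range(3)]
            new[v] = [p + lam * (c - p) for p, c in zip(pts[v], cen)]
        pts = new
    return pts
```

On a square with a displaced centre vertex, the boundary corners stay put while the centre relaxes toward the plane of its neighbours.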


Electronics ◽  
2021 ◽  
Vol 10 (23) ◽  
pp. 2914
Author(s):  
Roman Novak ◽  
Andrej Hrovat ◽  
Michael D. Bedford ◽  
Tomaž Javornik

Natural caves show some similarities to human-made tunnels, which have previously been the subject of radio-frequency propagation modelling using deterministic ray-tracing techniques. Since natural caves are non-uniform because of their inherent concavity and irregular limestone formations, detailed 3D models contain a large number of small facets, which can have a detrimental impact on the ray-tracing computational complexity as well as on the modelling accuracy. Here, we analyse the performance of ray tracing in repeatedly simplified 3D descriptions of two caves in the UK, i.e., Kingsdale Master Cave (KMC) Roof Tunnel and Skirwith Cave. The trade-off between the size of the reflection surface and the modelling accuracy is examined. Further, by reducing the number of facets, simulation time can be reduced significantly. Two simplification methods from computer graphics were applied: Vertex Clustering and Quadric Edge Collapse. We compare the ray-tracing results to the experimental measurements and to the channel modelling based on the modal theory. We show Edge Collapse to be better suited for the task than Vertex Clustering, with larger simplifications being possible before the passage becomes entirely blocked. The use of model simplification is predominantly justified by the computational time gains, with the acceptable simplified geometries roughly halving the execution time given the laser scanning resolution of 10 cm.
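Of the two simplification methods compared, Vertex Clustering is simple enough to sketch: snap vertices to a uniform grid, merge those sharing a cell, and drop facets that become degenerate. This is a minimal illustration of the idea, not the exact implementation used in the study (and Quadric Edge Collapse, which orders edge collapses by a quadric error metric, is considerably more involved):

```python
import math

def vertex_clustering(points, faces, cell):
    # Vertex Clustering simplification: all vertices falling into the same
    # grid cell of size `cell` are merged into one representative (here,
    # the cell centre), and degenerate faces are discarded.
    cell_of = {}            # grid cell -> new vertex index
    new_pts, remap = [], []
    for p in points:
        key = tuple(int(math.floor(c / cell)) for c in p)
        if key not in cell_of:
            cell_of[key] = len(new_pts)
            new_pts.append([(k + 0.5) * cell for k in key])  # cell centre
        remap.append(cell_of[key])
    new_faces = []
    for f in faces:
        g = tuple(remap[v] for v in f)
        if len(set(g)) == 3:                # drop collapsed triangles
            new_faces.append(g)
    return new_pts, new_faces
```

The grid cell size plays the role the paper's simplification level does: coarser cells merge more vertices, which is exactly what eventually blocks narrow passages in the cave geometry.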


2008 ◽  
Author(s):  
Arnaud Gelas ◽  
Alexandre Gouaillard ◽  
Sean Megason

We have previously developed a new surface mesh data structure in ITK (itk::QuadEdgeMesh). In this document we describe a new filter to estimate normals for a given triangular surface mesh in this data structure, and detail its implementation and use.
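A common baseline for the task this filter performs is to average the incident triangle normals at each vertex, weighted by area via the cross product, and then normalize. The sketch below is hypothetical and not necessarily the weighting scheme the described ITK filter uses:

```python
import math

def vertex_normals(points, faces):
    # Per-vertex normals: accumulate each incident triangle's cross
    # product (whose length is twice the triangle area, giving area
    # weighting for free), then normalize the accumulated vector.
    n = [[0.0, 0.0, 0.0] for _ in points]
    for a, b, c in faces:
        u = [points[b][k] - points[a][k] for k in range(3)]
        v = [points[c][k] - points[a][k] for k in range(3)]
        fn = [u[1] * v[2] - u[2] * v[1],
              u[2] * v[0] - u[0] * v[2],
              u[0] * v[1] - u[1] * v[0]]
        for vtx in (a, b, c):
            for k in range(3):
                n[vtx][k] += fn[k]
    for i, vec in enumerate(n):
        norm = math.sqrt(sum(x * x for x in vec))
        if norm > 0:
            n[i] = [x / norm for x in vec]
    return n
```

On a flat, consistently oriented patch every vertex normal comes out as the plane normal, the sanity check usually applied to such estimators.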

