Triangulation Method of 3D Scattered Data Points Based on CAD Model

2011 ◽  
Vol 52-54 ◽  
pp. 139-143
Author(s):  
Shao Ke Chen ◽  
Hui Qun Chen

A new method of triangulation for large-scale scattered 3D point sets is proposed. The method builds on an available CAD model and follows a divide-and-conquer (DC) strategy: first, alignment between the data and the CAD model, a registration step that establishes correspondence between the data points and points on the CAD trimmed NURBS surface entities; second, 2D Delaunay triangulation performed on the corresponding points in the parametric (u, v) domain of each entity, with the resulting connectivity structure applied to the 3D data points of each mesh patch; third, elimination of redundant triangles in each 3D mesh patch and stitching of the patches together. Unlike many other methods, it is not constrained to particular measurement distributions or object shapes. Experimental results confirm that the approach is feasible and efficient.
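As a minimal sketch of the parametric-domain step, assuming SciPy: triangulate the corresponding points in (u, v) and reuse the same connectivity for the 3D points of the patch. The array names uv and xyz are illustrative, not from the paper.

```python
# Step 2 of the pipeline above: 2D Delaunay triangulation in the (u, v)
# parametric domain, with the connectivity reused for the 3D points.
import numpy as np
from scipy.spatial import Delaunay

def triangulate_patch(uv: np.ndarray, xyz: np.ndarray) -> np.ndarray:
    """uv: (N, 2) parameter coordinates; xyz: (N, 3) corresponding 3D points.
    Returns (M, 3) triangle indices valid for both arrays."""
    assert len(uv) == len(xyz)
    tri = Delaunay(uv)       # triangulate in the flat parameter domain
    return tri.simplices     # each row indexes three corresponding 3D points
```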

2021 ◽  
Vol 16 (4) ◽  
pp. 579-587
Author(s):  
Pitisit Dillon ◽  
Pakinee Aimmanee ◽  
Akihiko Wakai ◽  
Go Sato ◽  
Hoang Viet Hung ◽  
...  

The density-based spatial clustering of applications with noise (DBSCAN) algorithm is a well-known algorithm for clustering spatial point clouds. It can be applied to many applications, such as crack detection, rockfall detection, and glacier movement detection. Traditional DBSCAN requires two predefined parameters, and suitable values depend on the distribution of the input point cloud, so estimating them is challenging. This paper proposes a new version of DBSCAN that can automatically customize the parameters. The proposed method consists of two processes: initial parameter estimation based on grid analysis, and DBSCAN based on the divide-and-conquer (DC-DBSCAN) approach, which repeatedly performs DBSCAN on each cluster separately and recursively. To verify the proposed method, we applied it to a 3D point cloud dataset of 15,567 points that was used to analyze rockfall events at the Puigcercós cliff, Spain. The experimental results show that the proposed method outperforms traditional DBSCAN in terms of purity and NMI scores: purity of 96.22% versus 91.09%, and NMI of 0.78 versus 0.49. It can also detect events that traditional DBSCAN cannot.
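A minimal sketch of the recursive divide-and-conquer idea, assuming scikit-learn; the k-distance heuristic below is an illustrative stand-in for the paper's grid-based parameter estimation, not its exact formula.

```python
# Recursive DBSCAN: cluster, then re-cluster each cluster with locally
# re-estimated eps, following the divide-and-conquer (DC-DBSCAN) idea.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.neighbors import NearestNeighbors

def estimate_eps(points, min_pts):
    # k-distance heuristic standing in for the paper's grid analysis.
    dists, _ = NearestNeighbors(n_neighbors=min_pts).fit(points).kneighbors(points)
    return float(np.median(dists[:, -1]))

def dc_dbscan(points, min_pts=4, depth=0, max_depth=3):
    labels = DBSCAN(eps=estimate_eps(points, min_pts),
                    min_samples=min_pts).fit_predict(points)
    if depth == max_depth:
        return labels
    out = np.full(len(points), -1)   # -1 marks noise
    next_id = 0
    for c in set(labels) - {-1}:
        idx = np.flatnonzero(labels == c)
        sub = dc_dbscan(points[idx], min_pts, depth + 1, max_depth)
        for s in set(sub) - {-1}:    # relabel subclusters globally
            out[idx[sub == s]] = next_id
            next_id += 1
    return out
```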


2018 ◽  
Vol 225 ◽  
pp. 06023 ◽  
Author(s):  
Samsul Ariffin Bin Abdul Karim ◽  
Azizan Saaban

Scattered data techniques are important for visualizing surface data geometrically, especially for terrain, earthquake, geochemical distribution, and rainfall data. The main objective of this study is to visualize terrain data using cubic Ball triangular patches. First, the terrain data are triangulated using Delaunay triangulation. Partial derivatives are then estimated at the data points. A sufficient condition for C1 continuity is derived for each triangle. Finally, a convex combination of three rational local schemes is used to construct the surface. The scheme is tested by visualizing terrain data collected in the central region of Malaysia.
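As a sketch of the first two steps, assuming SciPy: Delaunay-triangulate the scattered samples, then estimate the partial derivatives at each point by a local least-squares plane fit over its triangulation neighbors. The least-squares estimator is an assumption; the paper's derivative-estimation method may differ.

```python
# Triangulate scattered terrain samples and estimate (dz/dx, dz/dy) per point.
import numpy as np
from scipy.spatial import Delaunay

def triangulate_and_grads(xy, z):
    """xy: (N, 2) sample locations; z: (N,) heights."""
    tri = Delaunay(xy)
    indptr, indices = tri.vertex_neighbor_vertices
    grads = np.zeros((len(xy), 2))
    for i in range(len(xy)):
        nbr = indices[indptr[i]:indptr[i + 1]]   # neighbors of vertex i
        A = xy[nbr] - xy[i]                      # planar offsets
        b = z[nbr] - z[i]                        # height differences
        grads[i] = np.linalg.lstsq(A, b, rcond=None)[0]
    return tri, grads
```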


Author(s):  
Y. Z. Wang ◽  
Y. H. Chen

Abstract In this paper, a novel method is proposed for prototyping digitized data with rapid prototyping technologies without constructing a CAD model. First, an optimized STL file (the de facto file format for rapid prototyping machines) is constructed directly from digitized part data. To reduce storage space and increase computational efficiency for subsequent processes such as slicing, significant data reduction can be achieved at the user's discretion by deleting data points in planar and near-planar regions. Points around the 'blank region' left by deleted triangles are linked through re-triangulation to form triangular facets obeying STL file rules. To obtain an optimized re-triangulation result, a genetic algorithm (GA) is developed and implemented. Finally, experiments with different amounts of data reduction on a digitized sample are conducted, with satisfactory results.
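A minimal sketch of the planarity test behind the data reduction, assuming unit normals are available per point; the normal-alignment threshold is an illustrative stand-in for the paper's criterion.

```python
# Flag points whose k nearest neighbors all have normals nearly parallel to
# the point's own normal, i.e. points lying in planar or near-planar regions.
import numpy as np
from scipy.spatial import cKDTree

def near_planar_mask(points, normals, k=8, angle_tol_deg=5.0):
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k + 1)         # column 0 is the point itself
    cos_tol = np.cos(np.radians(angle_tol_deg))
    mask = np.zeros(len(points), dtype=bool)
    for i, nbr in enumerate(idx):
        cosines = normals[nbr[1:]] @ normals[i]  # alignment with own normal
        mask[i] = np.all(cosines >= cos_tol)
    return mask                                  # True = candidate for deletion
```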


Author(s):  
S. A. M. Ariff ◽  
S. Azri ◽  
U. Ujang ◽  
A. A. M. Nasir ◽  
N. Ahmad Fuad ◽  
...  

Abstract. Current 3D scanning technologies allow us to acquire accurate 3D data of large-scale environments efficiently. Such data are essential when generating 3D models for the visualization of smart cities. Seamless visualization of these models requires large volumes of acquired 3D data, yet processing large datasets is time consuming and demands suitable hardware. In this study, the capability of different hardware configurations to process large 3D point clouds for mesh generation is investigated. Light Detection and Ranging (LiDAR) data from airborne and Mobile Mapping System (MMS) platforms are used as input and processed using Bentley ContextCapture software. The study is conducted in Malaysia, specifically in Wilayah Persekutuan Kuala Lumpur and Selangor, covering an area of 49 km². Several analyses were performed to assess software and hardware requirements based on the 3D mesh models generated. Based on the findings, we suggest the most suitable hardware specification for 3D mesh model generation.


2020 ◽  
Vol 34 (04) ◽  
pp. 6696-6703
Author(s):  
Rong Yin ◽  
Yong Liu ◽  
Lijing Lu ◽  
Weiping Wang ◽  
Dan Meng

Kernel Regularized Least Squares (KRLS) is a fundamental learner in machine learning. However, due to its high time and space requirements, it cannot handle large-scale scenarios. We therefore propose DC-NY, a novel algorithm that combines the divide-and-conquer method, Nyström approximation, conjugate gradient, and preconditioning to scale up KRLS; it matches the accuracy of exact KRLS while having the lowest time and space complexity among state-of-the-art approximate KRLS estimators. We present a theoretical analysis of DC-NY, including a novel error decomposition with optimal statistical accuracy guarantees. Extensive experimental results on several real-world large-scale datasets containing up to 1M data points show that DC-NY significantly outperforms state-of-the-art approximate KRLS estimators.
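A minimal sketch of the divide-and-conquer component alone, assuming scikit-learn: partition the data, fit one kernel ridge model per block, and average predictions at test time. The Nyström features, conjugate gradient solver, and preconditioning that distinguish DC-NY are omitted.

```python
# Divide-and-conquer kernel ridge regression: per-block fits, averaged predictions.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

def dc_krr_fit(X, y, n_blocks=8, alpha=1e-3, gamma=0.1, seed=0):
    rng = np.random.default_rng(seed)
    blocks = np.array_split(rng.permutation(len(X)), n_blocks)
    return [KernelRidge(alpha=alpha, kernel="rbf", gamma=gamma).fit(X[b], y[b])
            for b in blocks]

def dc_krr_predict(models, X_new):
    return np.mean([m.predict(X_new) for m in models], axis=0)
```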


2014 ◽  
Vol 2014 ◽  
pp. 1-9 ◽  
Author(s):  
Jianying Yuan ◽  
Qiong Wang ◽  
Xiaoliang Jiang ◽  
Bailin Li

When measuring a large-scale object with structured-light scanning, multiview 3D registration precision decreases as the number of sequential registrations grows. In this paper, we propose a high-precision registration method based on multiple-view geometry theory to solve this problem. First, a multiview network is constructed during the scanning process. The bundle adjustment method from digital close-range photogrammetry is used to optimize the multiview network and obtain high-precision global control points. The 3D data in the local coordinate frame of each scan are then registered to the global control points. The method avoids the error accumulation of the traditional sequential registration process and reduces the time spent on subsequent global optimization of the 3D data, increasing both the precision and the efficiency of multiview 3D scan registration. Experiments verify the effectiveness of the proposed algorithm.
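A minimal sketch of the per-scan alignment step, assuming the scan's control points are already matched to the global ones: a reflection-safe Kabsch/Umeyama fit of the rigid transform from the scan's local frame to the global control points. The bundle-adjustment stage itself is not shown.

```python
# Least-squares rigid transform (R, t) mapping local control points onto
# their globally optimized counterparts.
import numpy as np

def rigid_align(local_pts, global_pts):
    """Both inputs are (N, 3) with rows in correspondence."""
    mu_l, mu_g = local_pts.mean(axis=0), global_pts.mean(axis=0)
    H = (local_pts - mu_l).T @ (global_pts - mu_g)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                               # avoid reflections
    t = mu_g - R @ mu_l
    return R, t                                      # R @ local + t ~ global
```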


Diversity ◽  
2019 ◽  
Vol 11 (7) ◽  
pp. 109 ◽  
Author(s):  
Rebecca T. Kimball ◽  
Carl H. Oliveros ◽  
Ning Wang ◽  
Noor D. White ◽  
F. Keith Barker ◽  
...  

It has long been appreciated that analyses of genomic data (e.g., whole-genome sequencing or sequence capture) have the potential to reveal the tree of life, but it remains challenging to move from sequence data to a clear understanding of evolutionary history, in part due to the computational challenges of phylogenetic estimation with genome-scale data. Supertree methods address that challenge because they facilitate a divide-and-conquer approach to large-scale phylogeny inference, integrating smaller subtrees in a computationally efficient manner. Here, we combined information from sequence-capture and whole-genome phylogenies using supertree methods. However, the available phylogenomic trees had limited overlap, so we used taxon-rich (but not phylogenomic) megaphylogenies to weave them together. This allowed us to construct a phylogenomic supertree, with support values, that included 707 bird species (~7% of avian species diversity). We estimated branch lengths using mitochondrial sequence data and used these branch lengths to estimate divergence times. Our time-calibrated supertree supports the radiation of all three major avian clades (Palaeognathae, Galloanseres, and Neoaves) near the Cretaceous-Paleogene (K-Pg) boundary. The approach we used will permit the continued addition of taxa to this supertree as new phylogenomic data are published, and it could be applied to other taxa as well.


2019 ◽  
Vol 35 (14) ◽  
pp. i417-i426 ◽  
Author(s):  
Erin K Molloy ◽  
Tandy Warnow

Abstract
Motivation: At RECOMB-CG 2018, we presented NJMerge and showed that it could be used within a divide-and-conquer framework to scale computationally intensive methods for species tree estimation to larger datasets. However, NJMerge has two significant limitations: it can fail to return a tree and, when used within the proposed divide-and-conquer framework, has O(n⁵) running time for datasets with n species.
Results: Here we present a new method called 'TreeMerge' that improves on NJMerge in two ways: it is guaranteed to return a tree, and it runs dramatically faster within the same divide-and-conquer framework, in only O(n²) time. We use a simulation study to evaluate TreeMerge in the context of multi-locus species tree estimation with two leading methods, ASTRAL-III and RAxML. We find that the divide-and-conquer framework using TreeMerge has a minor impact on species tree accuracy, dramatically reduces running time, and enables both ASTRAL-III and RAxML to complete on datasets that they would otherwise fail on, given 64 GB of memory and a 48 h maximum running time. Thus, TreeMerge is a step toward a larger vision of enabling researchers with limited computational resources to perform large-scale species tree estimation, which we call Phylogenomics for All.
Availability and implementation: TreeMerge is publicly available on GitHub (http://github.com/ekmolloy/treemerge).
Supplementary information: Supplementary data are available at Bioinformatics online.
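As a hedged skeleton of the divide-and-conquer framework being evaluated, with every callable a placeholder for an external tool (the real pipeline decomposes the taxon set, runs a base method such as ASTRAL-III or RAxML per subset, and merges with TreeMerge), not a real API:

```python
# Skeleton only: decompose the taxon set into overlapping subsets, estimate a
# tree per subset with a base method, then merge the subtrees into one tree.
def divide_and_conquer_tree(taxa, decompose, estimate_subset_tree, tree_merge):
    subsets = decompose(taxa)                    # overlapping taxon subsets
    subtrees = [estimate_subset_tree(s) for s in subsets]
    return tree_merge(subtrees)                  # TreeMerge always returns a tree
```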


2017 ◽  
Vol 157 ◽  
pp. 190-205 ◽  
Author(s):  
Brojeshwar Bhowmick ◽  
Suvam Patra ◽  
Avishek Chatterjee ◽  
Venu Madhav Govindu ◽  
Subhashis Banerjee
