Feature Sensitive Mesh Reconstruction by Normal Vector Cone Filtering

Author(s):  
Ji Ma ◽  
Hsi-Yung Feng ◽  
Lihui Wang

Automatic and reliable reconstruction of sharp features remains an open research issue in triangle mesh surface reconstruction. This paper presents a new feature-sensitive mesh reconstruction method based on dependable neighborhood geometric information per input point, derived from the matching result of the local umbrella mesh constructed at each point. Unlike existing post-processing algorithms, the proposed algorithm reconstructs the triangle mesh via an integrated and progressive reconstruction process and features a unified multi-level inheritance priority queuing mechanism to prioritize the inclusion of each candidate triangle. A novel flatness-sensitive filter, referred to as the normal vector cone filter, is introduced and used to reliably reconstruct sharp features. In addition, the proposed algorithm reconstructs a watertight manifold triangle mesh that passes through the complete original point set without point addition or removal. The algorithm has been implemented and validated on publicly available point cloud data sets; compared to the original object geometry, the reconstructed triangle meshes preserve sharp features well and contain only minor shape deviations.
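
Conceptually, a normal vector cone filter accepts a candidate triangle only when its orientation is consistent with the normals of the already-reconstructed local surface; near a sharp edge the neighbor normals diverge, so triangles that would smooth the edge over are rejected. The Python sketch below only illustrates this idea; the mean-normal cone axis and the 30-degree half-angle are assumptions, not the paper's exact criterion.

```python
import numpy as np

def passes_normal_cone(candidate_normal, neighbor_normals, half_angle_deg=30.0):
    """Illustrative normal-vector-cone test: accept a candidate triangle
    only if its unit normal lies inside a cone around the mean normal of
    the neighboring reconstructed triangles."""
    axis = neighbor_normals.mean(axis=0)
    axis /= np.linalg.norm(axis)                     # cone axis = mean normal
    n = candidate_normal / np.linalg.norm(candidate_normal)
    cos_angle = float(np.clip(n @ axis, -1.0, 1.0))
    return np.degrees(np.arccos(cos_angle)) <= half_angle_deg

# A candidate crossing a sharp edge fails the test, so the edge is
# preserved rather than smoothed over.
neighbors = np.array([[0.0, 0.0, 1.0], [0.05, 0.0, 1.0]])
print(passes_normal_cone(np.array([0.0, 0.0, 1.0]), neighbors))  # True
print(passes_normal_cone(np.array([1.0, 0.0, 0.0]), neighbors))  # False
```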

Author(s):  
Michihiro Mikamo ◽  
Yoshinori Oki ◽  
Marco Visentini-Scarzanella ◽  
Hiroshi Kawasaki ◽  
Ryo Furukawa ◽  
...  

2021 ◽  
pp. 1-13
Author(s):  
Yikai Zhang ◽  
Yong Peng ◽  
Hongyu Bian ◽  
Yuan Ge ◽  
Feiwei Qin ◽  
...  

Concept factorization (CF) is an effective matrix factorization model that has been widely used in many applications. In CF, a linear combination of the data points serves as the dictionary, so CF can be performed both in the original feature space and in a reproducing kernel Hilbert space (RKHS). Conventional CF treats every dimension of the feature vector equally during data reconstruction, which conflicts with the common observation that different features have different discriminative abilities and therefore contribute differently to pattern recognition. In this paper, we introduce an auto-weighting variable into the conventional CF objective function to adaptively learn the contribution of each feature, and propose a new model termed Auto-Weighted Concept Factorization (AWCF). In AWCF, on one hand, feature importance is quantitatively measured by the auto-weighting variable, with more discriminative features assigned larger weights; on the other hand, we obtain a more efficient data representation that better captures the semantic information. A detailed optimization procedure for the AWCF objective function is derived, and its complexity and convergence are analyzed. Experiments on both synthetic and representative benchmark data sets demonstrate the effectiveness of AWCF in comparison with related models.
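
As a rough illustration of the auto-weighting idea, the sketch below alternates standard CF multiplicative updates on a feature-weighted kernel with a weight re-estimation step that favors features with small reconstruction error. The specific update scheme and the inverse-square-root weighting rule are assumptions chosen for illustration, not the paper's exact AWCF derivation.

```python
import numpy as np

def awcf_sketch(X, r, n_iter=200, eps=1e-9):
    """Toy Auto-Weighted Concept Factorization: X ~ X W V^T with a
    learned diagonal feature weight theta.  X is a (d, n) nonnegative
    data matrix (d features, n points); r is the number of concepts."""
    d, n = X.shape
    rng = np.random.default_rng(0)
    W, V = rng.random((n, r)), rng.random((n, r))
    theta = np.ones(d) / d                     # start with uniform weights

    for _ in range(n_iter):
        # Weighted kernel K = X^T diag(theta) X (nonnegative if X is).
        K = X.T @ (theta[:, None] * X)
        # Standard CF multiplicative updates, applied to the weighted kernel.
        W *= (K @ V) / (K @ W @ (V.T @ V) + eps)
        V *= (K @ W) / (V @ (W.T @ K @ W) + eps)
        # Re-estimate weights: features reconstructed with smaller error
        # receive larger weight (then normalize to sum to one).
        residual = X - X @ W @ V.T
        err = (residual * residual).sum(axis=1)
        theta = 1.0 / (2.0 * np.sqrt(err) + eps)
        theta /= theta.sum()
    return W, V, theta
```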


2003 ◽  
Vol 19 (1) ◽  
pp. 23-37 ◽  
Author(s):  
Yong-Jin Liu ◽  
Matthew Ming-Fai Yuen

Electronics ◽  
2022 ◽  
Vol 11 (2) ◽  
pp. 263
Author(s):  
Munan Yuan ◽  
Xiru Li ◽  
Longle Cheng ◽  
Xiaofeng Li ◽  
Haibo Tan

Alignment is a critical aspect of point cloud data (PCD) processing, and in this paper we propose a coarse-to-fine registration method based on bipartite graph matching. After data pre-processing, the registration proceeds as follows. First, a top-tail (TT) strategy is designed to normalize the two given PCD sets and estimate their scale factor; it combines flexibly with the coarse alignment process. Second, the 3D scale-invariant feature transform (3D SIFT) is used to extract feature points, and fast point feature histograms (FPFH) are adopted to describe them. Third, a similarity weight matrix between the source and target point sets is constructed with a bipartite graph structure, and a similarity weight threshold is used to reject erroneous bipartite-graph-matched point pairs; this determines the correspondences between the two sets and completes the coarse alignment. Finally, the trimmed iterative closest point (TrICP) algorithm is introduced to perform fine registration. Extensive experiments validate that, compared with other ICP-based algorithms and several representative coarse-to-fine alignment methods, the registration accuracy and efficiency of our method are more stable and robust across various scenes, and the method is especially applicable when scale factors are present.
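
A minimal sketch of such a coarse-to-fine pipeline is shown below using Open3D and SciPy. It substitutes voxel downsampling for the paper's 3D SIFT keypoints and plain point-to-plane ICP for TrICP; the voxel size and similarity threshold are assumed values.

```python
import numpy as np
import open3d as o3d
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def coarse_to_fine(source, target, voxel=0.05, sim_thresh=0.3):
    """FPFH descriptors -> bipartite matching on a similarity matrix
    -> coarse transform -> ICP refinement."""
    def preprocess(pcd):
        p = pcd.voxel_down_sample(voxel)
        p.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 2, max_nn=30))
        f = o3d.pipelines.registration.compute_fpfh_feature(
            p, o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 5, max_nn=100))
        return p, np.asarray(f.data).T          # (n_points, 33) descriptors

    src, f_src = preprocess(source)
    tgt, f_tgt = preprocess(target)

    # Bipartite graph matching: one-to-one pairs minimizing descriptor distance.
    cost = cdist(f_src, f_tgt)
    rows, cols = linear_sum_assignment(cost)
    sim = 1.0 / (1.0 + cost[rows, cols])        # distance -> similarity weight
    keep = sim > sim_thresh                     # reject weak (error) pairs
    corres = o3d.utility.Vector2iVector(np.stack([rows[keep], cols[keep]], 1))

    # Coarse alignment from the retained correspondences.
    est = o3d.pipelines.registration.TransformationEstimationPointToPoint()
    T_coarse = est.compute_transformation(src, tgt, corres)

    # Fine stage (the paper uses trimmed ICP; plain ICP shown here).
    result = o3d.pipelines.registration.registration_icp(
        src, tgt, voxel * 1.5, T_coarse,
        o3d.pipelines.registration.TransformationEstimationPointToPlane())
    return result.transformation
```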


2015 ◽  
Vol 2015 ◽  
pp. 1-14 ◽  
Author(s):  
Ting Yun ◽  
Weizheng Li ◽  
Yuan Sun ◽  
Lianfeng Xue

In order to retrieve the gap fraction, leaf inclination angle, and leaf area index (LAI) of a subtropical forest canopy, we acquired detailed forest information by means of hemispherical photography, terrestrial laser scanning, and the LAI-2200 plant canopy analyzer. We present a series of image processing and computer graphics algorithms, including image and point cloud data (PCD) segmentation methods for branch and leaf classification, PCD feature extraction (normal vectors and tangent planes), and a hemispherical projection method for PCD coordinate transformation. In addition, several forest mathematical models are proposed to deduce canopy indexes based on the Beer-Lambert radiation transfer model. Comparison of experimental results on many sample plots shows that the terrestrial laser scanner (TLS)-based index estimation method obtains results similar to those from digital hemispherical photographs (HP) and the LAI-2200 plant canopy analyzer taken of the same stands and used for validation. This indicates that the TLS-based algorithm can capture the variability in LAI of forest stands across a range of densities, and that TLS shows strong potential as a calibration tool for other devices.
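
The LAI retrieval in such models amounts to inverting the Beer-Lambert law: the gap fraction at view zenith angle theta satisfies P(theta) = exp(-G(theta) * LAI / cos(theta)), so LAI = -cos(theta) * ln P / G. A minimal sketch, assuming a spherical leaf-angle distribution (G = 0.5):

```python
import numpy as np

def lai_from_gap_fraction(gap_fraction, zenith_deg, G=0.5):
    """Invert Beer-Lambert: P(theta) = exp(-G * LAI / cos(theta)).
    G = 0.5 assumes a spherical leaf-angle distribution."""
    theta = np.radians(zenith_deg)
    return -np.cos(theta) * np.log(gap_fraction) / G

# e.g. a gap fraction of 0.30 measured at a 57.5-degree zenith angle
print(lai_from_gap_fraction(0.30, 57.5))   # ~1.29
```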


Author(s):  
Carlos Goncalves ◽  
Luis Assuncao ◽  
Jose C. Cunha

Data analytics applications handle large data sets subject to multiple processing phases, some of which can execute in parallel on clusters, grids, or clouds. Such applications can benefit from the MapReduce model, which only requires the end-user to define the application algorithms for input data processing and the map and reduce functions, but this requires installing and configuring specific frameworks such as Apache Hadoop or Elastic MapReduce in the Amazon Cloud. To provide more flexibility in defining and adjusting the application configurations, as well as in specifying the composition of the application phases and their orchestration, the authors describe an approach for supporting MapReduce stages as sub-workflows in the AWARD framework (Autonomic Workflow Activities Reconfigurable and Dynamic). The authors discuss how a text mining application is represented as a complex workflow with multiple phases, where individual workflow nodes support MapReduce computations. Access to intermediate data produced during the MapReduce computations is supported by a data sharing abstraction, for which two implementations are described: one based on a shared tuple space and another based on an in-memory distributed key/value store. The authors describe the implementation of the framework, a set of developed tools, and their experimentation with the execution of the text mining algorithm over multiple Amazon EC2 (Elastic Compute Cloud) instances, and report speed-up and size-up results obtained with up to 20 EC2 instances and corpus sizes of up to 97 million words.
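
For readers unfamiliar with the model, the end-user-supplied pieces have roughly the following shape. This word-count-style sketch is purely illustrative and omits all AWARD-specific sub-workflow wiring and data-sharing abstractions.

```python
from collections import defaultdict

def map_phase(document: str):
    """Emit (word, 1) pairs for one input document."""
    for word in document.lower().split():
        yield word, 1

def reduce_phase(pairs):
    """Sum the counts per word across all map outputs."""
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

docs = ["the quick brown fox", "the lazy dog"]
pairs = [kv for d in docs for kv in map_phase(d)]
print(reduce_phase(pairs))   # {'the': 2, 'quick': 1, ...}
```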


2019 ◽  
Vol 40 (2) ◽  
pp. 249-256
Author(s):  
Yaxin Peng ◽  
Naiwu Wen ◽  
Chaomin Shen ◽  
Xiaohuang Zhu ◽  
Shihui Ying

Purpose: Partial alignment of 3D point sets is a challenging problem in laser calibration and robot calibration due to the imbalance of the data sets, especially when their overlap is low. Geometric features can improve alignment accuracy, but the corresponding feature extraction methods are time consuming. The purpose of this paper is to provide a framework for partial alignment via an adaptive trimmed strategy.
Design/methodology/approach: First, the authors propose an adaptive trimmed strategy based on point feature histograms (PFH) coding. Second, they obtain an initial transformation from this partition, which improves the accuracy of the normal-direction-weighted trimmed iterative closest point (ICP) method. Third, they provide a series of GPU parallel implementations for time efficiency.
Findings: The initial partition based on PFH features improves the accuracy of partial registration significantly, and the parallel GPU algorithms accelerate the alignment process.
Research limitations/implications: The study so far applies to rigid transformations; it could be extended to non-rigid transformations.
Practical implications: In practice, point set alignment for calibration is widely used in aircraft assembly, industrial inspection, simultaneous localization and mapping, and surgical navigation.
Social implications: Point set calibration is a building block in the field of intelligent manufacturing.
Originality/value: The contributions are threefold: first, a novel coarse alignment serving as an initial calibration based on PFH descriptor similarity, which can be viewed as a coarse trimmed process partitioning the data into a near-overlap part and the rest; second, a GPU-parallel reduction of the computation time for acquiring the feature descriptors; and finally, a weighted trimmed ICP method to refine the transformation. The trimmed ICP core of the fine stage is sketched below.
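
For reference, the trimmed ICP refinement can be sketched as follows: plain closest-point trimming with a fixed overlap ratio. The paper's PFH-based partition, normal-direction weighting, and GPU parallelization are omitted.

```python
import numpy as np
from scipy.spatial import cKDTree

def trimmed_icp(src, tgt, trim=0.6, n_iter=50):
    """Each iteration keeps only the `trim` fraction of closest-point
    pairs (the assumed overlap) before solving for the rigid transform
    with the SVD-based Kabsch method.  src, tgt: (n, 3) arrays."""
    T = np.eye(4)
    pts = src.copy()
    tree = cKDTree(tgt)
    for _ in range(n_iter):
        dist, idx = tree.query(pts)
        keep = np.argsort(dist)[: int(trim * len(pts))]   # trimming step
        p, q = pts[keep], tgt[idx[keep]]
        mp, mq = p.mean(axis=0), q.mean(axis=0)
        U, _, Vt = np.linalg.svd((p - mp).T @ (q - mq))
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                # guard against reflection
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mq - R @ mp
        pts = pts @ R.T + t
        step = np.eye(4)
        step[:3, :3], step[:3, 3] = R, t
        T = step @ T
    return T
```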


2019 ◽  
Vol 9 (16) ◽  
pp. 3273 ◽  
Author(s):  
Wen-Chung Chang ◽  
Van-Toan Pham

This paper develops a registration architecture for estimating the relative pose, including the rotation and translation, of an object with respect to a model in 3-D space, based on 3-D point clouds captured by a 3-D camera. In particular, it addresses the time-consuming nature of 3-D point cloud registration, which is critical for closed-loop industrial automated assembly systems that demand accurate pose estimation within a fixed time budget. First, two different descriptors are developed to extract coarse and detailed features of the point cloud data sets and to create training data sets covering diversified orientations. Second, to guarantee fast pose estimation in fixed time, a novel registration architecture employing two consecutive convolutional neural network (CNN) models is proposed. After training, the proposed CNN architecture estimates the rotation between the model point cloud and a data point cloud, followed by translation estimation based on computed average values. Because the second CNN model covers a smaller range of orientation uncertainty than the full range covered by the first, it can precisely estimate the orientation of the 3-D point cloud. Finally, the performance of the proposed algorithm has been validated by experiments in comparison with baseline methods; the results show that it significantly reduces the estimation time while maintaining high precision.
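
Schematically, the cascade is a coarse estimator plus a refinement estimator trained on the residual orientation range. The PyTorch toy below only conveys that structure; the descriptor size, network shape, and the way the second stage consumes the coarse result are all placeholders, not the paper's architecture.

```python
import torch
import torch.nn as nn

class RotationCNN(nn.Module):
    """Toy stand-in for one stage of the cascade: maps a fixed-size
    point-cloud descriptor to a rotation estimate (Euler angles)."""
    def __init__(self, in_dim=1024, out_dim=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, out_dim))

    def forward(self, x):
        return self.net(x)

coarse = RotationCNN()   # trained over the full orientation range
fine = RotationCNN()     # trained over the smaller residual range

feat = torch.randn(1, 1024)          # descriptor of a captured point cloud
rotation = coarse(feat) + fine(feat) # coarse estimate plus refinement
```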


2020 ◽  
Vol 2020 ◽  
pp. 1-14
Author(s):  
Y. Zhang ◽  
B. P. Wang ◽  
Y. Fang ◽  
Z. X. Song

Existing sparse-imaging observation error estimation methods usually estimate the error of each observation position by substituting the error parameters into the iterative reconstruction process, which incurs a huge computational cost. In this paper, by analysing the relationship between the imaging results of single-observation sampling data and the error parameters, a SAR observation error estimation method based on maximum relative projection matching is proposed. First, the method estimates the precise position parameters of the reference position by sparse reconstruction with joint error parameters. Second, a relative error estimation model is constructed based on the maximum correlation of the base-space projection. Finally, the accurate error parameters are estimated by the Broyden–Fletcher–Goldfarb–Shanno (BFGS) method. Simulations and measured data from microwave anechoic chambers show that, compared to existing methods, the proposed method achieves higher estimation accuracy, lower noise sensitivity, and higher computational efficiency.
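
The final BFGS step can be illustrated with SciPy on a toy signal model: a polynomial phase error is recovered by minimizing the negative correlation between the compensated data and an error-free reference. The signal model below is a placeholder, not the paper's SAR geometry.

```python
import numpy as np
from scipy.optimize import minimize

t = np.linspace(0.0, 1.0, 256)
reference = np.exp(2j * np.pi * 40 * t)     # error-free reference signal
basis = np.stack([t, t ** 2])               # linear + quadratic phase terms
true_err = np.array([0.8, -0.5])            # unknown phase-error parameters
data = reference * np.exp(2j * np.pi * (true_err @ basis))

def neg_correlation(p):
    """Negative correlation between compensated data and the reference;
    BFGS minimizes this, i.e. maximizes the projection matching."""
    compensated = data * np.exp(-2j * np.pi * (p @ basis))
    return -np.abs(np.vdot(reference, compensated)) / len(t)

res = minimize(neg_correlation, x0=np.zeros(2), method="BFGS")
print(res.x)   # recovers approximately [0.8, -0.5]
```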

