HM-ICP: Fast 3-D Registration Algorithm with Hierarchical and Region Selection Approach of M-ICP

2006 ◽ Vol 18 (6) ◽ pp. 765-771
Author(s): Haruhisa Okuda ◽ Yasuo Kitaaki ◽ Manabu Hashimoto ◽ Shun’ichi Kaneko ◽ ...

This paper presents a novel fast and highly accurate 3-D registration algorithm. The ICP (Iterative Closest Point) algorithm aligns two 3-D data sets by matching every data point to its best-matching counterpart and minimizing an evaluation value over all points. The algorithm is in widespread use because it works well for many applications, but it incurs a heavy computational cost and is very sensitive to error, since it uses all the data points of both sets and least-mean-square optimization. We previously proposed the M-ICP algorithm, which uses M-estimation to make the original ICP robust against outlying gross noise. In this paper, we propose a novel algorithm called HM-ICP (Hierarchical M-ICP), an extension of M-ICP that selects regions for matching and searches the selected regions hierarchically. The method selects regions by evaluating the variance of distance values in the target region and by homogeneous topological mapping. Fundamental experiments using real 3-D measurement data sets demonstrate the effectiveness of the proposed method, which reduces the computational cost by a factor of more than ten thousand. We also confirmed an error of less than 0.1% of the measurement distance.
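
The robust-weighting idea that distinguishes M-ICP from plain ICP can be illustrated with a short sketch. The following Python is a minimal, assumed implementation of one ICP iteration with Huber M-estimation weights; the hierarchical region selection of HM-ICP is not reproduced, and the function names are ours, not the authors'.

```python
# Sketch of one ICP iteration with Huber M-estimation weights (assumed names).
import numpy as np
from scipy.spatial import cKDTree

def huber_weights(r, k=1.0):
    """Huber weights: 1 for small residuals, k/|r| for outlying ones."""
    w = np.ones_like(r)
    mask = r > k
    w[mask] = k / r[mask]
    return w

def robust_icp_step(source, target, k=1.0):
    """Match closest points, down-weight outliers, solve for rotation R and translation t."""
    dist, idx = cKDTree(target).query(source)         # closest-point correspondences
    w = huber_weights(dist, k)                         # M-estimation weights
    matched = target[idx]
    mu_s = np.average(source, axis=0, weights=w)
    mu_t = np.average(matched, axis=0, weights=w)
    H = ((source - mu_s) * w[:, None]).T @ (matched - mu_t)
    U, _, Vt = np.linalg.svd(H)                        # weighted Kabsch solution
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_t - R @ mu_s
    return R, t                                        # apply as: source @ R.T + t
```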

2018 ◽ Vol 11 (2) ◽ pp. 53-67
Author(s): Ajay Kumar ◽ Shishir Kumar

Several initial center selection algorithms have been proposed in the literature for numerical data, but since the values of categorical data are unordered, these methods are not applicable to categorical data sets. This article investigates the initial center selection process for categorical data and then presents a new support-based initial center selection algorithm. The proposed algorithm measures the weight of the unique data points of an attribute with the help of support and then integrates these weights along the rows to obtain the support of every row. A data object having the largest support is chosen as the initial center, followed by selecting the remaining centers as those at the greatest distance from the initially selected center. The quality of the proposed algorithm is compared with the random initial center selection method, Cao's method, Wu's method, and the method introduced by Khan and Ahmad. Experimental analysis on real data sets shows the effectiveness of the proposed algorithm.
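
As a rough sketch of the procedure described above (under our reading of the abstract, with Hamming distance assumed as the dissimilarity between categorical rows and all names being illustrative):

```python
# Support-based seeding sketch: sum attribute-value frequencies per row,
# take the row with the largest support first, then the rows farthest
# (in Hamming distance) from the centers chosen so far.
import numpy as np

def support_based_centers(X, k):
    n, m = X.shape
    row_support = np.zeros(n)
    for j in range(m):
        vals, counts = np.unique(X[:, j], return_counts=True)
        freq = dict(zip(vals, counts / n))
        row_support += np.array([freq[v] for v in X[:, j]])
    centers = [int(np.argmax(row_support))]
    while len(centers) < k:
        d = np.min([np.sum(X != X[c], axis=1) for c in centers], axis=0)
        centers.append(int(np.argmax(d)))
    return X[centers]

X = np.array([["a", "x"], ["a", "x"], ["b", "y"], ["c", "z"]])
print(support_based_centers(X, 2))
```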


2018 ◽ Vol 8 (2) ◽ pp. 377-406
Author(s): Almog Lahav ◽ Ronen Talmon ◽ Yuval Kluger

A fundamental question in data analysis, machine learning and signal processing is how to compare data points. The choice of distance metric is particularly challenging for high-dimensional data sets, where the problem of meaningfulness is more prominent (e.g., the Euclidean distance between images). In this paper, we propose to exploit a property of high-dimensional data that is usually ignored: the structure stemming from the relationships between the coordinates. Specifically, we show that organizing similar coordinates into clusters can be exploited for the construction of the Mahalanobis distance between samples. When the observable samples are generated by a nonlinear transformation of hidden variables, the Mahalanobis distance allows the recovery of the Euclidean distances in the hidden space. We illustrate the advantage of our approach on a synthetic example, where the discovery of clusters of correlated coordinates improves the estimation of the principal directions of the samples. We also applied our method to real gene-expression data for lung adenocarcinomas (lung cancer): using the proposed metric, we found a partition of subjects into risk groups with good separation between their Kaplan–Meier survival plots.
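
The distance the paper builds toward can be written compactly. The sketch below only shows a generic Mahalanobis distance computed from an estimated (pseudo-)inverse covariance, not the authors' cluster-informed construction of that covariance.

```python
# Generic Mahalanobis distance between samples (illustrative only).
import numpy as np

def mahalanobis(x, y, cov_inv):
    """d_M(x, y) = sqrt((x - y)^T C^{-1} (x - y))."""
    diff = x - y
    return float(np.sqrt(diff @ cov_inv @ diff))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))                        # 200 samples, 10 coordinates
cov_inv = np.linalg.pinv(np.cov(X, rowvar=False))     # pseudo-inverse sample covariance
print(mahalanobis(X[0], X[1], cov_inv))
```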


Geophysics ◽ 2020 ◽ Vol 85 (2) ◽ pp. V223-V232
Author(s): Zhicheng Geng ◽ Xinming Wu ◽ Sergey Fomel ◽ Yangkang Chen

The seislet transform uses the wavelet-lifting scheme and local slopes to analyze seismic data. In its definition, the design of prediction operators specifically for seismic images and data is an important issue. We have developed a new formulation of the seislet transform based on the relative-time (RT) attribute. This method uses the RT volume to construct multiscale prediction operators. With the new prediction operators, the seislet transform is accelerated because distant traces can be predicted directly. We apply our method to synthetic and real data to demonstrate that the new approach reduces computational cost and obtains excellent sparse representation on test data sets.
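
A toy illustration of why an RT volume allows direct prediction of distant traces (our simplification, not the authors' operator; it assumes the RT values increase monotonically along each trace):

```python
# Predict a target trace from a distant reference trace by matching
# relative-time (RT) values instead of chaining trace-to-trace predictions.
import numpy as np

def predict_trace(ref_trace, rt_ref, rt_target, t_axis):
    t_on_ref = np.interp(rt_target, rt_ref, t_axis)   # time on reference with same RT
    return np.interp(t_on_ref, t_axis, ref_trace)     # resample reference amplitudes
```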


Author(s): Carlos A. P. Bengaly ◽ Uendert Andrade ◽ Jailson S. Alcaniz

We address the $\simeq 4.4\sigma$ tension between local and CMB measurements of the Hubble constant using simulated Type Ia supernova (SN) data sets. We probe its directional dependence by means of a hemispherical comparison across the entire celestial sphere as an estimator of the $H_0$ cosmic variance. We perform Monte Carlo simulations assuming both isotropic and non-uniform distributions of data points, the latter coinciding with the real data. This allows us to incorporate observational features, such as sample incompleteness, in our estimation. We find that this tension can be alleviated to $3.4\sigma$ for isotropic realizations and to $2.7\sigma$ for non-uniform ones. We also find that the $H_0$ variance is largely reduced if the data sets are enlarged to 4 and 10 times the current size. Future surveys will be able to tell whether the Hubble constant tension arises from unaccounted-for cosmic variance or is an actual indication of physics beyond the standard cosmological model.
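
A schematic Monte Carlo sketch of a hemispherical comparison follows; it uses mock data and illustrative names only, and does not reproduce the paper's SN simulation details or incompleteness modelling.

```python
# Split mock data points into hemispheres along many random axes and record
# the difference of the mean H0 estimates as a crude directional-variance proxy.
import numpy as np

rng = np.random.default_rng(1)
n_sn = 1000
pos = rng.normal(size=(n_sn, 3))
pos /= np.linalg.norm(pos, axis=1, keepdims=True)     # unit vectors on the sky
h0 = rng.normal(70.0, 2.0, size=n_sn)                 # mock per-object H0 values

deltas = []
for _ in range(500):
    axis = rng.normal(size=3)
    axis /= np.linalg.norm(axis)
    north = pos @ axis > 0
    deltas.append(h0[north].mean() - h0[~north].mean())
print(np.std(deltas))                                 # spread over hemispheres
```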


Geophysics ◽ 2016 ◽ Vol 81 (6) ◽ pp. D625-D641
Author(s): Dario Grana

The estimation of rock and fluid properties from seismic attributes is an inverse problem. Rock-physics modeling provides physical relations that link elastic and petrophysical variables. Most of these models are nonlinear; therefore, the inversion generally requires complex iterative optimization algorithms to estimate the reservoir model of petrophysical properties. We have developed a new approach based on the linearization of the rock-physics forward model using first-order Taylor series approximations. The mathematical method adopted for the inversion is the Bayesian approach previously applied successfully to linearized amplitude variation with offset (AVO) inversion. We developed the analytical formulation of the linearized rock-physics relations for three different models, empirical, granular-media, and inclusion models, and we derived the formulation of the Bayesian rock-physics inversion under Gaussian assumptions for the prior distribution of the model. The application of the inversion to real data sets delivers accurate results. The main advantage of this method is its small computational cost, due to the analytical solution afforded by the linearization and the Bayesian Gaussian approach.
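
The closed-form machinery that a linearized Gaussian inversion of this kind relies on is the standard linear-Gaussian Bayesian update; a minimal sketch follows, where G stands for the Jacobian of the linearized forward model (the specific empirical, granular-media and inclusion relations are not reproduced).

```python
# Posterior mean and covariance for d = G m + noise with Gaussian prior and noise.
import numpy as np

def bayesian_linear_update(G, d, m_prior, C_m, C_d):
    S = G @ C_m @ G.T + C_d                   # data-space covariance
    K = C_m @ G.T @ np.linalg.inv(S)          # gain matrix
    m_post = m_prior + K @ (d - G @ m_prior)  # posterior mean
    C_post = C_m - K @ G @ C_m                # posterior covariance
    return m_post, C_post
```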


2020 ◽ Vol 34 (04) ◽ pp. 3211-3218
Author(s): Liang Bai ◽ Jiye Liang

Due to the complex structure of real-world data, nonlinearly separable clustering is one of the most popular and widely studied clustering problems. Currently, various types of algorithms, such as kernel k-means, spectral clustering, and density clustering, have been developed to solve this problem. However, it is difficult for them to balance the efficiency and effectiveness of clustering, which limits their practical application. To overcome this deficiency, we propose a three-level optimization model for nonlinearly separable clustering that divides the clustering problem into three sub-problems: a linearly separable clustering on the object set, a nonlinearly separable clustering on the cluster set, and an ensemble clustering on the partition set. An iterative algorithm is proposed to solve the optimization problem. The proposed algorithm can recognize nonlinearly separable clusters effectively at low computational cost. Its performance has been studied on synthetic and real data sets, and comparisons with other nonlinearly separable clustering algorithms illustrate its efficiency and effectiveness.
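
A rough two-stage sketch in the spirit of the decomposition described above (the ensemble level is omitted, and the particular algorithms chosen here, k-means followed by single-linkage clustering of the centers, are our own illustrative stand-ins, not the authors' model):

```python
# Level 1: cheap linear clustering over the objects (over-segmentation);
# Level 2: nonlinear clustering over the much smaller set of cluster centers.
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.datasets import make_moons

X, _ = make_moons(n_samples=2000, noise=0.05, random_state=0)

km = KMeans(n_clusters=50, n_init=10, random_state=0).fit(X)
agg = AgglomerativeClustering(n_clusters=2, linkage="single")
center_labels = agg.fit_predict(km.cluster_centers_)

labels = center_labels[km.labels_]            # map objects to final clusters
```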


2012 ◽ Vol 8 (4) ◽ pp. 82-107
Author(s): Renxia Wan ◽ Yuelin Gao ◽ Caixia Li

Several algorithms for clustering large data sets have been proposed to date. Most of them are crisp approaches, which are not well suited to the fuzzy case. In this paper, the authors explore a single-pass approach to fuzzy possibilistic clustering over large data sets. The basic idea of the proposed approach (weighted fuzzy-possibilistic c-means, WFPCM) is to use a modified possibilistic c-means (PCM) algorithm to cluster the weighted data points and centroids with one data segment as a unit. Experimental results on both synthetic and real data sets show that WFPCM saves significant memory compared with the fuzzy c-means (FCM) and possibilistic c-means (PCM) algorithms. Furthermore, the proposed algorithm exhibits excellent immunity to noise, avoids splitting or merging exact clusters into inaccurate ones, and preserves the integrity and purity of the natural classes.
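
To show how per-point weights enter such an update, here is a bare-bones sketch of one weighted fuzzy c-means style step; the possibilistic typicalities and the single-pass segment handling of WFPCM are not reproduced, and the names are ours.

```python
# One membership/center update where each point carries a weight w (e.g. the
# number of raw points a segment representative stands for).
import numpy as np

def weighted_fcm_step(X, w, V, m=2.0, eps=1e-12):
    d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(-1) + eps  # squared distances (n, c)
    U = 1.0 / (d2 ** (1.0 / (m - 1)))
    U /= U.sum(axis=1, keepdims=True)                          # fuzzy memberships
    num = (w[:, None] * U ** m).T @ X                          # weighted center numerator
    den = (w[:, None] * U ** m).sum(axis=0)[:, None]
    return U, num / den                                        # memberships, new centers
```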


2019 ◽ Vol 8 (2) ◽ pp. 159
Author(s): Morteza Marzjarani

Heteroscedasticity plays an important role in data analysis. In this article, this issue, along with a few different approaches for handling it, is presented. First, iteratively reweighted least squares (IRLS) and iterative feasible generalized least squares (IFGLS) are deployed and proper weights for reducing heteroscedasticity are determined. Next, a new approach for handling heteroscedasticity is introduced. In this approach, a multiple linear regression (MLR) model or a general linear model (GLM) is fitted to a sufficiently large data set, and the data are divided into two parts by inspecting the residuals, based on the results of testing for heteroscedasticity or via simulations. The first part contains the records where the absolute values of the residuals can be assumed small enough that heteroscedasticity is ignorable; under this assumption, the error variances are small, close to those of their neighboring points, and can be treated as known (but not necessarily equal). The remaining portion of the data is categorized as heteroscedastic. On real data sets, it is shown that this approach reduces the number of unusual (e.g., influential) data points suggested for further inspection and, more importantly, lowers the root mean square error (RMSE), resulting in a more robust set of parameter estimates.
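
A compact sketch of an iterative feasible GLS loop of the kind referred to above; the variance model here (regressing the log squared residuals on the design matrix) is a common illustrative choice, not the author's specification.

```python
# OLS start, then repeatedly: model the error variance from squared residuals
# and refit by weighted least squares with weights = 1 / estimated variance.
import numpy as np

def ifgls(X, y, n_iter=5):
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(n_iter):
        resid = y - X @ beta
        gamma = np.linalg.lstsq(X, np.log(resid ** 2 + 1e-12), rcond=None)[0]
        w = 1.0 / np.exp(X @ gamma)
        W = np.diag(w)
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return beta, w
```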


1997 ◽ Vol 9 (8) ◽ pp. 1805-1842
Author(s): Marcelo Blatt ◽ Shai Wiseman ◽ Eytan Domany

We present a new approach to clustering, based on the physical properties of an inhomogeneous ferromagnet. No assumption is made regarding the underlying distribution of the data. We assign a Potts spin to each data point and introduce an interaction between neighboring points whose strength is a decreasing function of the distance between the neighbors. This magnetic system exhibits three phases. At very low temperatures, it is completely ordered: all spins are aligned. At very high temperatures, the system does not exhibit any ordering. In an intermediate regime, clusters of relatively strongly coupled spins become ordered, whereas different clusters remain uncorrelated. This intermediate phase is identified by a jump in the order parameters. The spin–spin correlation function is used to partition the spins, and hence the corresponding data points, into clusters. We demonstrate how the method works on three synthetic and three real data sets. Detailed comparison with the performance of other techniques clearly indicates the relative success of our method.
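
A small sketch of the ingredient that defines the magnetic system, namely distance-dependent couplings between neighboring points; the Monte Carlo simulation of the Potts model and the spin–spin correlation step are not shown, and the Gaussian coupling form and normalization are our simplification of the description above.

```python
# Build k-nearest-neighbor couplings that decay with distance.
import numpy as np
from scipy.spatial import cKDTree

def neighbor_couplings(X, k=10):
    dist, idx = cKDTree(X).query(X, k=k + 1)      # self is the first neighbor
    a = dist[:, 1:].mean()                        # characteristic neighbor distance
    J = np.exp(-dist[:, 1:] ** 2 / (2 * a ** 2))  # interaction strengths
    return idx[:, 1:], J
```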


Geophysics ◽ 2010 ◽ Vol 75 (6) ◽ pp. WB113-WB120
Author(s): Sheng Xu ◽ Yu Zhang ◽ Gilles Lambaré

Wide-azimuth seismic data sets are generally acquired more sparsely than narrow-azimuth ones. This brings new challenges to seismic data regularization algorithms, which aim to reconstruct seismic data on regularly sampled acquisition geometries from data recorded on irregularly sampled geometries. The Fourier-based seismic data regularization algorithm first estimates the spatial frequency content on the irregularly sampled input grid and then reconstructs the seismic data on any desired grid. Three main difficulties arise in this process: the "spectral leakage" problem, the accurate estimation of Fourier components, and the need for an effective antialiasing scheme inside the algorithm. The antileakage Fourier transform algorithm can overcome the spectral leakage problem and handles aliased data. To generalize it to higher dimensions, we propose an area weighting scheme to accurately estimate the Fourier components. However, the computational cost increases dramatically with the number of sampling dimensions. A windowed Fourier transform reduces the computational cost in high-dimensional applications but causes undersampling in the wavenumber domain and introduces artifacts known as the Gibbs phenomenon. As a solution, we propose a wavenumber-domain oversampling inversion scheme. The robustness and effectiveness of the proposed algorithm are demonstrated with applications to both synthetic and real data examples.
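
A 1-D toy sketch of the antileakage iteration, under our simplification: the strongest Fourier component is estimated from the irregular samples, its contribution is subtracted, and the process repeats. The area weighting and wavenumber-domain oversampling proposed above are not included.

```python
# Antileakage-style estimation of Fourier coefficients from irregular samples.
import numpy as np

def alft_1d(x, d, k_grid, n_iter=50):
    """x: irregular positions, d: data values, k_grid: candidate wavenumbers."""
    residual = d.astype(complex)
    coeffs = np.zeros(len(k_grid), dtype=complex)
    F = np.exp(-2j * np.pi * np.outer(k_grid, x))          # analysis kernels
    for _ in range(n_iter):
        est = F @ residual / len(x)                        # coefficient estimates
        j = int(np.argmax(np.abs(est)))                    # strongest component
        coeffs[j] += est[j]
        residual = residual - est[j] * np.exp(2j * np.pi * k_grid[j] * x)
    return coeffs
```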

