Outlier removal method for the refinement of optical measured displacement field based on critical factor least squares and subdomain division

Author(s):  
Bo Wang ◽  
Chen Sun ◽  
Keming Zhang ◽  
Jubing Chen

Abstract As a representative type of outlier, abnormal data points in displacement measurement inevitably occur in full-field optical metrology and significantly affect further evaluation, especially when the strain field is calculated by differencing the displacement. In this study, an outlier removal method is proposed that recognizes and removes abnormal data in the optically measured displacement field. An iterative critical factor least squares (CFLS) algorithm is developed that evaluates the distance between the data points and the least-squares plane to identify the outliers. A successive boundary point algorithm is proposed to divide the measurement domain, improving the applicability and effectiveness of the CFLS algorithm. The feasibility and precision of the proposed method are discussed in detail through simulations and experiments. Results show that the outliers are reliably recognized and the precision of the strain estimation is greatly improved by these methods.
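The paper's exact critical-factor criterion and subdomain-division rules are not given in the abstract, but the core idea, iteratively fitting a least-squares plane to a displacement subdomain and rejecting points that lie too far from it, can be sketched as follows. The threshold factor k and the convergence test are assumptions for illustration.

```python
import numpy as np

def remove_outliers_plane(x, y, u, k=3.0, max_iter=20):
    """Iteratively flag displacement values u(x, y) that lie too far
    from a least-squares plane fitted to the current inliers.

    k is an assumed critical factor (the paper derives its own):
    points with |residual| > k * std(inlier residuals) are outliers.
    Returns a boolean inlier mask over the input points.
    """
    inlier = np.ones(u.shape, dtype=bool)
    for _ in range(max_iter):
        # Fit the plane u = a*x + b*y + c to the current inliers.
        A = np.column_stack([x[inlier], y[inlier], np.ones(inlier.sum())])
        coef, *_ = np.linalg.lstsq(A, u[inlier], rcond=None)
        resid = u - (coef[0] * x + coef[1] * y + coef[2])
        sigma = resid[inlier].std()
        new_inlier = np.abs(resid) <= k * sigma
        if np.array_equal(new_inlier, inlier):   # converged
            break
        inlier = new_inlier
    return inlier
```

In practice this would run per subdomain produced by the boundary-point division, since a single plane cannot follow a strongly curved full-field displacement.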

2020 ◽  
Vol 17 (1) ◽  
pp. 87-94
Author(s):  
Ibrahim A. Naguib ◽  
Fatma F. Abdallah ◽  
Aml A. Emam ◽  
Eglal A. Abdelaleem

Background: Quantitative determination of pyridostigmine bromide in the presence of its two related substances, impurity A and impurity B, was considered as a case study for the comparison. Introduction: Novel manipulations of the well-known classical least squares multivariate calibration model are explained in detail in this comparative analytical study. In addition to the plain classical least squares model, two preprocessing steps were tried: prior to modeling with classical least squares, first derivatization and orthogonal projection to latent structures were applied, producing two novel manipulations of the classical least squares-based model. Moreover, the spectral residual augmented classical least squares model is included in the present comparative study. Methods: A 3-factor, 4-level design was implemented to construct a training set of 16 mixtures with different concentrations of the studied components. To investigate the predictive ability of the studied models, a test set of 9 mixtures was constructed. Results: The key performance indicator of this comparative study was the root mean square error of prediction for the independent test set mixtures: 1.367 for classical least squares with no preprocessing, 1.352 with first-derivative data, 0.2100 with orthogonal projection to latent structures preprocessing, and 0.2747 for spectral residual augmented classical least squares. Conclusion: Coupling the classical least squares model with orthogonal projection to latent structures preprocessing significantly improved its predictive ability.
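For readers unfamiliar with the baseline model being manipulated, classical least squares (CLS) calibration assumes Beer's law in matrix form, A = C K, estimates the pure-component spectra K from training mixtures, and then inverts that relation to predict concentrations. The sketch below shows plain CLS plus a finite-difference first derivative as one of the preprocessing variants; the orthogonal-projection and spectral-residual-augmented variants are not reproduced here, and all names are illustrative.

```python
import numpy as np

def cls_fit(A_train, C_train):
    # Beer's law: A = C @ K. Estimate pure-component spectra K
    # from training spectra (rows) and known concentrations.
    K, *_ = np.linalg.lstsq(C_train, A_train, rcond=None)
    return K

def cls_predict(A_test, K):
    # Invert A = C @ K for each test spectrum.
    C, *_ = np.linalg.lstsq(K.T, A_test.T, rcond=None)
    return C.T

def first_derivative(A):
    # Finite-difference first derivative along the wavelength axis,
    # applied to both training and test spectra before cls_fit.
    return np.diff(A, axis=1)

def rmsep(C_pred, C_true):
    # Root mean square error of prediction, the study's key indicator.
    return np.sqrt(np.mean((C_pred - C_true) ** 2))
```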


2020 ◽  
pp. 000370282097751
Author(s):  
Xin Wang ◽  
Xia Chen

Many spectra have a polynomial-like baseline. Iterative polynomial fitting (IPF) is one of the most popular methods for baseline correction of these spectra. However, the baseline estimated by IPF may have substantial error when the spectrum contains very strong peaks or has strong peaks located at its endpoints. First, IPF uses a temporary baseline estimated from the current spectrum to identify peak data points. If the current spectrum contains strong peaks, the temporary baseline deviates substantially from the true baseline; good baseline data points may then be mistakenly identified as peak data points and artificially re-assigned a low value. Second, if a strong peak is located at an endpoint of the spectrum, the endpoint region of the estimated baseline may have significant error due to overfitting. This study proposes a search-algorithm-based baseline correction method (SA) that compresses the raw spectrum to a dataset with a small number of data points and then converts the peak removal process into a search problem, in the artificial intelligence (AI) sense, of minimizing an objective function by deleting peak data points. First, the raw spectrum is smoothed by the moving-average method to reduce noise and then divided into dozens of unequally spaced sections on the basis of Chebyshev nodes. Finally, the minimum point of each section is collected to form a dataset for peak removal through the search algorithm. SA uses the mean absolute error (MAE) as the objective function because of its sensitivity to overfitting and its rapid calculation. The baseline correction performance of SA is compared with those of three baseline correction methods: the Lieber and Mahadevan-Jansen method, the adaptive iteratively reweighted penalized least squares method, and the improved asymmetric least squares method. Simulated and real FTIR and Raman spectra with polynomial-like baselines are employed in the experiments. Results show that for these spectra, the baseline estimated by SA has smaller error than those estimated by the three other methods.
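The compression step is the part that is easy to make concrete: smooth the spectrum, cut it into unequal sections whose boundaries cluster near the endpoints (Chebyshev nodes), and keep each section's minimum as a candidate baseline point. The sketch below implements only that step; the subsequent search that deletes candidates to minimize the MAE of a polynomial fit, and all parameter values, are assumptions for illustration.

```python
import numpy as np

def chebyshev_edges(n_points, n_sections):
    # Section boundaries derived from Chebyshev nodes: denser near
    # the endpoints, which helps anchor the baseline there.
    k = np.arange(n_sections + 1)
    nodes = np.cos(np.pi * k / n_sections)            # in [-1, 1]
    edges = ((1 - nodes) / 2 * (n_points - 1)).astype(int)
    return np.unique(edges)                           # sorted, distinct

def compress_spectrum(y, n_sections=40, window=5):
    # Moving-average smoothing, then keep each section's minimum
    # as a candidate baseline point for the later search step.
    y_s = np.convolve(y, np.ones(window) / window, mode="same")
    edges = chebyshev_edges(len(y), n_sections)
    idx = np.array([lo + np.argmin(y_s[lo:hi])
                    for lo, hi in zip(edges[:-1], edges[1:]) if hi > lo])
    return idx, y_s[idx]   # candidate baseline points (index, value)
```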


Author(s):  
Deepika Saini ◽  
Sanoj Kumar ◽  
Manoj K. Singh ◽  
Musrrat Ali

Abstract The key aim of the presented work is to investigate the performance of the Generalized Ant Colony Optimizer (GACO) model in evolving the shape of a three-dimensional free-form Non-Uniform Rational B-Spline (NURBS) curve from stereo (two) views. The GACO model is a blend of two well-known meta-heuristic optimization algorithms, the Simple Ant Colony and Global Ant Colony Optimization algorithms. The work addresses the solution of the NURBS-fitting-based reconstruction process: the GACO model is used to optimize the NURBS parameters (control points and weights) by minimizing the weighted least-squares error between the data points and the fitted NURBS curve. The algorithm starts from pre-fixed initial values of the NURBS parameters. The experiments clearly show that the optimization procedure is the better option when good initial parameter locations are selected. A detailed experimental analysis is given in support of the algorithm, and the error analysis shows that the proposed methodology performs better than conventional methods.
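The objective being minimized can be made explicit. A minimal sketch, assuming a cubic curve, a precomputed knot vector, and data-point parameter values t_data from some parametrization; the GACO optimizer itself is not reproduced, only the least-squares error it would evaluate for each candidate parameter vector.

```python
import numpy as np

def bspline_basis(i, p, t, knots):
    # Cox-de Boor recursion for the B-spline basis function N_{i,p}(t).
    if p == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    val = 0.0
    d1 = knots[i + p] - knots[i]
    d2 = knots[i + p + 1] - knots[i + 1]
    if d1 > 0:
        val += (t - knots[i]) / d1 * bspline_basis(i, p - 1, t, knots)
    if d2 > 0:
        val += (knots[i + p + 1] - t) / d2 * bspline_basis(i + 1, p - 1, t, knots)
    return val

def nurbs_point(t, ctrl, w, knots, p=3):
    # Rational combination of control points ctrl (n x 3) with weights w.
    N = np.array([bspline_basis(i, p, t, knots) for i in range(len(ctrl))])
    denom = (N * w).sum()
    return (N * w) @ ctrl / denom if denom > 0 else ctrl[-1]

def fitting_error(params, data, t_data, n_ctrl, knots, p=3):
    # Least-squares objective a metaheuristic such as GACO would
    # minimize over the packed control points and weights.
    ctrl = params[:3 * n_ctrl].reshape(n_ctrl, 3)
    w = params[3 * n_ctrl:]
    err = 0.0
    for q, t in zip(data, t_data):
        r = q - nurbs_point(t, ctrl, w, knots, p)
        err += r @ r
    return err
```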


2013 ◽  
Vol 694-697 ◽  
pp. 2545-2549 ◽  
Author(s):  
Qian Wen Cheng ◽  
Lu Ben Zhang ◽  
Hong Hua Chen

How to obtain the normal height from the geodetic height H measured by GPS is a key problem studied by many scholars in the field of surveying and mapping. Although the commonly used fitting methods have solved many problems, they all treat the unknown parameters as nonrandom variables. Estimating them under the traditional least squares principle accounts for either their trend or their randomness alone, which is theoretically incomplete and has limitations in practice. Therefore, a method is needed that considers both the trend and the randomness. This method is least squares collocation.
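Least squares collocation splits each observation into a deterministic trend, a spatially correlated signal, and noise: l = A x + s + n. A minimal sketch of the standard estimator follows, assuming the signal and noise covariance matrices C_s and C_n have already been built from an empirical covariance function fitted to the data.

```python
import numpy as np

def ls_collocation(A, l, C_s, C_n):
    """Least squares collocation for l = A @ x + s + n:
    x - trend parameters (the nonrandom part, e.g. a fitting surface),
    s - correlated signal with covariance C_s (the random part),
    n - measurement noise with covariance C_n.
    Returns the estimated trend parameters and signal values.
    """
    C = C_s + C_n                    # covariance of the observations
    Ci = np.linalg.inv(C)
    x = np.linalg.solve(A.T @ Ci @ A, A.T @ Ci @ l)   # generalized LS trend
    s = C_s @ Ci @ (l - A @ x)                        # filtered signal
    return x, s
```

For GPS levelling, A would model the trend of the height anomaly (for example, a low-order polynomial in position), while the signal part captures the residual, spatially correlated variation that plain fitting methods discard.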


2012 ◽  
Vol 591-593 ◽  
pp. 850-853
Author(s):  
Huai Xing Wen ◽  
Yong Tao Yang

The A/D acquisition module of a drawing-die meter collects the die-hole contour data, which are plotted as a curve in Matlab. According to the pore-structure characteristics of the curve, an initial cut-off point for each part of the contour is determined and then iteratively optimized to find the best cut-off point. The least squares method is used for piecewise linear fitting, and fitting optimization yields the function of each part of the curve; finally, the pass parameters of the drawing die are calculated. The obtained parameters are compared with those of a standard die, and both errors are relatively small, proving the correctness of the algorithm. A complete algorithm flow for the pass parameters is also designed; it can quickly and accurately measure the hole parameters of a wire-drawing die.
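A minimal sketch of the piecewise fitting and cut-off-point refinement described here, assuming the contour is given as sampled arrays x, y and the initial cut-off points as indices into them; the error measure, search radius, and sweep count are illustrative choices, not the paper's.

```python
import numpy as np

def piecewise_error(x, y, breaks):
    # Total squared residual of independent straight-line fits
    # between consecutive cut-off points.
    err = 0.0
    for lo, hi in zip(breaks[:-1], breaks[1:]):
        A = np.column_stack([x[lo:hi], np.ones(hi - lo)])
        _, res, *_ = np.linalg.lstsq(A, y[lo:hi], rcond=None)
        err += res[0] if res.size else 0.0
    return err

def refine_breaks(x, y, breaks, radius=5, sweeps=3):
    # Coordinate-descent refinement of the initial cut-off points:
    # slide each interior point within +-radius samples and keep the
    # position that minimizes the total fitting error.
    breaks = list(breaks)
    for _ in range(sweeps):
        for j in range(1, len(breaks) - 1):
            lo = max(breaks[j - 1] + 2, breaks[j] - radius)
            hi = min(breaks[j + 1] - 2, breaks[j] + radius)
            if hi >= lo:
                breaks[j] = min(
                    range(lo, hi + 1),
                    key=lambda b: piecewise_error(
                        x, y, breaks[:j] + [b] + breaks[j + 1:]))
    return breaks
```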


Author(s):  
Stefan Hartmann ◽  
Rose Rogin Gilbert

Abstract In this article, we follow a thorough matrix presentation of material parameter identification using a least-squares approach, where the model is given by non-linear finite elements, and the experimental data is provided by both force data as well as full-field strain measurement data based on digital image correlation. First, the rigorous concept of semi-discretization for the direct problem is chosen, where—in the first step—the spatial discretization yields a large system of differential-algebraic equations (DAE system). This is solved using a time-adaptive, high-order, singly diagonally-implicit Runge–Kutta method. Second, to study the fully analytical versus fully numerical determination of the sensitivities, required in a gradient-based optimization scheme, the force determination using the Lagrange-multiplier method and the strain computation must be provided explicitly. The consideration of the strains is necessary to circumvent the influence of rigid body motions occurring in the experimental data. This is done by applying an external strain determination tool which is based on the nodal displacements of the finite element program. Third, we apply the concept of local identifiability on the entire parameter identification procedure and show its influence on the choice of the parameters of the rate-type constitutive model. As a test example, a finite strain viscoelasticity model and biaxial tensile tests applied to a rubber-like material are chosen.
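The outer identification loop itself is standard weighted least squares over stacked force and strain residuals. A minimal sketch follows, with the finite element solve abstracted behind a hypothetical `simulate` callable and finite differences standing in for the fully numerical sensitivity variant discussed in the article.

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(kappa, simulate, f_exp, eps_exp, w_f=1.0, w_eps=1.0):
    # simulate(kappa) stands in for the non-linear FE solve (a DAE
    # system integrated with an SDIRK method in the article) and
    # returns model forces and strains at the measurement points.
    f_sim, eps_sim = simulate(kappa)
    return np.concatenate([w_f * (f_sim - f_exp).ravel(),
                           w_eps * (eps_sim - eps_exp).ravel()])

# kappa0: initial material parameters. Finite differences supply the
# sensitivities ("fully numerical" variant); the analytical variant
# would pass an explicit Jacobian instead.
# result = least_squares(residuals, kappa0,
#                        args=(simulate, f_exp, eps_exp))
```

Local identifiability can then be probed from the Jacobian J at the solution: if J^T J is (nearly) rank-deficient, some parameter combinations are not determined by the data.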


2013 ◽  
Vol 278-280 ◽  
pp. 1323-1326
Author(s):  
Yan Hua Yu ◽  
Li Xia Song ◽  
Kun Lun Zhang

Fuzzy linear regression has been extensively studied since its inception, symbolized by the work of Tanaka et al. in 1982. As one of the main estimation methods, the fuzzy least squares approach is appealing because it corresponds, to some extent, to the well-known statistical regression analysis. In this article, a restricted least squares method is proposed to fit fuzzy linear models with crisp inputs and symmetric fuzzy output. The paper puts forward a fuzzy linear regression model based on the structured element; the model has crisp input data and fuzzy output data. The regression coefficients and the fuzzy-degree function are determined by the least squares method, and the degree of agreement between the observed and predicted values is studied.
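As a rough illustration of the restricted idea (not the paper's structured-element formulation, which is not reproduced here), a symmetric fuzzy output can be coded as a center and a non-negative spread; the centers are fitted by ordinary least squares and the spreads by non-negative least squares, the non-negativity playing the role of the restriction.

```python
import numpy as np
from scipy.optimize import nnls

def fuzzy_ls_fit(X, y_center, y_spread):
    # Assumed model: predicted center = X1 @ a,
    #                predicted spread = |X1| @ c with c >= 0.
    X1 = np.column_stack([X, np.ones(len(X))])   # add intercept column
    a, *_ = np.linalg.lstsq(X1, y_center, rcond=None)
    c, _ = nnls(np.abs(X1), y_spread)            # restricted: c >= 0
    return a, c
```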


Transport ◽  
2011 ◽  
Vol 26 (2) ◽  
pp. 197-203 ◽  
Author(s):  
Yanrong Hu ◽  
Chong Wu ◽  
Hongjiu Liu

A support vector machine is a machine learning method based on statistical learning theory and structural risk minimization. It improves on earlier methods because it can handle practical problems involving small samples, high dimensionality, nonlinearity, and local minima. The article applies the theory and method of support vector machine (SVM) regression and establishes a regression model based on the least squares support vector machine (LS-SVM). By predicting passenger flow on the Hangzhou highway in 2000–2008, the paper shows that the LS-SVM regression model has much higher prediction accuracy and reliability, and can therefore effectively predict passenger flow on the highway. Santrauka (translated from Lithuanian): The support vector machine (SVM) is a computational method based on statistical theory that reduces risk structurally. Compared with other methods, SVM is more reliable because it can solve real problems under diverse conditions. The study uses SVM regression theory and builds a regression model based on the least squares support vector machine (LS-SVM). The authors forecast passenger flow on the Hangzhou (China) highway in 2000–2008. The results show that the LS-SVM regression model is very accurate and reliable and can therefore be effectively applied to forecasting passenger flows on highways. Резюме (translated from Russian): The support vector machine (SVM) is a family of similar supervised learning algorithms used for classification and regression analysis. SVM belongs to the family of linear classifiers. Its basic idea is to map the original vectors into a higher-dimensional space and to find a separating hyperplane with the maximum margin in that space; the algorithm works on the assumption that the larger the gap between the parallel hyperplanes, the smaller the classifier's average error. Compared with other methods, SVM is more reliable and can solve problems under various conditions. The study used the SVM method and regression analysis, and then built a regression model based on the least squares support vector machine (LS-SVM). The authors predicted passenger flow on the Hangzhou (China) highway in 2000–2008. The results show that the LS-SVM regression model is reliable and can be applied to forecasting passenger flows on other highways.
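LS-SVM replaces the SVM's inequality constraints with equalities and a squared loss, so training reduces to solving one linear system. A minimal sketch with an RBF kernel; the kernel choice and hyperparameters are assumptions, not taken from the paper.

```python
import numpy as np

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    # Solve the LS-SVM KKT system
    #   [[0, 1^T], [1, K + I/gamma]] @ [b; alpha] = [0; y]
    # with an RBF kernel K_ij = exp(-||x_i - x_j||^2 / (2 sigma^2)).
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))
    n = len(y)
    M = np.zeros((n + 1, n + 1))
    M[0, 1:] = M[1:, 0] = 1.0
    M[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(M, np.concatenate([[0.0], y]))
    return sol[0], sol[1:]          # bias b, dual weights alpha

def lssvm_predict(X_new, X, b, alpha, sigma=1.0):
    d2 = ((X_new[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2)) @ alpha + b
```

For the passenger-flow task, X would hold lagged yearly flows (or year indices) and y the observed flows, with gamma and sigma tuned by cross-validation.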


Author(s):  
Vassilios E. Theodoracatos ◽  
Vasudeva Bobba

Abstract In this paper an approach is presented for the generation of a NURBS (Non-Uniform Rational B-splines) surface from a large set of 3D data points. The main advantage of the NURBS surface representation is the ability to analytically describe both precise quadratic primitives and free-form curves and surfaces. An existing three-dimensional laser-based vision system is used to obtain the spatial point coordinates of an object surface with respect to a global coordinate system. The least-squares approximation technique is applied in both the image and world space of the digitized physical object to calculate the homogeneous vector and the control net of the NURBS surface. A new non-uniform knot vectorization process is developed based on five data parametrization techniques: four existing techniques (uniform, chord length, centripetal, and affine invariant angle) and a new technique, developed in this study, based on surface area. Least-squares error distribution and surface interrogation are used to evaluate the quality of surface fairness for a minimum number of NURBS control points.
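Two of the four existing parametrization rules are simple enough to state directly. A minimal sketch for ordered data points along one parametric direction; the uniform, affine-invariant-angle, and surface-area rules are not reproduced here.

```python
import numpy as np

def parametrize(points, method="chord"):
    # Assign parameter values in [0, 1] to ordered 3D data points
    # prior to least-squares NURBS fitting.
    d = np.linalg.norm(np.diff(points, axis=0), axis=1)  # chord lengths
    if method == "centripetal":
        d = np.sqrt(d)           # damps the effect of long chords
    t = np.concatenate([[0.0], np.cumsum(d)])
    return t / t[-1]
```

The knot vector is then typically built by averaging consecutive parameter values, so that each knot span contains data, before solving the least-squares system for the control net.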

