Implementation and use of the Levenberg-Marquardt algorithm in the problems of calibration of robotic manipulators

Author(s):  
Yuriy Mihailovich Andrjejev

The well-known problem of calibrating an arbitrary robotic manipulator, formulated in its most general form, is considered. To solve the direct kinematics problem, a universal analytical description of the kinematic scheme is proposed as an alternative to the Denavit-Hartenberg method; it takes into account possible errors in the manufacture and assembly of robot parts. A universal description of the errors in the orientation of the axes of the articulated joints of the links is also proposed. On the basis of such a description, the direct and inverse kinematics problems of robots as spatial mechanisms can be solved, taking into account distortions of the dimensions, the positions of the joint axes, and the zero positions of the joint angles. The calibration of manipulators is formulated as a least-squares problem, and analytical formulas for the least-squares objective function are obtained. Expressions for the gradient vector and the Hessian of the objective function for the direct, Gauss-Newton and Levenberg-Marquardt algorithms are obtained by analytical differentiation using the special computer algebra system KiDyM, and C++ procedures for calculating the elements of the gradient and the Hessian are generated automatically. Using the example of a designed articulated six-degree-of-freedom manipulator, simulation results for its calibration, that is, the determination of 36 unknown angular and linear errors, are presented. The solution of the calibration problem is compared for 64 and 729 simulated experiments, in which the generalized coordinates (the joint angles) took the values ±90° and −90°, 0°, +90°, respectively.
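The generated C++ procedures themselves are not reproduced in the abstract, so the following Python sketch only illustrates the generic Levenberg-Marquardt iteration for such a least-squares calibration problem. The residual function is a hypothetical stand-in for the stacked differences between measured and model-predicted end-effector poses, and a finite-difference Jacobian with the Gauss-Newton approximation replaces the analytical gradient and Hessian obtained with KiDyM.

import numpy as np

def numeric_jacobian(residuals, p, eps=1e-7):
    """Finite-difference Jacobian of the residual vector with respect to p."""
    r0 = residuals(p)
    J = np.zeros((r0.size, p.size))
    for j in range(p.size):
        dp = np.zeros_like(p)
        dp[j] = eps
        J[:, j] = (residuals(p + dp) - r0) / eps
    return J

def levenberg_marquardt(residuals, p0, max_iter=200, lam=1e-3, tol=1e-12):
    """Minimize 0.5*||r(p)||^2 over the error parameters p
    (for the manipulator above, the 36 angular and linear errors)."""
    p = np.asarray(p0, dtype=float)
    for _ in range(max_iter):
        r = residuals(p)
        J = numeric_jacobian(residuals, p)
        g = J.T @ r                                   # gradient of the objective
        H = J.T @ J                                   # Gauss-Newton Hessian approximation
        step = np.linalg.solve(H + lam * np.eye(p.size), -g)
        if np.sum(residuals(p + step) ** 2) < np.sum(r ** 2):
            p, lam = p + step, lam * 0.5              # accept the step, relax damping
        else:
            lam *= 2.0                                # reject the step, increase damping
        if np.linalg.norm(step) < tol:
            break
    return p

# Toy usage with a 2-parameter linear model standing in for the 36-parameter
# calibration residuals:
x = np.linspace(0.0, 1.0, 20)
y = 2.0 * x + 0.5
print(levenberg_marquardt(lambda p: p[0] * x + p[1] - y, np.zeros(2)))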

2020
pp. 000370282097751
Author(s):
Xin Wang
Xia Chen

Many spectra have a polynomial-like baseline. Iterative polynomial fitting (IPF) is one of the most popular methods for baseline correction of these spectra. However, the baseline estimated by IPF may have substantial error when the spectrum contains significantly strong peaks or has strong peaks located at its endpoints. First, IPF uses a temporary baseline estimated from the current spectrum to identify peak data points. If the current spectrum contains strong peaks, the temporary baseline deviates substantially from the true baseline, so good baseline data points may be mistakenly identified as peak data points and artificially re-assigned a low value. Second, if a strong peak is located at an endpoint of the spectrum, the endpoint region of the estimated baseline may have significant error due to overfitting. This study proposes a search-algorithm-based baseline correction method (SA) that compresses the raw spectrum into a dataset with a small number of data points and then converts peak removal into a search problem, in the artificial intelligence (AI) sense, of minimizing an objective function by deleting peak data points. First, the raw spectrum is smoothed by the moving-average method to reduce noise and then divided into dozens of unequally spaced sections on the basis of Chebyshev nodes. The minimum points of each section are then collected to form the dataset on which peak removal is performed by the search algorithm. SA uses the mean absolute error (MAE) as the objective function because of its sensitivity to overfitting and its rapid calculation. The baseline correction performance of SA is compared with that of three other baseline correction methods: the Lieber and Mahadevan–Jansen method, adaptive iteratively reweighted penalized least squares, and improved asymmetric least squares. Simulated and real FTIR and Raman spectra with polynomial-like baselines are employed in the experiments. The results show that, for these spectra, the baseline estimated by SA has smaller error than those estimated by the three other methods.
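As a rough illustration of the compression step described above (and not the authors' full SA implementation), the Python sketch below smooths a spectrum with a moving average, splits it into unequally spaced sections at Chebyshev-node boundaries, keeps the minimum point of each section, and then removes peak points greedily so as to reduce the MAE of a polynomial baseline fit. The section count, polynomial order and greedy deletion rule are assumptions standing in for the paper's search algorithm.

import numpy as np

def chebyshev_edges(n_sections, n_points):
    """Unequally spaced section boundaries derived from Chebyshev nodes,
    mapped onto the index range [0, n_points - 1]."""
    k = np.arange(n_sections + 1)
    nodes = np.cos(np.pi * k / n_sections)            # in [-1, 1], denser near the ends
    edges = (1.0 - nodes) / 2.0 * (n_points - 1)      # increasing, 0 ... n_points - 1
    return np.unique(edges.astype(int))

def compress_spectrum(y, n_sections=50, window=7):
    """Moving-average smoothing, then one minimum point per section."""
    ys = np.convolve(y, np.ones(window) / window, mode="same")
    edges = chebyshev_edges(n_sections, ys.size)
    idx = np.array([a + int(np.argmin(ys[a:b])) for a, b in zip(edges[:-1], edges[1:])])
    return idx, ys

def mae_of_poly_fit(x, y, order):
    coef = np.polyfit(x, y, order)
    return np.mean(np.abs(np.polyval(coef, x) - y)), coef

def greedy_peak_removal(x, y, order=4, max_deletions=20):
    """Greedy stand-in for the paper's search: repeatedly delete the point whose
    removal most reduces the MAE of the polynomial baseline fit."""
    keep = list(range(len(x)))
    best, coef = mae_of_poly_fit(x[keep], y[keep], order)
    for _ in range(max_deletions):
        trials = []
        for i in range(len(keep)):
            subset = keep[:i] + keep[i + 1:]
            err, c = mae_of_poly_fit(x[subset], y[subset], order)
            trials.append((err, i, c))
        err, i, c = min(trials)
        if err >= best:
            break
        best, coef = err, c
        del keep[i]
    return coef

# Typical use (raw is the measured spectrum as a 1-D array):
#   idx, ys = compress_spectrum(raw)
#   coef = greedy_peak_removal(idx.astype(float), ys[idx])
#   baseline = np.polyval(coef, np.arange(raw.size))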


1964
Vol 54 (6A)
pp. 2037-2047
Author(s):  
Agustin Udias

In this paper a numerical approach to the determination of focal mechanisms, based on the observation of the polarization of the S wave at N stations, is presented. Least-squares methods are developed for the determination of the orientation of single- and double-couple sources. The methods allow a statistical evaluation of the data and of the accuracy of the solutions.


BIOMATH
2016
Vol 5 (1)
pp. 1604231
Author(s):
A.N. Pete
Peter Mathye
Igor Fedotov
Michael Shatalov

An inverse numerical method that estimates the parameters of dynamic mathematical models, given some information about the unknown trajectories at certain times, is applied to examples taken from biology and ecology. The method consists of constructing an over-determined system of algebraic equations from the experimental data; the solution of this over-determined system is then obtained using, for example, the least-squares method. To illustrate the effectiveness of the method, an analysis of examples and corresponding numerical results are presented.
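The specific biological and ecological examples are not given in the abstract, so the short Python sketch below uses a hypothetical logistic-growth model to show the general recipe: every sampled point of the trajectory contributes one algebraic equation, and the resulting over-determined system is solved by least squares.

import numpy as np

# Hypothetical data: observed population sizes x(t_i) at sample times t_i.
t = np.linspace(0.0, 10.0, 21)
x = 10.0 / (1.0 + 9.0 * np.exp(-0.8 * t))           # synthetic logistic trajectory

# Central finite differences approximate dx/dt from the "experimental" data.
dxdt = np.gradient(x, t)

# The logistic model dx/dt = r*x - (r/K)*x**2 is linear in a = r and b = -r/K,
# so every sample contributes one algebraic equation: a*x_i + b*x_i**2 = dxdt_i.
A = np.column_stack([x, x ** 2])                    # 21 equations, 2 unknowns
(a, b), *_ = np.linalg.lstsq(A, dxdt, rcond=None)   # least-squares solution

r_est, K_est = a, -a / b
print(f"estimated r = {r_est:.3f}, K = {K_est:.3f}")   # true values: r = 0.8, K = 10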


Vestnik MGSU
2015
pp. 140-151
Author(s):
Aleksey Alekseevich Loktev
Daniil Alekseevich Loktev

In modern integrated monitoring systems and automated process control systems there are several essential algorithms and procedures for obtaining primary information about an object and its behaviour. This primary information comprises the characteristics of static and moving objects: distance, speed, position in space, etc. To obtain such information, the present work proposes to use photo and video detectors that can provide the system with high-quality, high-resolution images of the object. Modern video monitoring and automated control systems offer several ways of obtaining primary data on the behaviour and state of the studied objects: a multisensor approach (stereovision), building an image perspective, the use of fixed cameras with additional lighting of the object, and a special calibration of the photo or video detector.

In the present paper the authors develop a method of determining distances to objects by analyzing a series of images, using depth estimation from defocus. The method is based on the physical dependence of the distance to an object in the image on the focal length or aperture of the lens. When the photodetector is focused on an object at a certain distance, other objects, both closer to and farther from the focal point, form blur spots in the image whose size depends on their distance. Image blur can be of different nature: it may be caused by motion of the object or of the detector, by the character of the object's image boundaries, by the object's aggregate state, or by the settings of the photodetector (focal length, shutter speed and aperture).

When calculating the diameter of the blur spot, it is assumed that blurring at a point occurs equally in all directions. For a more precise determination of the geometrical parameters describing the behaviour and state of the object under study, a statistical approach is used to determine the individual parameters and estimate their accuracy. This approach evaluates the deviation of the distance-versus-blur dependence from different types of standard functions (logarithmic, exponential, linear). It includes the least squares method and the method of least modules, as well as Bayesian estimation, for which the risks under different loss functions (quadratic, rectangular, linear) with known probability densities (normal, lognormal, Laplace and uniform distributions are considered) must be minimized.

As a result of the research it was established that the error variance of a function whose parameters are estimated by the least squares method is smaller than the error variance obtained with the method of least modules; that is, the least squares estimate is more stable. The error estimates of the least squares method are also unbiased, whereas the mathematical expectation of the errors of the method of least modules is not zero, which indicates a bias in those estimates. It is therefore advisable to use the least squares method when determining the parameters of the function. In order to smooth out possible outliers, a Kalman filter is applied to the results of the initial observations, and the least squares method and the method of least modules are evaluated for the fitted functions after applying the filter with different coefficients.
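The detector parameters and measurements are not given in the abstract; the short Python sketch below only illustrates the comparison discussed above, fitting one candidate standard function (the logarithmic one) to hypothetical distance-versus-blur data both by the least squares method and by the method of least modules (minimum sum of absolute residuals).

import numpy as np
from scipy.optimize import minimize

# Hypothetical calibration data: blur-spot diameters d_i observed at known
# distances z_i, with the logarithmic model z = a + b*ln(d).
rng = np.random.default_rng(0)
d = np.linspace(1.5, 12.0, 40)                       # blur diameter, pixels
z_true = 5.0 - 2.0 * np.log(d)                       # synthetic distances, metres
z = z_true + rng.normal(scale=0.05, size=d.size)     # measurement noise

# Least squares estimate: the model is linear in (a, b), solved directly.
A = np.column_stack([np.ones_like(d), np.log(d)])
p_lsq, *_ = np.linalg.lstsq(A, z, rcond=None)

# Method of least modules: minimize the sum of absolute residuals instead.
loss_l1 = lambda p: np.sum(np.abs(A @ p - z))
p_lad = minimize(loss_l1, p_lsq, method="Nelder-Mead").x

print("least squares :", p_lsq)
print("least modules :", p_lad)
# Repeating the fit over many noise realizations allows the variance and bias
# comparison of the two estimators described above.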


1970
Vol 26 (2)
pp. 295-296
Author(s):  
K. Tichý

An appropriate choice of the function minimized permits linearization of the least-squares determination of the matrix which transforms the diffraction indices into the components of the reciprocal vector in the diffractometer φ-axis system of coordinates. The coefficients of the least-squares equations are based on diffraction indices and measured diffractometer angles of three or more non-coplanar setting reflexions.
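Tichý's particular choice of minimized function is what makes the problem linear and is not reproduced here; the Python sketch below, with invented numbers, only shows the generic linear least-squares step: given three or more non-coplanar reflexions, the relation x_i ≈ U h_i between diffraction indices and the measured reciprocal-vector components in the φ-axis system determines the transformation matrix U.

import numpy as np

# Hypothetical input: diffraction indices h_i of four non-coplanar setting
# reflexions and the corresponding reciprocal vectors x_i (components in the
# diffractometer phi-axis system, as derived from the measured setting angles).
H = np.array([[1, 0, 0],
              [0, 1, 0],
              [0, 0, 1],
              [1, 1, 1]], dtype=float)               # indices, one row per reflexion
X = np.array([[0.212, 0.015, -0.003],
              [-0.010, 0.198, 0.021],
              [0.004, -0.018, 0.251],
              [0.206, 0.195, 0.269]])                # measured reciprocal vectors (1/Å)

# The model X_i ≈ U @ h_i is linear in the nine elements of U, so U^T follows
# from an ordinary least-squares solve over all reflexions: H @ U^T ≈ X.
U_T, *_ = np.linalg.lstsq(H, X, rcond=None)
U = U_T.T
print(U)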

