The Shape From Motion Approach to Rapid and Precise Force/Torque Sensor Calibration

1997 ◽  
Vol 119 (2) ◽  
pp. 229-235 ◽  
Author(s):  
R. M. Voyles ◽  
J. D. Morrow ◽  
P. K. Khosla

We present a new technique for multi-axis force/torque sensor calibration called shape from motion. The novel aspect of this technique is that it does not require explicit knowledge of the redundant applied load vectors, yet it retains the noise rejection of a highly redundant data set and the rigor of least squares. The result is a much faster, slightly more accurate calibration procedure. A constant-magnitude force (produced by a mass in a gravity field) is randomly moved through the sensing space while raw data is continuously gathered. Using only the raw sensor signals, the motion of the force vector (the “motion”) and the calibration matrix (the “shape”) are simultaneously extracted by singular value decomposition. We have applied this technique to several types of force/torque sensors and present experimental results for a 2-DOF fingertip and a 6-DOF wrist sensor with comparisons to the standard least squares approach.
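The core of the shape-from-motion idea is that the raw signal matrix factors into a "shape" (calibration) part and a "motion" (applied-load) part, recoverable by SVD without knowing the loads. A minimal synthetic sketch (the 2×2 gain matrix and the constant-magnitude force sweep are made-up stand-ins, not the authors' data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-DOF sensor: raw signals s = A @ f for an unknown gain matrix A.
A_true = np.array([[1.3, 0.4],
                   [-0.2, 0.9]])

# Constant-magnitude force swept through random directions (the "motion").
angles = rng.uniform(0, 2 * np.pi, 500)
F_true = 9.81 * np.vstack([np.cos(angles), np.sin(angles)])  # 2 x N

S = A_true @ F_true  # raw sensor data, 2 x N

# Rank-2 SVD factorization: S = (U * s) @ Vt -> "shape" and "motion",
# each recovered up to a common invertible 2x2 ambiguity that the
# constant-magnitude constraint resolves in the full method.
U, s, Vt = np.linalg.svd(S, full_matrices=False)
shape = U * s
motion = Vt

# The factorization reproduces the raw data to machine precision.
err = np.linalg.norm(shape @ motion - S) / np.linalg.norm(S)
```

The paper's procedure additionally uses the known constant force magnitude to pin down the remaining linear ambiguity; the sketch stops at the factorization step.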

Author(s):  
Parisa Torkaman

The generalized inverted exponential distribution is introduced as a lifetime model with good statistical properties. In this paper, estimation of the probability density function and the cumulative distribution function of this distribution is considered using five estimation methods: uniformly minimum variance unbiased (UMVU), maximum likelihood (ML), least squares (LS), weighted least squares (WLS), and percentile (PC) estimators. The performance of these estimation procedures is compared by numerical simulations based on the mean squared error (MSE). The simulation studies show that the UMVU estimator performs better than the others, and that when the sample size is large enough the ML and UMVU estimators are almost equivalent and more efficient than LS, WLS, and PC. Finally, the results are illustrated with a real data set.
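The MSE-by-simulation comparison pattern is easy to reproduce. As an illustrative sketch only, using a plain exponential distribution as a stand-in (not the generalized inverted exponential of the paper), compare the ML and percentile estimators of the rate:

```python
import numpy as np

rng = np.random.default_rng(1)
lam_true = 2.0        # true rate of a plain exponential (illustrative stand-in)
n, reps = 50, 2000

mse_ml, mse_pc = 0.0, 0.0
for _ in range(reps):
    x = rng.exponential(1 / lam_true, n)
    lam_ml = 1 / x.mean()               # maximum likelihood estimator
    lam_pc = np.log(2) / np.median(x)   # percentile (median-based) estimator
    mse_ml += (lam_ml - lam_true) ** 2
    mse_pc += (lam_pc - lam_true) ** 2

mse_ml /= reps
mse_pc /= reps
# ML typically achieves the smaller MSE, mirroring the paper's ranking.
```

The same loop structure extends to UMVU, LS, and WLS estimators once their closed forms for the target distribution are plugged in.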


Author(s):  
Raul E. Avelar ◽  
Karen Dixon ◽  
Boniphace Kutela ◽  
Sam Klump ◽  
Beth Wemple ◽  
...  

The calibration of safety performance functions (SPFs) is a mechanism included in the Highway Safety Manual (HSM) to adjust SPFs in the HSM for use in intended jurisdictions. Critically, the quality of the calibration procedure must be assessed before using the calibrated SPFs. Multiple resources to aid practitioners in calibrating SPFs have been developed in the years following the publication of the HSM 1st edition. Similarly, the literature suggests multiple ways to assess the goodness-of-fit (GOF) of a calibrated SPF to a data set from a given jurisdiction. This paper uses the calibration results of multiple intersection SPFs to a large Mississippi safety database to examine the relations between multiple GOF metrics. The goal is to develop a sensible single index that leverages the joint information from multiple GOF metrics to assess overall quality of calibration. A factor analysis applied to the calibration results revealed three underlying factors explaining 76% of the variability in the data. From these results, the authors developed an index and performed a sensitivity analysis. The key metrics were found to be, in descending order: the deviation of the cumulative residual (CURE) plot from the 95% confidence area, the mean absolute deviation, the modified R-squared, and the value of the calibration factor. This paper also presents comparisons between the index and alternative scoring strategies, as well as an effort to verify the results using synthetic data. The developed index is recommended to comprehensively assess the quality of the calibrated intersection SPFs.
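The idea of collapsing several GOF metrics into one index can be sketched with a simple principal-component weighting. This is a hedged stand-in for the paper's factor analysis, on synthetic numbers (the metric columns are hypothetical, not the Mississippi calibration results):

```python
import numpy as np

rng = np.random.default_rng(2)
# Rows = calibrated SPFs; columns = GOF metrics (e.g. CURE-plot deviation,
# mean absolute deviation, modified R^2, calibration factor). Synthetic data.
gof = rng.normal(size=(30, 4))

# Standardize each metric, then weight by the loadings of the leading
# principal component -- a rough proxy for a one-factor solution.
z = (gof - gof.mean(axis=0)) / gof.std(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(z, rowvar=False))
w = eigvecs[:, -1]            # loadings of the largest-eigenvalue component
index = z @ w                 # one quality-of-calibration score per SPF
```

The paper retains three factors and applies a sensitivity analysis; the single-component projection above only illustrates the mechanics of turning correlated metrics into one score.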


Geophysics ◽  
2006 ◽  
Vol 71 (5) ◽  
pp. U67-U76 ◽  
Author(s):  
Robert J. Ferguson

The possibility of improving regularization/datuming of seismic data is investigated by treating wavefield extrapolation as an inversion problem. Weighted, damped least squares is then used to produce the regularized/datumed wavefield. Regularization/datuming is extremely costly because of computing the Hessian, so an efficient approximation is introduced. Approximation is achieved by computing a limited number of diagonals in the operators involved. Real and synthetic data examples demonstrate the utility of this approach. For synthetic data, regularization/datuming is demonstrated for large extrapolation distances using a highly irregular recording array. Without approximation, regularization/datuming returns a regularized wavefield with reduced operator artifacts when compared to a nonregularizing method such as generalized phase shift plus interpolation (PSPI). Approximate regularization/datuming returns a regularized wavefield for approximately two orders of magnitude less in cost; but it is dip limited, though in a controllable way, compared to the full method. The Foothills structural data set, a freely available data set from the Rocky Mountains of Canada, demonstrates application to real data. The data have highly irregular sampling along the shot coordinate, and they suffer from significant near-surface effects. Approximate regularization/datuming returns common receiver data that are superior in appearance compared to conventional datuming.
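The weighted, damped least-squares solve and the diagonal Hessian approximation described above can be sketched generically (the operator, weights, and damping value here are arbitrary placeholders, not seismic quantities):

```python
import numpy as np

rng = np.random.default_rng(3)
G = rng.normal(size=(80, 40))            # extrapolation operator (stand-in)
d = rng.normal(size=80)                  # recorded wavefield samples
W = np.diag(rng.uniform(0.5, 1.5, 80))   # data weights
eps = 0.1                                # damping parameter

# Weighted, damped least squares: m = (G^T W G + eps I)^{-1} G^T W d
H = G.T @ W @ G + eps * np.eye(40)       # full Hessian (the costly object)
m = np.linalg.solve(H, G.T @ W @ d)

# Cheap approximation in the spirit of the paper: keep only a limited
# number of diagonals of the Hessian (here, just the main diagonal).
H_diag = np.diag(np.diag(H))
m_approx = np.linalg.solve(H_diag, G.T @ W @ d)
```

Keeping a band of diagonals rather than only the main one trades cost against the dip limitation the abstract mentions.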


2006 ◽  
Vol 06 (04) ◽  
pp. 373-384
Author(s):  
ERIC BERTHONNAUD ◽  
JOANNÈS DIMNET

Joint centers are obtained from data treatment of a set of markers placed on the skin of moving limb segments. Finite helical axis (FHA) parameters are calculated between time-step increments. Artifacts associated with nonrigid movements of the markers lead to ill-determined FHA parameters. When human articulations are likened to spherical joints, mean centers of rotation may be calculated over the whole movement; they are obtained with a numerical technique that defines the point with minimal amplitude of displacement during joint movement. A new technique is presented. Hip, knee, and ankle mean centers of rotation are calculated. Their locations depend on the application of two constraints: the joint center must be located next to the estimated geometric joint center, and the geometric joint center may migrate inside a cube of possible locations. This cube of error is located with respect to the marker coordinate systems of the two limb segments adjacent to the joint. Its position depends on the joint and on the patient's height, and is obtained from a stereoradiographic study with specimens. The mean position of the joint center and the corresponding dispersion are obtained through a minimization procedure. The location of the mean joint center is compared with the positions of the FHA calculated between different sequential steps: a time sequential step, and a rotation sequential step where a minimal rotation amplitude is imposed between two joint positions. Sticks are drawn connecting adjacent mean centers, and the animation of the stick diagrams allows clinical users to estimate the displacements of the long bones (femur and tibia) from the whole data set.
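The notion of a mean center of rotation as the point left fixed by the segment's motion can be sketched in the planar, noise-free case: each observed pose contributes a linear constraint, and the center falls out of a least-squares solve (the poses below are synthetic, not marker data):

```python
import numpy as np

rng = np.random.default_rng(4)
c_true = np.array([0.1, 0.4])   # joint center in the fixed frame (made up)

# Hypothetical planar hinge: poses (R, t) of the moving segment that keep
# c_true fixed, i.e. R @ c_true + t = c_true.
rows, rhs = [], []
for th in rng.uniform(-0.8, 0.8, 20):
    R = np.array([[np.cos(th), -np.sin(th)],
                  [np.sin(th),  np.cos(th)]])
    t = c_true - R @ c_true
    # Each pose contributes the constraint (R - I) c = -t.
    rows.append(R - np.eye(2))
    rhs.append(-t)

A = np.vstack(rows)
b = np.concatenate(rhs)
c_est, *_ = np.linalg.lstsq(A, b, rcond=None)
```

The paper's method adds the geometric-joint-center constraint and the cube of error on top of this kind of minimization, and works in 3-D with skin-marker noise.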


Author(s):  
Sauro Mocetti

Abstract This paper contributes to the growing number of studies on intergenerational mobility by providing a measure of earnings elasticity for Italy. The absence of an appropriate data set is overcome by adopting the two-sample two-stage least squares method. The analysis, based on the Survey of Household Income and Wealth, shows that intergenerational mobility is lower in Italy than it is in other developed countries. We also examine the reasons why the long-term labor market success of children is related to that of their fathers.
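The two-sample two-stage least squares (TS2SLS) logic — fit an earnings equation for fathers in one sample, impute fathers' earnings in a second sample of children, then regress — can be sketched on synthetic data (all coefficients and variables below are invented for illustration, not estimates from the Survey of Household Income and Wealth):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 1000

# Sample 1: fathers' log earnings plus observable characteristics
# (e.g. education), used to fit the first-stage earnings equation.
z1 = rng.normal(size=n)
father1 = 1.0 + 0.8 * z1 + rng.normal(size=n)
X1 = np.column_stack([np.ones(n), z1])
beta1, *_ = np.linalg.lstsq(X1, father1, rcond=None)

# Sample 2: children's log earnings plus the same characteristics for
# their fathers, whose earnings are unobserved and must be imputed.
z2 = rng.normal(size=n)
father2_latent = 1.0 + 0.8 * z2 + rng.normal(size=n)
child2 = 0.5 + 0.4 * father2_latent + rng.normal(size=n)
father2_hat = np.column_stack([np.ones(n), z2]) @ beta1

# Second stage: the slope is the intergenerational earnings elasticity.
X2 = np.column_stack([np.ones(n), father2_hat])
beta2, *_ = np.linalg.lstsq(X2, child2, rcond=None)
elasticity = beta2[1]   # should recover ~0.4, the value built into the data
```

A lower elasticity means higher mobility; the paper finds a comparatively high elasticity for Italy.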


Author(s):  
Craig M. Shakarji ◽  
Vijay Srinivasan

We present elegant algorithms for fitting a plane, two parallel planes (corresponding to a slot or a slab) or many parallel planes in a total (orthogonal) least-squares sense to coordinate data that is weighted. Each of these problems is reduced to a simple 3×3 matrix eigenvalue/eigenvector problem or an equivalent singular value decomposition problem, which can be solved using reliable and readily available commercial software. These methods were numerically verified by comparing them with brute-force minimization searches. We demonstrate the need for such weighted total least-squares fitting in coordinate metrology to support new and emerging tolerancing standards, for instance, ISO 14405-1:2010. The widespread practice of unweighted fitting works well enough when point sampling is controlled and can be made uniform (e.g., using a discrete point contact Coordinate Measuring Machine). However, we demonstrate that nonuniformly sampled points (arising from many new measurement technologies) coupled with unweighted least-squares fitting can lead to erroneous results. When needed, the algorithms presented also solve the unweighted cases simply by assigning the value one to each weight. We additionally prove convergence from the discrete to continuous cases of least-squares fitting as the point sampling becomes dense.
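The single-plane case of weighted total least squares reduces, as the abstract says, to a 3×3 eigenproblem: the plane passes through the weighted centroid, and its normal is the eigenvector of the weighted moment matrix with the smallest eigenvalue. A minimal sketch on synthetic points (weights and noise levels are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(6)

# Points sampled nonuniformly near the plane z = 0, with per-point weights.
pts = rng.normal(size=(200, 3))
pts[:, 2] = 0.01 * rng.normal(size=200)   # small out-of-plane scatter
w = rng.uniform(0.1, 2.0, 200)            # measurement weights

# Weighted centroid and weighted 3x3 second-moment matrix.
c = (w[:, None] * pts).sum(axis=0) / w.sum()
d = pts - c
M = d.T @ (w[:, None] * d)

# Total (orthogonal) least squares: the plane normal is the eigenvector
# belonging to the smallest eigenvalue of M.
eigvals, eigvecs = np.linalg.eigh(M)
normal = eigvecs[:, 0]
```

Setting every weight to one recovers the unweighted fit, matching the paper's remark; the parallel-planes cases add a shared-normal constraint but keep the same 3×3 structure.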


Processes ◽  
2021 ◽  
Vol 9 (1) ◽  
pp. 166
Author(s):  
Majed Aljunaid ◽  
Yang Tao ◽  
Hongbo Shi

Partial least squares (PLS) and linear regression methods are widely utilized for quality-related fault detection in industrial processes. Standard PLS decomposes the process variables into principal and residual parts. However, the principal part still contains many components unrelated to quality; if these components are not removed, they can cause many false alarms. Moreover, although these components do not affect product quality, they have a great impact on process safety and carry information about other faults, so removing and discarding them reduces the detection rate of faults unrelated to quality. To overcome these drawbacks of standard PLS, a novel method, MI-PLS (mutual information PLS), is proposed in this paper. The proposed MI-PLS algorithm utilizes mutual information to divide the process variables into selected and residual components, and then uses singular value decomposition (SVD) to further decompose the selected part into quality-related and quality-unrelated components, subsequently constructing quality-related monitoring statistics. To ensure that there is no information loss and that the proposed MI-PLS can be used for both quality-related and quality-unrelated fault detection, a principal component analysis (PCA) model is applied to the residual component to obtain its score matrix, which is combined with the quality-unrelated part to obtain the total quality-unrelated monitoring statistics. Finally, the proposed method is applied to a numerical example and to the Tennessee Eastman process. MI-PLS has a lower computational load and more robust performance compared with T-PLS and PCR.
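The first step of MI-PLS — splitting process variables into selected and residual sets by their mutual information with the quality variable — can be sketched with a crude histogram MI estimator. Everything below (variable count, signal model, threshold) is a synthetic stand-in, not the paper's estimator or the Tennessee Eastman data:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 2000
X = rng.normal(size=(n, 6))                          # process variables
y = 2 * X[:, 0] - 2 * X[:, 2] + 0.1 * rng.normal(size=n)  # quality variable

def mutual_info(a, b, bins=16):
    """Crude histogram estimate of mutual information in nats."""
    pxy, _, _ = np.histogram2d(a, b, bins=bins)
    pxy = pxy / pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])).sum())

mi = np.array([mutual_info(X[:, j], y) for j in range(X.shape[1])])
selected = mi > mi.mean()   # simple threshold splitting selected/residual
```

In the full method the selected block then goes through an SVD split into quality-related and quality-unrelated parts, while a PCA model on the residual block preserves the remaining information.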


2018 ◽  
Vol 2018 ◽  
pp. 1-10 ◽  
Author(s):  
Johannes Masino ◽  
Jakob Thumm ◽  
Guillaume Levasseur ◽  
Michael Frey ◽  
Frank Gauterin ◽  
...  

This work aims at classifying the road condition with data mining methods using simple acceleration sensors and gyroscopes installed in vehicles. Two classifiers are developed with a support vector machine (SVM) to distinguish between different types of road surfaces, such as asphalt and concrete, and obstacles, such as potholes or railway crossings. From the sensor signals, frequency-based features are extracted and evaluated automatically with MANOVA. The selected features and their relevance for predicting the classes are discussed, and the best features are used to design the classifiers. Finally, the methods developed and applied in this work are implemented in a Matlab toolbox with a graphical user interface. The toolbox visualizes the classification results on maps, thus enabling manual verification of the results. The cross-validation accuracy is 81.0% on average for classifying obstacles and 96.1% on average for classifying road material. The results are discussed on a comprehensive exemplary data set.
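The pipeline of windowing an acceleration signal, extracting frequency-band features, and classifying can be sketched end to end. The signal model, band split, and nearest-centroid rule below are simplified stand-ins (the paper uses MANOVA feature selection and an SVM):

```python
import numpy as np

rng = np.random.default_rng(8)
fs, n = 100, 256   # sampling rate (Hz) and samples per window (assumed)

def make_window(kind):
    """Synthetic vertical-acceleration window for two road types."""
    t = np.arange(n) / fs
    x = rng.normal(scale=0.1, size=n)
    if kind == "rough":
        x = x + 0.5 * np.sin(2 * np.pi * 15 * t)  # extra vibration energy
    return x

def features(x):
    """Frequency-based features: energy in four FFT magnitude bands."""
    mag = np.abs(np.fft.rfft(x))
    return np.array([(b ** 2).sum() for b in np.array_split(mag, 4)])

X = np.array([features(make_window(k)) for k in ["smooth"] * 40 + ["rough"] * 40])
y = np.array([0] * 40 + [1] * 40)

# Nearest-centroid classifier as a minimal stand-in for the paper's SVM.
c0, c1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
pred = (np.linalg.norm(X - c1, axis=1) < np.linalg.norm(X - c0, axis=1)).astype(int)
accuracy = (pred == y).mean()
```

With real sensor data the feature set would be screened (here MANOVA) and the classifier cross-validated rather than scored on its own training windows.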


Solid Earth ◽  
2016 ◽  
Vol 7 (2) ◽  
pp. 481-492 ◽  
Author(s):  
Faisal Khan ◽  
Frieder Enzmann ◽  
Michael Kersten

Abstract. Image processing of X-ray-computed polychromatic cone-beam micro-tomography (μXCT) data of geological samples mainly involves artefact reduction and phase segmentation. For the former, the main beam-hardening (BH) artefact is removed by applying a best-fit quadratic surface algorithm to a given image data set (reconstructed slice), which minimizes the BH offsets of the attenuation data points from that surface. A Matlab code for this approach is provided in the Appendix. The final BH-corrected image is extracted from the residual data or from the difference between the surface elevation values and the original grey-scale values. For the segmentation, we propose a novel least-squares support vector machine (LS-SVM, an algorithm for pixel-based multi-phase classification) approach. A receiver operating characteristic (ROC) analysis was performed on BH-corrected and uncorrected samples to show that BH correction is in fact an important prerequisite for accurate multi-phase classification. The combination of the two approaches was thus used to classify successfully three different more or less complex multi-phase rock core samples.
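The beam-hardening step — fit a quadratic surface to the slice and subtract it — is straightforward to sketch with ordinary least squares on a synthetic image (the slice, phase values, and cupping profile below are invented; the paper provides its own Matlab code in the Appendix):

```python
import numpy as np

rng = np.random.default_rng(9)
ny, nx = 64, 64
yy, xx = np.mgrid[0:ny, 0:nx]

# Synthetic reconstructed slice: discrete phases plus a quadratic
# beam-hardening "cupping" offset.
truth = rng.integers(0, 3, size=(ny, nx)).astype(float)
bh = 0.002 * ((xx - nx / 2) ** 2 + (yy - ny / 2) ** 2)
img = truth + bh

# Best-fit quadratic surface in x and y via least squares.
A = np.column_stack([np.ones(img.size), xx.ravel(), yy.ravel(),
                     xx.ravel() ** 2, xx.ravel() * yy.ravel(), yy.ravel() ** 2])
coef, *_ = np.linalg.lstsq(A, img.ravel(), rcond=None)
surface = (A @ coef).reshape(ny, nx)

# BH-corrected image: residual from the surface, grey level re-centred.
corrected = img - surface + surface.mean()
```

After correction, the grey-value histogram separates the phases much more cleanly, which is why the paper treats BH removal as a prerequisite for the LS-SVM segmentation.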


2009 ◽  
Vol 2009 ◽  
pp. 1-8 ◽  
Author(s):  
Janet Myhre ◽  
Daniel R. Jeske ◽  
Michael Rennie ◽  
Yingtao Bi

A heteroscedastic linear regression model is developed from plausible assumptions that describe the time evolution of performance metrics for equipment. The inherent motivation for the related weighted least squares analysis of the model is an essential and attractive selling point to engineers with interest in equipment surveillance methodologies. A simple test for the significance of the heteroscedasticity suggested by a data set is derived, and a simulation study is used to evaluate the power of the test and compare it with several other applicable tests that were designed under different contexts. Tolerance intervals within the context of the model are derived, thus generalizing well-known tolerance intervals for ordinary least squares regression. Use of the model and its associated analyses is illustrated with an aerospace application where hundreds of electronic components are continuously monitored by an automated system that flags components that are suspected of unusual degradation patterns.
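The core weighted least-squares computation for a model whose error spread grows over time can be sketched as follows (the linear trend, variance law, and sample size are illustrative assumptions, not the paper's aerospace data):

```python
import numpy as np

rng = np.random.default_rng(10)
t = np.linspace(1, 10, 200)     # operating time of the equipment
sigma = 0.1 * t                 # error spread grows with time (heteroscedastic)
y = 2.0 + 0.5 * t + sigma * rng.normal(size=t.size)

# Weighted least squares with weights proportional to 1/variance:
# beta = (X^T W X)^{-1} X^T W y.
w = 1.0 / sigma ** 2
X = np.column_stack([np.ones_like(t), t])
WX = w[:, None] * X
beta = np.linalg.solve(X.T @ WX, WX.T @ y)
```

Down-weighting the noisier late-time observations is what gives the WLS fit its efficiency advantage over ordinary least squares here, and the fitted variance law is also what the model's tolerance intervals are built on.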

