Impact of Model Complexity in the Monitoring of Machine Tools Condition Using Volumetric Errors

2020 ◽  
Vol 14 (3) ◽  
pp. 369-379
Author(s):  
Kanglin Xing ◽  
J. R. R. Mayer ◽  
Sofiane Achiche

The scale and master ball artefact (SAMBA) method allows estimating the inter- and intra-axis error parameters as well as volumetric errors (VEs) of a five-axis machine tool by using simple ball artefacts and the machine tool’s own touch-trigger probe. The SAMBA method can use two different machine error models, named after the number of model parameters, i.e., the “13” and “84” machine error models, to estimate the VEs. In this study, we compare these two machine error models when using VE vector directions and values for monitoring the machine tool condition for three cases of machine malfunctions: 1) a C-axis encoder fault, 2) an induced X-axis linear positioning error, and 3) an induced straightness error simulated fault. The results show that the “13” machine error model produces more concentrated VE directions but smaller VE values when compared with the “84” machine error model; furthermore, although both models can recognize the three faults and are effective in monitoring the machine tool condition, the “13” machine error model achieves a better recognition rate of the machine condition. This paper provides guidelines for selecting machine error models for the SAMBA method when using VEs to monitor the machine tool condition.
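The notion of "more concentrated VE directions" can be sketched numerically with spherical statistics: normalize each VE vector and take the mean resultant length (1 = perfectly aligned directions). The sample VE vectors below are hypothetical values for illustration, not data from the study:

```python
import numpy as np

def ve_direction_concentration(ve_vectors):
    """Mean resultant length of the VE unit directions (1 = all aligned)."""
    units = ve_vectors / np.linalg.norm(ve_vectors, axis=1, keepdims=True)
    return np.linalg.norm(units.mean(axis=0))

def ve_mean_magnitude(ve_vectors):
    """Average Euclidean length of the VE vectors."""
    return np.linalg.norm(ve_vectors, axis=1).mean()

# Hypothetical VE vectors (mm) predicted at probed poses by each model
ve_13 = np.array([[0.010, 0.002, 0.001],
                  [0.011, 0.001, 0.002],
                  [0.009, 0.003, 0.001]])   # similar directions: concentrated
ve_84 = np.array([[0.015, 0.010, 0.004],
                  [0.004, 0.014, 0.009],
                  [0.012, 0.003, 0.013]])   # varied directions: scattered

print(ve_direction_concentration(ve_13))   # close to 1
print(ve_direction_concentration(ve_84))   # noticeably smaller
```

A fault that deforms the machine in a systematic way tends to push the VE field toward a common direction, which is why direction concentration is a usable monitoring signal.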

2019 ◽  
Vol 141 (12) ◽  
Author(s):  
Hua-Wei Ko ◽  
Patrick Bazzoli ◽  
J. Adam Nisbett ◽  
Douglas Bristow ◽  
Yujie Chen ◽  
...  

Abstract A parameter identification procedure for identifying the parameters of a volumetric error model of a large machine tool requires hundreds of random volumetric error components in its workspace and thus takes hours of measurement time. This makes the thermal errors of a large machine difficult to track and compensate periodically. This paper demonstrates the application of optimal observation design theories to volumetric error model parameter identification of a large five-axis machine. Optimal designs maximize the amount of information carried in the observations. In this paper, K-optimal designs are applied for the construction of machine-tool error observers by determining locations in the workspace at which 80 components of volumetric errors are to be measured so that the model parameters can be identified in 5% of an 8-h shift. Many optimal designs tend to localize observations at the boundary of the workspace. This leaves large volumes of the workspace inadequately represented, making the identified model inadequate. Therefore, constrained optimization algorithms that force the distribution of observation points in the machine’s workspace are developed. Optimal designs reduce the number of observations in the identification procedure. This opens up the possibility of tracking thermal variations of the volumetric error model with periodic measurements. The design, implementation, and performance of a constrained K-optimal design in tracking the thermal variations of the volumetric error over a 400-min period of operation are also reported. About 70–80% of machine-tool error can be explained using the proposed thermal error modeling methodology.
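The observation-selection idea can be sketched with a greedy heuristic that minimizes the condition number of the selected sensitivity (Jacobian) rows, one common reading of K-optimality. The candidate matrix, its sizes, and the forward-selection heuristic below are assumptions for illustration; the paper's constrained algorithm is not reproduced here:

```python
import numpy as np

def greedy_k_optimal(X_cand, n_select):
    """Forward-greedy selection of n_select rows from the candidate
    sensitivity matrix X_cand (one row per measurable VE component),
    minimizing the condition number of the selected design."""
    n, p = X_cand.shape
    chosen = []
    for _ in range(n_select):
        best_i, best_score = None, None
        for i in range(n):
            if i in chosen:
                continue
            s = np.linalg.svd(X_cand[chosen + [i]], compute_uv=False)
            # Until p rows are chosen, grow the smallest singular value;
            # afterwards, minimize the condition number (K-criterion).
            score = -s.min() if len(chosen) + 1 < p else s.max() / s[p - 1]
            if best_score is None or score < best_score:
                best_i, best_score = i, score
        chosen.append(best_i)
    return chosen

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 4))       # 40 candidate poses, 4 model parameters
picked = greedy_k_optimal(X, 8)    # measure at only 8 of the 40 candidates
```

A well-conditioned design keeps all parameter directions observable, which is the practical point of K-optimality for error-model identification.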


Author(s):  
Le Ma ◽  
Douglas A. Bristow ◽  
Robert G. Landers

New metrology tools, such as laser trackers, are enabling the rapid collection of machine tool geometric error over a wide range of the workspace. Error models fit to this data are used to compensate for high-order geometric errors that were previously challenging to obtain due to limited data sets. However, model fitting accuracy can suffer near the edges of the measurable space, where obstacles and interference of the metrology equipment can make it difficult to collect dense data sets. In some instances, for example when obstacles are permanent fixtures, these locations are difficult to measure but critically important for machining, and thus models need to be accurate at these locations. In this paper, a method is proposed to evaluate the model accuracy for five-axis machine tools at measurement boundaries by characterizing the statistical consistency of the model fit over the workspace. Using a representative machine tool compensation method, the modeled Jacobian matrix is derived and used for characterization. By constructing and characterizing error models of different polynomial orders, it is observed that the function behavior at the boundary and in the unmeasured space is inconsistent with the function behavior in the interior space, and that the inconsistency increases as the polynomial order increases. Also, the further the model is extrapolated into unmeasured space, the more inconsistently the kinematic error model behaves.
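The order-dependent boundary behavior can be illustrated with a toy 1D stand-in for an error map: fit polynomials of increasing order to data measured on part of the travel, then evaluate beyond the measured boundary. All values here are synthetic assumptions, not the paper's Jacobian-based consistency measure:

```python
import numpy as np

# Data are only available on [0, 0.8] of the (normalized) axis travel;
# the model is then evaluated at 1.0, i.e. extrapolated into unmeasured space.
rng = np.random.default_rng(1)
x_meas = np.linspace(0.0, 0.8, 40)
err = 5e-3 * np.sin(3 * x_meas) + rng.normal(scale=2e-4, size=x_meas.size)

fits = {}
for order in (3, 7):
    coeffs = np.polyfit(x_meas, err, order)
    resid = err - np.polyval(coeffs, x_meas)
    fits[order] = (np.sum(resid**2), np.polyval(coeffs, 1.0))
# The higher order always fits the measured interior at least as well,
# but its value at x = 1.0 is unconstrained by data and less trustworthy.
```

This is the basic trade-off the paper quantifies: added polynomial flexibility helps inside the measured space while degrading consistency where no data exist.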


Author(s):  
Hua-Wei Ko ◽  
Shiv G. Kapoor ◽  
Placid M. Ferreira ◽  
Patrick Bazzoli ◽  
J. Adam Nisbett ◽  
...  

A parameter identification procedure for identifying the parameters of a volumetric error model of a large and complex machine tool usually requires a large number of observations of volumetric error components in its workspace. This paper demonstrates the possibility of applying optimal observation/experimental design theories to volumetric error model parameter identification of a large 5-axis machine with one redundant axis. Several designs, such as A-, D- and K-optimal designs, seek to maximize the amount of information carried in the observations made in an experiment. In this paper, we adapt these design approaches in the construction of machine-tool error observers by determining locations in the workspace at which components of volumetric errors must be measured so that the underlying error model parameters can be identified. Many optimal designs tend to localize observations at either the center or the boundary of the workspace. This can leave large volumes of the workspace inadequately represented, making the identified model parameters particularly susceptible to model inadequacy issues. Therefore, we develop constrained optimization algorithms that force the distribution of observation points in the machine’s workspace. Optimal designs provide the possibility of efficiency (a reduced number of observations and hence reduced measurement time) in the identification procedure. This opens up the possibility of tracking thermal variations of the volumetric error model with periodic quick measurements. We report on the design, implementation and performance of a constrained K-optimal design in tracking the thermal variations of the volumetric error over a 5.5-hour period of operation, with measurements being made each hour.


Author(s):  
Sareh Esmaeili ◽  
René Mayer ◽  
Mark Sanders ◽  
Philipp Dahlem ◽  
Kanglin Xing

Abstract Modern CNC machine tools provide lookup tables (LUTs) to enhance the machine tool's precision, but the generation of table entries can be a demanding task. In this paper, the coefficients of the 25 cubic polynomial functions used to generate the LUT entries for a five-axis machine tool are obtained by solving a linear system incorporating a Vandermonde expansion of the nominal control Jacobian. The necessary volumetric errors within the working volume are predicted from the machine's geometric errors estimated by the indirect error identification method based on the on-machine touch probing measurement of a reconfigurable uncalibrated master ball artefact (RUMBA). The proposed scheme is applied to a small Mitsubishi M730 CNC machine. Two different error models are used for modeling the erroneous machine tool, one estimating mainly inter-axis errors and the other including numerous intra-axis errors. The table-based compensation is validated through additional on-machine measurements. Experimental tests demonstrate a significant reduction in volumetric errors and in the effective machine error parameters. The LUTs reduce most of the dominant machine error parameters. It is concluded that, although effective in correcting some geometric errors, the generated LUTs cannot compensate some axis misalignments such as EB(OX)A and EB(OX)Z. The root mean square values of the translational volumetric errors are improved from 87.3, 75.4 and 71.5 µm down to 24.8, 18.8 and 22.1 µm in the X, Y and Z directions, respectively.
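The cubic-polynomial LUT generation step can be sketched as a Vandermonde least-squares solve for a single axis. The axis positions, predicted error values, and table size below are hypothetical; the paper solves 25 such polynomials jointly through the control Jacobian:

```python
import numpy as np

# Fit one cubic e(a) = c0 + c1*a + c2*a^2 + c3*a^3 to predicted
# volumetric-error values at a set of axis positions (mm).
a = np.linspace(-100.0, 100.0, 9)
e = 1e-6 * a**3 - 2e-4 * a + 3e-3           # hypothetical predicted errors

V = np.vander(a, N=4, increasing=True)      # Vandermonde columns [1, a, a^2, a^3]
c, *_ = np.linalg.lstsq(V, e, rcond=None)   # least-squares cubic coefficients

# Sample the fitted cubic at the LUT breakpoints to produce table entries
lut_positions = np.linspace(-100.0, 100.0, 21)
lut_entries = np.polyval(c[::-1], lut_positions)
```

Because the hypothetical data are exactly cubic, the solve recovers the generating coefficients; with real probed data the least-squares fit smooths measurement noise into the table.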


CIRP Annals ◽  
2019 ◽  
Vol 68 (1) ◽  
pp. 555-558 ◽  
Author(s):  
Kanglin Xing ◽  
Xavier Rimpault ◽  
J.R.R. Mayer ◽  
Jean-François Chatelain ◽  
Sofiane Achiche

Author(s):  
Peng Xu ◽  
Benny C. F. Cheung ◽  
Bing Li

Calibration is an important way to improve and guarantee the accuracy of machine tools. This paper presents a systematic approach for position independent geometric errors (PIGEs) calibration of five-axis machine tools based on the product of exponentials (POE) formula. Instead of using 4 × 4 homogeneous transformation matrices (HTMs), it establishes the error model by transforming the 6 × 1 error vectors of rigid bodies between different frames using 6 × 6 adjoint transformation matrices. A stable and efficient error model for the iterative identification of PIGEs should satisfy the requirements of completeness, continuity, and minimality. Since the POE-based error models for five-axis machine tool calibration are naturally complete and continuous, the key issue is to ensure minimality by eliminating the redundant parameters. Three kinds of redundant parameters, which are caused by joint symmetry information, tool-workpiece metrology, and incomplete measuring data, are illustrated and explained in a geometrically intuitive way. Hence, a straightforward process is presented to select the complete and minimal set of PIGEs for five-axis machine tools. Based on the established unified and compact error Jacobian matrices, observability analyses which quantitatively describe the identification efficiency are conducted and compared for different kinds of tool tip deviations obtained from several commonly used measuring devices, including the laser tracker, R-test, and double ball-bar. Simulations are conducted on a five-axis machine tool to illustrate the application of the calibration model. The effectiveness of the model is also verified by experiments on a five-axis machine tool by using a double ball-bar.
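The 6 × 6 adjoint transformation used to move 6 × 1 error vectors between frames can be sketched as follows. The twist ordering (angular; linear) is an assumed convention, and conventions differ between texts:

```python
import numpy as np

def skew(v):
    """3x3 skew-symmetric (cross-product) matrix of a 3-vector."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def adjoint(R, p):
    """6x6 adjoint of the rigid transform (R, p): re-expresses a 6x1
    twist/error vector, ordered (angular; linear), in another frame."""
    Ad = np.zeros((6, 6))
    Ad[:3, :3] = R
    Ad[3:, 3:] = R
    Ad[3:, :3] = skew(p) @ R
    return Ad

# Example: transport an error twist across a pure 1-unit z-translation
Ad = adjoint(np.eye(3), np.array([0.0, 0.0, 1.0]))
```

Working with adjoints keeps the whole 6-component error vector in one linear map, which is what makes the POE error Jacobians compact.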


Environments ◽  
2019 ◽  
Vol 6 (12) ◽  
pp. 124
Author(s):  
Johannes Ranke ◽  
Stefan Meinecke

In the kinetic evaluation of chemical degradation data, degradation models are fitted to the data by varying degradation model parameters to obtain the best possible fit. Today, constant variance of the deviations of the observed data from the model is frequently assumed (error model “constant variance”). Allowing for a different variance for each observed variable (“variance by variable”) has been shown to be a useful refinement. On the other hand, experience gained in analytical chemistry shows that the absolute magnitude of the analytical error often increases with the magnitude of the observed value, which can be explained by an error component which is proportional to the true value. Therefore, kinetic evaluations of chemical degradation data using a two-component error model with a constant component (absolute error) and a component increasing with the observed values (relative error) are newly proposed here as a third possibility. In order to check which of the three error models is most adequate, they have been used in the evaluation of datasets obtained from pesticide evaluation dossiers published by the European Food Safety Authority (EFSA). For quantitative comparisons of the fits, the Akaike information criterion (AIC) was used, as the commonly used error level defined by the FOrum for the Coordination of pesticide fate models and their USe (FOCUS) is based on the assumption of constant variance. A set of fitting routines was developed within the mkin software package that allow for robust fitting of all three error models. Comparisons using parent only degradation datasets, as well as datasets with the formation and decline of transformation products, showed that in many cases, the two-component error model proposed here provides the most adequate description of the error structure.
While it was confirmed that the variance by variable error model often provides an improved representation of the error structure in kinetic fits with metabolites, it could be shown that in many cases, the two-component error model leads to a further improvement. In addition, it can be applied to parent only fits, potentially improving the accuracy of the fit towards the end of the decline curve, where concentration levels are lower.
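The two-component error model can be sketched as a maximum-likelihood fit of sigma(y) = sigma_abs + rsd * y. The mkin package implements this in R; the Python sketch below is an independent illustration with synthetic data and assumed names, and it holds the degradation curve fixed rather than fitting kinetic and error parameters jointly:

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(theta, y_pred, y_obs):
    """Gaussian negative log-likelihood with sigma = sigma_abs + rsd * y_pred."""
    sigma_abs, rsd = np.exp(theta)          # log-parameters keep both sigmas > 0
    sigma = sigma_abs + rsd * y_pred
    return np.sum(np.log(sigma) + 0.5 * ((y_obs - y_pred) / sigma) ** 2)

# Synthetic first-order decline y = 100 * exp(-0.15 t), with noise drawn from
# the same two-component structure (sigma_abs = 0.5, rsd = 0.05)
rng = np.random.default_rng(2)
t = np.linspace(0.0, 30.0, 60)
y_true = 100.0 * np.exp(-0.15 * t)
y_obs = y_true + rng.normal(scale=0.5 + 0.05 * y_true)

res = minimize(neg_log_lik, x0=np.log([1.0, 0.1]),
               args=(y_true, y_obs), method="Nelder-Mead")
sigma_abs_hat, rsd_hat = np.exp(res.x)
```

The log of sigma inside the likelihood is what penalizes over-wide error bands, so the fit balances absolute and relative components instead of inflating one of them.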


2017 ◽  
Author(s):  
Mario R. Hernández-López ◽  
Félix Francés

Abstract. Over the years, Standard Least Squares (SLS) has been the most commonly adopted criterion for the calibration of hydrological models, despite the fact that the errors generally do not fulfill the assumptions made by the SLS method: very often errors are autocorrelated, heteroscedastic, biased and/or non-Gaussian. Similarly to recent papers, which suggest more appropriate models for the errors in hydrological modeling, this paper addresses the challenging problem of jointly estimating hydrological and error model parameters (joint inference) in a Bayesian framework, trying to solve some of the problems found in previous related research. This paper performs a Bayesian joint inference through the application of different inference models, such as the known SLS or WLS and the new GL++ and GL++Bias error models. These inferences were carried out on two lumped hydrological models, which were forced with daily hydrometeorological data from a basin of the MOPEX project. The main finding of this paper is that a joint inference, to be statistically correct, must take into account the joint probability distribution of the state variable to be predicted and its deviation from the observations (the errors). Consequently, the relationship between the marginal and conditional distributions of this joint distribution must be taken into account in the inference process. This relation is defined by two general statistical expressions called the Total Laws (TLs): the Total Expectation and the Total Variance Laws. Only simple error models, such as SLS, do not explicitly need the TLs implementation. An important consequence of enforcing the TLs is the reduction of the degrees of freedom in the inference problem, namely the reduction of the parameter space dimension. This research demonstrates that non-fulfillment of the TLs produces incorrect error and hydrological parameter estimates and unreliable predictive distributions.
The target of a (joint) inference must be to fulfill the error model hypotheses rather than to achieve the best fit to the observations. Consequently, for a given hydrological model, the resulting performance of the prediction, the reliability of its predictive uncertainty, and the robustness of the parameter estimates will be exclusively conditioned by the degree to which the errors fulfill the error model hypotheses.
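The Total Laws invoked here are the standard identities E[Y] = E[E[Y|X]] and Var[Y] = E[Var[Y|X]] + Var[E[Y|X]]. A quick numerical check on a synthetic heteroscedastic example (all distributions and values are assumptions for illustration):

```python
import numpy as np

# Construct a pair (X, Y) with known conditional moments:
# E[Y|X] = X and Var[Y|X] = (0.2 X)^2 (heteroscedastic "observation" noise).
rng = np.random.default_rng(3)
x = rng.gamma(shape=2.0, scale=1.5, size=200_000)   # stand-in state variable
y = x + rng.normal(scale=0.2 * x)

total_expectation_lhs = y.mean()                    # E[Y]
total_expectation_rhs = x.mean()                    # E[E[Y|X]]
total_variance_lhs = y.var()                        # Var[Y]
total_variance_rhs = np.mean((0.2 * x) ** 2) + x.var()
```

The second term of the Total Variance Law, Var[E[Y|X]], is exactly the part an inference loses if it treats the state variable as fixed, which is the dimension-reduction effect the paper describes.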


2012 ◽  
Vol 364 ◽  
pp. 012091 ◽  
Author(s):  
S Sztendel ◽  
C Pislaru ◽  
A P Longstaff ◽  
S Fletcher ◽  
A Myers

2012 ◽  
Vol 152-154 ◽  
pp. 781-787
Author(s):  
Jian Yin ◽  
Ming Li ◽  
Fang Yu Pan

Enhancing the accuracy of machine tools is a key goal of machine tool manufacturers and users. Characterizing the quasi-static errors and then applying software compensation is an important step for accuracy enhancement. The effectiveness of an error compensation scheme relies heavily on the error model. The model must be concise and robust so that it can be applied to any machine tool. The total quasi-static error within the workspace of a five-axis gantry machine tool is composed of geometric, kinematic, and thermal errors. This paper presents an error model which can be used for a practical compensation scheme. Homogeneous transformation matrices, rigid body kinematics, and small-angle approximations are used in this paper for error modeling.
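The HTM-plus-small-angle modeling step can be sketched as follows; the error magnitudes and the two-transform chain are illustrative assumptions, not the paper's gantry model:

```python
import numpy as np

def htm_with_errors(dx, dy, dz, ex, ey, ez):
    """4x4 homogeneous transform for small translational errors (dx, dy, dz)
    and small angular errors (ex, ey, ez), using the first-order small-angle
    approximation cos(e) ~ 1, sin(e) ~ e."""
    return np.array([[1.0, -ez,  ey,  dx],
                     [ ez, 1.0, -ex,  dy],
                     [-ey,  ex, 1.0,  dz],
                     [0.0, 0.0, 0.0, 1.0]])

# Per-axis error HTMs multiply along the kinematic chain, e.g. base -> X -> Y
T = htm_with_errors(5e-3, 0, 0, 0, 1e-5, 0) @ htm_with_errors(0, 2e-3, 0, 1e-5, 0, 0)
tool_pos_error = (T @ np.array([0.0, 0.0, 0.0, 1.0]))[:3]
```

Dropping the second-order products of small errors is what keeps the chained model linear in the error parameters, which in turn is what makes software compensation tractable.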

