volumetric errors
Recently Published Documents


TOTAL DOCUMENTS

56
(FIVE YEARS 15)

H-INDEX

11
(FIVE YEARS 2)

2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Bence Tipary ◽  
Ferenc Gábor Erdős

Purpose: The purpose of this paper is to propose a novel measurement technique and a modelless calibration method for improving the positioning accuracy of a three-axis parallel kinematic machine (PKM). The aim is to present a low-cost calibration alternative for small and medium-sized enterprises, as well as educational and research teams, with no expensive measuring devices at their disposal.
Design/methodology/approach: Using a chessboard pattern on a ground-truth plane, a digital indicator, a two-dimensional eye-in-hand camera and a laser pointer, positioning errors are explored in the machine workspace. With the help of these measurements, interpolation functions are set up per direction, resulting in an interpolation vector function to compensate the volumetric errors in the workspace.
Findings: Based on the proof-of-concept system for the linear-delta PKM, it is shown that using the proposed measurement technique and modelless calibration method, positioning accuracy is significantly improved using simple setups.
Originality/value: In the proposed method, a combination of low-cost devices is applied to improve the three-dimensional positioning accuracy of a PKM. By using the presented tools, the parametric kinematic model is not required; furthermore, the calibration setup is simple, and there is no need for hand–eye calibration or special fixturing in the machine workspace.
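The per-direction interpolation idea can be illustrated with a short sketch: given positioning errors measured at sample points in the workspace, build one interpolant per axis and subtract the interpolated error from each commanded position. The grid, the synthetic error field and the SciPy calls below are illustrative assumptions, not the authors' implementation.

```python
# Sketch of modelless volumetric-error compensation via per-direction interpolation.
# All sample data below are made up for illustration.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical 5x5x5 measurement grid covering a 200 mm cube (mm).
x = y = z = np.linspace(0.0, 200.0, 5)
X, Y, Z = np.meshgrid(x, y, z, indexing="ij")
# Fake measured positioning-error field, one array per direction (mm).
err_xyz = [0.010 * np.sin(X / 50), 0.008 * np.cos(Y / 60), 0.012 * np.sin(Z / 40)]

# One interpolant per direction -> an interpolation vector function e(x, y, z).
interp = [RegularGridInterpolator((x, y, z), e) for e in err_xyz]

def compensate(target):
    """Corrected command = target position minus the interpolated error vector."""
    target = np.atleast_2d(target)
    err = np.column_stack([f(target) for f in interp])
    return (target - err)[0]

print(compensate([100.0, 80.0, 50.0]))
```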


2021 ◽  
Author(s):  
Miro Demol ◽  
Kim Calders ◽  
Hans Verbeeck ◽  
Bert Gielen

Abstract. Background and Aims: Quantifying the Earth's forest aboveground biomass (AGB) is indispensable for effective climate action and for developing forest policy. Yet current allometric scaling models (ASMs) for estimating AGB suffer several drawbacks related to model selection and uncertainties in the traceability of calibration data. Terrestrial laser scanning (TLS) offers a promising non-destructive alternative: tree volume is reconstructed from TLS point clouds with quantitative structure models (QSMs) and converted to AGB using wood basic density. Earlier studies have found overall TLS-derived forest volume estimates to be accurate, but highlighted problems in reconstructing finer branches. Our objective was to evaluate TLS for estimating tree volumes by comparison with reference volumes and with volumes from ASMs.
Methods: We quantified the woody volume of 65 trees in Belgium (77–2,800 L; Pinus sylvestris, Fagus sylvatica, Larix decidua, Fraxinus excelsior) with QSMs and destructive reference measurements. We tested a volume expansion factor (VEF) approach by multiplying the solid and merchantable volumes from QSMs with literature VEF values.
Key Results: Stem volume was reliably estimated with TLS. Total volume was overestimated by +21% using original QSMs, by +9% and -12% using two sets of VEF-augmented QSMs, and by -7.3% using the best available allometric models. The most accurate method differed per site, and the prediction errors of each method varied considerably between sites.
Conclusions: VEF-augmented QSMs were only slightly better than original QSMs for estimating tree volume for common species in temperate forests. Despite satisfactory estimates with ASMs, model choice was a large source of uncertainty, and species-specific models did not always exist. We therefore advocate further improving tree volume reconstructions with QSMs, especially for fine branches, instead of collecting more ground-truth data to calibrate VEF and allometric models. Promising developments such as improved coregistration and smarter filtering approaches are ongoing to further constrain volumetric errors in TLS-derived estimates.
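As a rough illustration of the volume-to-biomass chain described above, the sketch below converts a QSM volume to AGB with a wood basic density and applies a literature-style VEF to a merchantable volume; the numerical values are placeholders, not the study's calibrated figures.

```python
# Illustrative numbers only; the VEF and density values are placeholders.
def vef_augmented_volume(merchantable_volume_l, vef):
    """Scale the QSM solid/merchantable volume by a literature volume expansion factor."""
    return merchantable_volume_l * vef

def agb_from_volume(total_volume_l, wood_basic_density_kg_m3):
    """Aboveground biomass (kg) = tree volume (m^3) * wood basic density (kg/m^3)."""
    return (total_volume_l / 1000.0) * wood_basic_density_kg_m3

v_total = vef_augmented_volume(merchantable_volume_l=950.0, vef=1.25)  # hypothetical tree
print(agb_from_volume(v_total, wood_basic_density_kg_m3=560.0))        # beech-like basic density (illustrative)
```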


2021 ◽  
Author(s):  
Shixiang Wang ◽  
Chi Fai Cheung ◽  
Lingbao Kong

Abstract. In this paper, a fiducial-aided reconfigurable artefact is presented for estimating the volumetric errors of multi-axis machine tools. The artefact uses an adjustable number of standard balls as fiducials to build a 3D artefact that is calibrated on a coordinate measuring machine (CMM). The 3D artefact is reconfigurable in the number and locations of its fiducials according to the characteristics of the workpiece and the machine tool. The developed kinematics of the machine tool is employed to identify the volumetric errors in the working space by comparing the information acquired by on-machine metrology with that obtained on the CMM. Experimental studies are conducted on a five-axis ultra-precision machine tool on which the 3D artefact, composed of five standard spheres, is mounted. Factors including the effect of gravity and measurement repeatability are examined to optimize the geometry of the artefact. The results show that the developed 3D artefact is able to provide error information within the volume occupied by the workpiece.
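A minimal sketch of the underlying comparison, assuming the artefact frame is already aligned to the machine frame: the volumetric error at each fiducial is taken as the difference between the on-machine probed ball centre and its CMM-calibrated position. All coordinates below are invented for illustration.

```python
import numpy as np

# CMM-calibrated ball-centre coordinates of the artefact (mm) and the same
# centres as probed on the machine (mm); both sets are synthetic.
cmm_centres = np.array([[0, 0, 0], [120, 0, 0], [0, 120, 0],
                        [60, 60, 80], [120, 120, 0]], dtype=float)
probed_centres = cmm_centres + np.array([[ 0.004, -0.002,  0.001],
                                         [ 0.007,  0.001, -0.003],
                                         [-0.002,  0.005,  0.002],
                                         [ 0.006, -0.004,  0.004],
                                         [ 0.003,  0.002, -0.001]])

# Volumetric error vector at each fiducial = probed centre - calibrated centre.
ve = probed_centres - cmm_centres
print("per-ball volumetric error magnitude (mm):", np.linalg.norm(ve, axis=1))
```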


Author(s):  
Liang Cheng ◽  
Li Zhang ◽  
Jiangxiong Li ◽  
Yinglin Ke

This paper presents a novel modeling and compensation method for the volumetric errors of a six-axis gantry automated fiber placement (AFP) machine. Based on screw theory, the forward and inverse kinematics models of the AFP machine are established. To improve the accuracy of the inverse kinematics solution, the Paden-Kahan sub-problem method is used to solve the inverse kinematics for the simplified topology of the rotary axes. Error motion twists are used to establish a volumetric error transfer model covering 54 geometric errors, and the geometric error parameters are identified from laser tracker measurement data using the Levenberg-Marquardt method. The explicit formula of the inverse kinematics solution is used to obtain the error compensation for each motion axis, and the G code of the laying path is modified iteratively to realize the compensation of the volumetric errors. By comparing the positions of the tool center point before and after error compensation, the practicability of the volumetric error modeling and iterative compensation methods is verified, and it is shown that the geometric accuracy of the AFP machine can be effectively improved.
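The parameter-identification step can be sketched, in greatly simplified form, as a nonlinear least-squares fit between predicted and laser-tracker-measured tool positions solved with the Levenberg-Marquardt algorithm; the three-parameter toy error model and synthetic data below stand in for the paper's 54-parameter screw-theory model.

```python
import numpy as np
from scipy.optimize import least_squares

# Nominal commanded tool-centre-point positions (mm) and corresponding
# laser-tracker measurements (mm); all data are synthetic.
rng = np.random.default_rng(1)
nominal = rng.uniform(0, 1000, size=(30, 3))
true_params = np.array([0.02, -0.015, 1e-5])          # toy offsets/scale stand-ins
measured = nominal + np.column_stack([
    np.full(30, true_params[0]),
    np.full(30, true_params[1]),
    true_params[2] * nominal[:, 0],                    # a simple X-dependent Z error
])

def residuals(p):
    """Difference between predicted and measured positions for the toy 3-parameter model."""
    pred = nominal + np.column_stack([np.full(30, p[0]),
                                      np.full(30, p[1]),
                                      p[2] * nominal[:, 0]])
    return (pred - measured).ravel()

# Levenberg-Marquardt identification of the error parameters.
fit = least_squares(residuals, x0=np.zeros(3), method="lm")
print(fit.x)   # should recover values close to true_params
```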


2021 ◽  
Author(s):  
Elhadi Mohsen Hassan Abdalla ◽  
Vincent Pons ◽  
Virginia Stovin ◽  
Simon De-Ville ◽  
Elizabeth Fassman-Beck ◽  
...  

Abstract. Green roofs are increasingly popular measures to permanently reduce or delay stormwater runoff. Conceptual and physically based hydrological models are powerful tools for estimating their performance. However, physically based models are associated with a high level of complexity and computational cost, while the parameters of conceptual models are difficult to obtain when measurements are not available for calibration. The main objective of this study was to examine the potential of machine learning (ML) for simulating runoff from green roofs in order to estimate their hydrological performance. Four machine learning methods, Artificial Neural Network (ANN), M5 model tree, Long Short-Term Memory (LSTM) and k-Nearest Neighbour (kNN), were applied to simulate stormwater runoff from sixteen extensive green roofs located in four Norwegian cities across different climatic zones. The potential of these ML methods for estimating green roof retention was assessed by comparing their simulations with a proven conceptual retention model. Furthermore, the transferability of ML models between the different green roofs in the study was tested to investigate the potential of using ML models as a tool for planning and design purposes. The ML models yielded low volumetric errors that were comparable with those of the conceptual retention model, which indicates good performance in estimating annual retention. The ML models also yielded satisfactory results (NSE > 0.5) on both training and validation data, which indicates an ability to estimate green roof detention. The variation in ML model performance between the cities was larger than that between the different roof configurations, which was attributed to the differing climatic characteristics of the four cities. ML models transferred between cities with similar rainfall event characteristics (Bergen–Sandnes, Trondheim–Oslo) could yield satisfactory modelling performance (NSE > 0.5, |PBIAS| 
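For reference, the two evaluation metrics quoted above (NSE and the percent bias of runoff volume) can be computed as in the sketch below; the runoff series are made up, and the PBIAS sign convention varies between studies.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(obs, sim):
    """Percent bias of simulated runoff volume relative to observed volume."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * (np.sum(sim) - np.sum(obs)) / np.sum(obs)

observed_runoff  = [0.0, 0.20, 1.10, 0.80, 0.30, 0.00]   # mm per time step, invented
simulated_runoff = [0.0, 0.25, 1.00, 0.70, 0.35, 0.05]
print(nse(observed_runoff, simulated_runoff), pbias(observed_runoff, simulated_runoff))
```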


Author(s):  
Sareh Esmaeili ◽  
René Mayer ◽  
Mark Sanders ◽  
Philipp Dahlem ◽  
Kanglin Xing

Abstract. Modern CNC machine tools provide lookup tables (LUTs) to enhance the machine tool's precision, but generating the table entries can be a demanding task. In this paper, the coefficients of the 25 cubic polynomial functions used to generate the LUT entries for a five-axis machine tool are obtained by solving a linear system incorporating a Vandermonde expansion of the nominal control Jacobian. The necessary volumetric errors within the working volume are predicted from the machine's geometric errors, which are estimated by an indirect error identification method based on on-machine touch-probing measurement of a reconfigurable uncalibrated master ball artefact (RUMBA). The proposed scheme is applied to a small Mitsubishi M730 CNC machine. Two different error models are used to model the erroneous machine tool, one estimating mainly inter-axis errors and the other including numerous intra-axis errors. The table-based compensation is validated through additional on-machine measurements. Experimental tests demonstrate a significant reduction in volumetric errors and in the effective machine error parameters; the LUTs reduce most of the dominant machine error parameters. It is concluded that, although effective in correcting some geometric errors, the generated LUTs cannot compensate certain axis misalignments such as EB(OX)A and EB(OX)Z. The root mean square values of the translational volumetric errors are reduced from 87.3, 75.4 and 71.5 µm to 24.8, 18.8 and 22.1 µm in the X, Y and Z directions, respectively.
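A reduced, single-axis sketch of the LUT-generation idea, assuming synthetic error data: cubic polynomial coefficients are obtained from a Vandermonde system by least squares and then sampled at the lookup-table nodes. This is not the authors' 25-function, Jacobian-based formulation.

```python
import numpy as np

# Axis positions where the error component is known (mm) and the corresponding
# volumetric-error component to be compensated (um); data are synthetic.
axis_pos = np.linspace(0.0, 300.0, 12)
error_um = (0.05 * axis_pos - 1e-4 * axis_pos**2 + 2e-7 * axis_pos**3
            + np.random.default_rng(2).normal(0, 0.5, 12))

# Cubic fit through a Vandermonde expansion: columns [1, x, x^2, x^3].
V = np.vander(axis_pos, N=4, increasing=True)
coeffs, *_ = np.linalg.lstsq(V, error_um, rcond=None)

# Sample the fitted polynomial at the lookup-table nodes to fill the LUT.
lut_nodes = np.linspace(0.0, 300.0, 31)
lut_entries = np.vander(lut_nodes, N=4, increasing=True) @ coeffs
print(lut_entries[:5])
```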


Author(s):  
Zaur A. Alderov ◽  
Evgeny V. Rozengauz ◽  
Denis Nesterov

One of the most widely used ways to follow up oncological disease is the estimation of changes in lesion size. Volumetry is one of the most accurate approaches to lesion size estimation. However, despite its high sensitivity, volumetry errors can reach 60%, which significantly limits the applicability of the method. The purpose of this study was to estimate the effect of reconstruction parameters on volumetry error.
Materials and methods: 32 patients with pulmonary metastases underwent CT scanning, and 326 foci were detected. All 326 pulmonary lesions were segmented, and the volumetry error was estimated for every lesion for each combination of slice thickness and reconstruction kernel. The effect was measured with linear regression analysis.
Results: Systematic and stochastic errors are affected by slice thickness, reconstruction kernel, lesion position and lesion diameter. The FC07 kernel and larger slice thicknesses are associated with high systematic error. Both systematic and stochastic errors decrease as lesion size increases. Intrapulmonary lesions have the lowest error regardless of the reconstruction parameters. A linear regression model was created to predict the error rate; its standard error was 6.7%. The deviation of the model residuals correlated with slice thickness, reconstruction kernel, lesion position and lesion diameter.
Conclusion: The systematic error depends on the lesion diameter, slice thickness and reconstruction kernel, and it can be estimated using the proposed model with a 6% error. The stochastic error mainly depends on lesion size.
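A hypothetical sketch of the kind of linear regression described: the systematic volumetry error is regressed on slice thickness, a sharp-kernel flag and lesion diameter. Variables, coefficients and data are all invented for illustration.

```python
import numpy as np

# Hypothetical design matrix: intercept, slice thickness (mm),
# sharp-kernel flag (1 for an FC07-like kernel), and lesion diameter (mm).
rng = np.random.default_rng(3)
thickness = rng.choice([1.0, 2.0, 3.0], size=80)
sharp_kernel = rng.integers(0, 2, size=80)
diameter = rng.uniform(4.0, 30.0, size=80)
X = np.column_stack([np.ones(80), thickness, sharp_kernel, diameter])

# Fake "systematic volumetry error" in percent, generated from made-up coefficients.
error_pct = X @ np.array([5.0, 4.0, 6.0, -0.4]) + rng.normal(0, 3.0, 80)

# Ordinary least-squares fit and the residual standard error of the model.
beta, *_ = np.linalg.lstsq(X, error_pct, rcond=None)
resid = error_pct - X @ beta
print(beta, resid.std(ddof=X.shape[1]))
```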


2020 ◽  
Vol 14 (3) ◽  
pp. 369-379
Author(s):  
Kanglin Xing ◽  
J. R. R. Mayer ◽  
Sofiane Achiche

The scale and master ball artefact (SAMBA) method allows the inter- and intra-axis error parameters, as well as the volumetric errors (VEs), of a five-axis machine tool to be estimated using simple ball artefacts and the machine tool's own touch-trigger probe. The SAMBA method can use two different machine error models, named after their number of model parameters, i.e., the “13” and “84” machine error models, to estimate the VEs. In this study, we compare these two machine error models when using VE vector directions and magnitudes to monitor the machine tool condition for three cases of machine malfunction: 1) a C-axis encoder fault, 2) an induced X-axis linear positioning error, and 3) an induced straightness error simulated fault. The results show that the “13” machine error model produces more concentrated VE directions but smaller VE magnitudes than the “84” machine error model; furthermore, although both models can recognize the three faults and are effective in monitoring the machine tool condition, the “13” machine error model achieves a better recognition rate of the machine condition. This paper provides guidelines for selecting machine error models for the SAMBA method when using VEs to monitor the machine tool condition.
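One way to compare the two error models' VE vectors, sketched below with invented data, is to summarize each set by its direction concentration (mean resultant length of the unit vectors) and its mean magnitude; the actual SAMBA processing is not shown here.

```python
import numpy as np

def direction_concentration(ve):
    """Mean resultant length of the unit VE vectors: 1.0 means perfectly aligned directions."""
    unit = ve / np.linalg.norm(ve, axis=1, keepdims=True)
    return np.linalg.norm(unit.mean(axis=0))

rng = np.random.default_rng(4)
ve_13 = np.array([0.010, 0.002, 0.001]) + rng.normal(0, 0.001, (20, 3))  # tighter directions
ve_84 = np.array([0.012, 0.004, 0.003]) + rng.normal(0, 0.004, (20, 3))  # more scatter

for name, ve in [("13", ve_13), ("84", ve_84)]:
    print(name, direction_concentration(ve), np.linalg.norm(ve, axis=1).mean())
```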


2020 ◽  
Vol 108 (1-2) ◽  
pp. 299-312
Author(s):  
Yu-Wen Chen ◽  
Yao-Fu Huang ◽  
Kuo-Tsai Wu ◽  
Sheng-Jye Hwang ◽  
Huei-Huang Lee

2020 ◽  
Author(s):  
Zhengeng Yang ◽  
Hongshan Yu ◽  
Shunxin Cao ◽  
Wenyan Jia ◽  
Qi Xu ◽  
...  

Abstract. Background: It is well known that many chronic diseases are associated with an unhealthy diet. Although improving diet is critical, adopting a healthy diet is difficult despite its benefits being well understood. Technology is needed that allows dietary intake to be assessed accurately and easily in real-world settings, so that effective interventions to manage overweight, obesity and related chronic diseases can be developed. In recent years, new wearable imaging and computational technologies have emerged. These technologies are capable of objective and passive dietary assessment with a much simpler procedure than traditional questionnaires. However, a critical task is to estimate the portion size (in this case, the food volume) from a digital image. Currently, this task is very challenging because the volumetric information in two-dimensional images is incomplete, and the estimation involves a great deal of imagination, beyond the capacity of traditional image processing algorithms.
Method: A novel artificial intelligence (AI) system is proposed to mimic the thinking of dietitians, who use a set of common objects as gauges (e.g., a teaspoon, a golf ball, a cup, and so on) to estimate portion size. Specifically, our human-mimetic system "mentally" gauges the volume of food using a set of internal reference volumes that have been learned previously. At the output, our system produces a vector of probabilities of the food with respect to the internal reference volumes. The estimation is then completed by an "intelligent guess", implemented as an inner product between the probability vector and the reference volume vector.
Dataset: The datasets utilized for model validation include 1) two virtual food datasets produced by computer simulation, and 2) two real-world food datasets collected by us.
Results: The average relative volumetric errors of our AI method were less than 9% on both virtual datasets, and 11.7% and 20.1%, respectively, on the two real-world food datasets.
Discussion: We discuss 1) the use of AI to estimate the "relative volume" of food on a plate, 2) the case of multiple foods on a plate, and 3) the potential of AI in advancing nutrition science.
Conclusion: Our AI system is able to use the same food volume estimation strategy as humans do.
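The final "intelligent guess" step reduces to an inner product, as in the sketch below; the reference volumes and the probability vector are illustrative stand-ins for the learned internal references.

```python
import numpy as np

# Hypothetical internal reference volumes (mL): teaspoon, golf ball, tennis ball, cup, bowl.
reference_volumes = np.array([5.0, 40.0, 150.0, 240.0, 500.0])

# Probabilities the model might assign to a food item with respect to those references (invented).
probabilities = np.array([0.02, 0.10, 0.55, 0.28, 0.05])

# "Intelligent guess": estimated volume = inner product of the probability and reference vectors.
estimated_volume_ml = probabilities @ reference_volumes
print(estimated_volume_ml)
```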

