Calibration and Validation of a Cone Crusher Model with Industrial Data

Minerals ◽  
2021 ◽  
Vol 11 (11) ◽  
pp. 1256
Author(s):  
Robson A. Duarte ◽  
André S. Yamashita ◽  
Moisés T. da Silva ◽  
Luciano P. Cota ◽  
Thiago A. M. Euzébio

This paper reports the calibration and validation of a cone crusher model using industrial data. Usually, there are three calibration parameters in the condensed breakage function; by contrast, in this work, every entry of the lower triangular breakage function matrix is considered a calibration parameter. The calibration problem is cast as an optimization problem based on the least squares method. The results show that the calibrated model is able to fit the validation datasets closely, as seen from the low values of the objective function. Another significant advantage of the proposed approach is that the model can be calibrated on data that are usually available from industrial operation; no additional laboratory tests are required. Calibration and validation tests on datasets collected from two different mines show that the calibrated model is a strong candidate for use in various dynamic simulation applications, such as control system design, equipment sizing, operator training, and optimization of crushing circuits.
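A minimal sketch of this calibration setup, assuming a toy single-pass breakage model and made-up size-distribution data (the paper's full crusher model is not reduced here): every strictly lower-triangular entry of the breakage matrix is treated as a free parameter and fitted by least squares.

```python
import numpy as np
from scipy.optimize import least_squares

N = 5  # number of particle size classes (illustrative)

# Free parameters: every entry of the strictly lower triangle of the
# breakage matrix B, so broken mass only moves to finer size classes.
tril_idx = np.tril_indices(N, k=-1)

def unpack(theta):
    """Rebuild the lower-triangular breakage matrix from the parameter vector."""
    B = np.zeros((N, N))
    B[tril_idx] = theta
    return B

def predict_product(theta, feed):
    """Hypothetical single-pass breakage model standing in for the full
    crusher model: mass redistributed by B plus the unbroken feed fraction."""
    B = unpack(theta)
    return B @ feed + feed * (1.0 - B.sum(axis=0))

def residuals(theta, feeds, products):
    """Stacked residuals over all measured feed/product size distributions."""
    return np.concatenate([predict_product(theta, f) - p
                           for f, p in zip(feeds, products)])

# Size distributions as measured in routine plant operation (made-up numbers).
feeds = [np.array([0.40, 0.30, 0.15, 0.10, 0.05])]
products = [np.array([0.10, 0.20, 0.25, 0.25, 0.20])]

theta0 = np.full(tril_idx[0].size, 0.1)
sol = least_squares(residuals, theta0, bounds=(0.0, 1.0),
                    args=(feeds, products))
B_calibrated = unpack(sol.x)
```

Because only routine feed and product measurements enter the residual, this formulation needs no laboratory breakage tests, which is the practical point of the approach.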

Author(s):  
Manuel Arias Chao ◽  
Darrel S. Lilley ◽  
Peter Mathé ◽  
Volker Schloßhauer

Calibration and uncertainty quantification for gas turbine (GT) performance models is a key activity for GT manufacturers. The adjustment between the numerical model and measured GT data is obtained with a calibration technique. Since both the calibration parameters and the measurement data are uncertain, the calibration process is intrinsically stochastic. Traditional approaches to calibrating a numerical GT model are deterministic, so the uncertainty remaining in the calibrated GT model is not clearly quantified. However, there is a business need to provide the probability of GT performance predictions at tested or untested conditions. Furthermore, a GT performance prediction might be required for a new GT model for which no test data are available yet. In this case, quantifying the uncertainty of the baseline GT, upon which the new development is based, and propagating the design uncertainty for the new GT are required for risk assessment and decision making. Using a GT model as a benchmark, the calibration problem is discussed and several possible model calibration methodologies are presented. Uncertainty quantification based on both a conventional least squares method and a Bayesian approach is presented and discussed. For the general nonlinear model, a fully Bayesian approach is conducted, and the posterior of the calibration problem is computed by Markov chain Monte Carlo simulation using a Metropolis-Hastings sampling scheme. When the calibration parameters are considered dependent on operating conditions, a novel formulation of the GT calibration problem is presented in terms of a Gaussian process regression problem.
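A minimal sketch of the fully Bayesian step, assuming a toy two-parameter stand-in for the GT performance model, a Gaussian likelihood, and a weak Gaussian prior (all assumptions): the posterior is sampled with a random-walk Metropolis-Hastings scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-in for the GT performance model: y = model(theta, x).
def model(theta, x):
    return theta[0] * x + theta[1] * np.exp(-x)

# Synthetic "test data" at known operating points, with measurement noise.
x_obs = np.linspace(0.0, 2.0, 20)
theta_true = np.array([1.5, 0.8])
sigma = 0.05                                   # assumed noise std
y_obs = model(theta_true, x_obs) + sigma * rng.normal(size=x_obs.size)

def log_post(theta):
    """Gaussian likelihood plus a weak zero-mean Gaussian prior."""
    resid = y_obs - model(theta, x_obs)
    return -0.5 * np.sum(resid**2) / sigma**2 - 0.5 * np.sum(theta**2) / 10.0**2

# Random-walk Metropolis-Hastings over the calibration parameters.
n_iter, step = 20000, 0.02
theta, lp = np.zeros(2), log_post(np.zeros(2))
chain = np.empty((n_iter, 2))
for i in range(n_iter):
    prop = theta + step * rng.normal(size=2)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject
        theta, lp = prop, lp_prop
    chain[i] = theta

posterior = chain[n_iter // 2:]                # discard burn-in
print(posterior.mean(axis=0), posterior.std(axis=0))
```

The posterior spread, not just the point estimate, is what supports probability statements about predictions at tested or untested conditions.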


Author(s):  
Yuriy Mihailovich Andrjejev

The well-known problem of calibrating an arbitrary robotic manipulator, formulated in its most general form, is considered. To solve the direct kinematics problem, a universal analytical description of the kinematic scheme is proposed as an alternative to the Denavit-Hartenberg method; it accounts for possible errors in the manufacture and assembly of robot parts. A universal description of the errors in the orientation of the axes of the articulated joints of the links is also proposed. On the basis of this description, the direct and inverse kinematics problems of robots as spatial mechanisms can be solved while accounting for distortions of dimensions, of the positions of the joint axes, and of the zero positions of the joint rotation angles. The manipulator calibration problem is formulated as a least squares problem, and analytical formulas for its objective function are obtained. Expressions for the gradient vector and the Hessian of the objective function, required by the direct, Gauss-Newton, and Levenberg-Marquardt algorithms, are derived by analytical differentiation using the special computer algebra system KiDyM, and C++ procedures for calculating the elements of the gradient and the Hessian are generated automatically. For a projected six-degree-of-freedom angular robot manipulator, simulation results are presented for the solution of its calibration problem, that is, the determination of 36 unknown angular and linear errors. The solutions of the calibration problem are compared for 64 and 729 simulated experiments, in which the generalized coordinates (the joint angles) took the values ±90° and -90°, 0°, +90°, respectively.
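A minimal sketch of the least squares calibration formulation, using a hypothetical planar two-link arm with four error parameters in place of the paper's 36, and scipy's numerical Levenberg-Marquardt in place of the analytically generated gradient and Hessian.

```python
import numpy as np
from scipy.optimize import least_squares

# Nominal link lengths [m] for the illustrative planar two-link arm.
L1_NOM, L2_NOM = 0.5, 0.4

def forward(q, err):
    """Forward kinematics with error parameters err = (dL1, dL2, dq1, dq2):
    link-length errors and joint-zero offsets (an illustrative subset)."""
    dL1, dL2, dq1, dq2 = err
    a1, a2 = q[0] + dq1, q[1] + dq2
    x = (L1_NOM + dL1) * np.cos(a1) + (L2_NOM + dL2) * np.cos(a1 + a2)
    y = (L1_NOM + dL1) * np.sin(a1) + (L2_NOM + dL2) * np.sin(a1 + a2)
    return np.array([x, y])

# Simulated calibration experiments: joint angles on a -90/0/+90 degree grid,
# with tool positions "measured" under a known true error vector plus noise.
rng = np.random.default_rng(1)
err_true = np.array([0.003, -0.002, 0.01, -0.005])
qs = [np.array([s1 * np.pi / 2, s2 * np.pi / 2])
      for s1 in (-1, 0, 1) for s2 in (-1, 0, 1)]
meas = [forward(q, err_true) + 1e-5 * rng.normal(size=2) for q in qs]

def residuals(err):
    """Stacked position residuals over all simulated experiments."""
    return np.concatenate([forward(q, err) - m for q, m in zip(qs, meas)])

sol = least_squares(residuals, np.zeros(4), method='lm')  # Levenberg-Marquardt
print(sol.x)   # recovered angular and linear error parameters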


2021 ◽  
Vol 12 ◽  
Author(s):  
Bahram Parvinian ◽  
Ramin Bighamian ◽  
Christopher George Scully ◽  
Jin-Oh Hahn ◽  
Pras Pathmanathan

Subject-specific mathematical models for prediction of physiological parameters such as blood volume, cardiac output, and blood pressure in response to hemorrhage have been developed. In silico studies using these models may provide an effective tool to generate pre-clinical safety evidence for medical devices and help reduce the size and scope of the animal studies performed prior to the initiation of human trials. To achieve this goal, the credibility of the mathematical model must be established for the purpose of pre-clinical in silico testing. In this work, the credibility of a subject-specific mathematical model of blood volume kinetics, intended to predict the blood volume response to hemorrhage and fluid resuscitation during fluid therapy, was evaluated. A workflow was used in which: (i) the foundational properties of the mathematical model, such as structural identifiability, were evaluated; (ii) practical identifiability was evaluated both pre- and post-calibration, with the pre-calibration results used to determine an optimal split of the experimental data into calibration and validation datasets; (iii) the uncertainty in model parameters and the experimental uncertainty were quantified for each subject; and (iv) the uncertainty was propagated through the blood volume kinetics model and its predictive capability was evaluated via validation tests. The mathematical model was found to be structurally identifiable. Pre-calibration identifiability analysis led to splitting the 180 min of time series data per subject into a 50 min calibration window and a 130 min validation window. The average root mean squared error of the mathematical model was 12.6% using the calibration window of (0 min, 50 min). Practical identifiability was established post-calibration after fixing one of the parameters to a nominal value. In the validation tests, 82% and 75% of the subject-specific mathematical models correctly predicted the blood volume response when predictive capability was evaluated at 180 min and at the time when the amount of infused fluid equaled the fluid loss, respectively.
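A minimal sketch of steps (ii)-(iv) of this workflow, assuming a hypothetical one-compartment volume-kinetics stand-in rather than the paper's model: the time series is split into a 50 min calibration window and a 130 min validation window, and a percentage RMSE is computed on the held-out window.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

# Hypothetical one-compartment stand-in for the blood volume kinetics model:
# dV/dt = a*u(t) - b*V, with V the fractional blood volume change and u(t)
# an assumed infusion-rate profile. All names and numbers are illustrative.
t = np.arange(0.0, 181.0, 1.0)                # minutes
u = np.where(t < 30.0, 1.0, 0.0)              # infuse for the first 30 min

def simulate(params):
    a, b = params
    rhs = lambda ti, V: a * np.interp(ti, t, u) - b * V
    return solve_ivp(rhs, (t[0], t[-1]), [0.0], t_eval=t, max_step=1.0).y[0]

# Synthetic per-subject measurements with noise, in place of real data.
rng = np.random.default_rng(2)
V_meas = simulate((0.8, 0.02)) + 0.01 * rng.normal(size=t.size)

cal = t <= 50.0        # 50 min calibration window, matching the paper's split
val = ~cal             # remaining 130 min held out for validation

fit = least_squares(lambda p: simulate(p)[cal] - V_meas[cal], x0=[1.0, 0.05])
V_pred = simulate(fit.x)

# Validation error normalized by the measurement range; one possible
# definition of a percentage RMSE (the paper's exact metric may differ).
rmse_pct = 100.0 * np.sqrt(np.mean((V_pred[val] - V_meas[val]) ** 2)) \
           / (V_meas.max() - V_meas.min())
print(fit.x, rmse_pct)
```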


Author(s):  
Chenzhao Li ◽  
Sankaran Mahadevan

Model calibration and validation are two activities in system model development, and both make use of test data. A limited testing budget creates the challenge of test resource allocation, i.e., how to optimize the number of calibration and validation tests to be conducted. Test resource allocation is conducted before any actual test is performed and therefore needs to use synthetic data. This paper develops a test resource allocation methodology that makes the system response prediction "robust" to the test outcome, i.e., insensitive to the variability in test outcomes, so that consistent system response predictions can be achieved under different test outcomes. This paper analyzes the uncertainty sources in the generation of synthetic data under different test conditions, and concludes that the robustness objective can be achieved if the contribution of model parameter uncertainty to the synthetic data is maximized. Global sensitivity analysis (the Sobol' index) is used to assess this contribution and to formulate an optimization problem that achieves the desired consistent system response prediction. A simulated annealing algorithm is applied to solve this optimization problem. The proposed method is suitable either when only model calibration tests are considered or when both calibration and validation tests are considered. Two numerical examples are provided to demonstrate the proposed approach.
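A minimal sketch of the simulated annealing step, with a made-up surrogate standing in for the Sobol'-index-based robustness objective: the algorithm redistributes a fixed test budget across candidate test conditions to maximize the surrogate score.

```python
import numpy as np

rng = np.random.default_rng(3)

def robustness(alloc):
    """Made-up surrogate for the contribution of model parameter uncertainty
    (a Sobol'-type index) as a function of tests run at 3 candidate
    conditions; diminishing returns per condition, purely illustrative."""
    weights = np.array([0.5, 0.3, 0.2])
    return float(np.sum(weights * (1.0 - np.exp(-0.7 * alloc))))

budget = 6                          # total number of tests allowed
alloc = np.array([budget, 0, 0])    # start: all tests at condition 0
score = robustness(alloc)
T = 1.0                             # annealing temperature
for it in range(2000):
    # Propose moving one test between conditions (preserves the budget).
    new = alloc.copy()
    i, j = rng.choice(3, size=2, replace=False)
    if new[i] == 0:
        continue
    new[i] -= 1
    new[j] += 1
    s = robustness(new)
    # Accept improvements always; accept worse moves with Boltzmann probability.
    if s > score or rng.uniform() < np.exp((s - score) / T):
        alloc, score = new, s
    T *= 0.998                      # geometric cooling schedule
print(alloc, score)
```

In the actual methodology the surrogate would be replaced by Sobol' indices estimated from the synthetic-data model, but the discrete annealing loop has this shape.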


Author(s):  
Joshua Mullins ◽  
Sankaran Mahadevan ◽  
Angel Urbina

Economic factors and experimental limitations often lead to sparse and/or imprecise data used for the calibration and validation of computational models. This paper addresses resource allocation for calibration and validation experiments, in order to maximize their effectiveness within given resource constraints. When observation data are used for model calibration, the quality of the inferred parameter descriptions is directly affected by the quality and quantity of the data. This paper characterizes parameter uncertainty within a probabilistic framework, which enables the uncertainty to be systematically reduced with additional data. The validation assessment is also uncertain in the presence of sparse and imprecise data; therefore, this paper proposes an approach for quantifying the resulting validation uncertainty. Since calibration and validation uncertainty affect the prediction of interest, the proposed framework explores the decision of cost versus importance of data in terms of the impact on the prediction uncertainty. Often, calibration and validation tests may be performed for different input scenarios, and this paper shows how the calibration and validation results from different conditions may be integrated into the prediction. Then, a constrained discrete optimization formulation that selects the number of tests of each type (calibration or validation at given input conditions) is proposed. The proposed test selection methodology is demonstrated on a microelectromechanical system (MEMS) example.
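A minimal sketch of the constrained discrete optimization, with hypothetical per-test costs and a made-up surrogate for the prediction uncertainty: feasible allocations of calibration and validation tests are enumerated under a budget constraint and the best is selected.

```python
import itertools
import numpy as np

# Assumed relative costs per test type; both numbers are illustrative.
cost = {'cal': 1.0, 'val': 2.0}
budget = 10.0

def pred_uncertainty(n_cal, n_val):
    """Made-up surrogate: calibration data shrink parameter uncertainty
    roughly like 1/sqrt(n); validation data shrink the model-assessment
    uncertainty similarly. Stands in for the framework's propagated
    prediction uncertainty."""
    return 1.0 / np.sqrt(1 + n_cal) + 0.5 / np.sqrt(1 + n_val)

# Enumerate all affordable (n_cal, n_val) pairs and pick the one that
# minimizes the surrogate prediction uncertainty.
best = min(
    ((n_cal, n_val)
     for n_cal, n_val in itertools.product(range(11), repeat=2)
     if n_cal * cost['cal'] + n_val * cost['val'] <= budget),
    key=lambda nv: pred_uncertainty(*nv),
)
print(best, pred_uncertainty(*best))
```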


Author(s):  
K.L. More ◽  
R.A. Lowden ◽  
T.M. Besmann

Silicon nitride possesses an attractive combination of thermo-mechanical properties which makes it a strong candidate material for many structural ceramic applications. Unfortunately, many of the conventional processing techniques used to produce Si3N4, such as hot-pressing, sintering, and hot isostatic pressing, utilize significant amounts of densification aids (Y2O3, Al2O3, MgO, etc.), which ultimately lower the utilization temperature to well below that of pure Si3N4 and also decrease the oxidation resistance. Chemical vapor deposition (CVD) is an alternative processing method for producing pure Si3N4. However, deposits made at temperatures below ~1200°C are usually amorphous, and at slightly higher temperatures the deposition of crystalline material requires extremely low deposition rates (~5 μm/h). Niihara and Hirai deposited crystalline α-Si3N4 at 1400°C at a deposition rate of ~730 μm/h. Hirai and Hayashi successfully lowered the CVD temperature for the growth of crystalline Si3N4 by adding TiCl4 vapor to the SiCl4, NH3, and H2 reactants, which resulted in the growth of α-Si3N4 with small amounts of TiN at temperatures as low as 1250°C.

