Embedded Error Bayesian Calibration of Thermal Decomposition of Organic Materials

Author(s):  
Ari L Frankel ◽  
Ellen Wagman ◽  
Ryan Keedy ◽  
Brent C. Houchens ◽  
Sarah Scott

Abstract Organic materials are an attractive choice for structural components due to their light weight and versatility. However, because they decompose at low temperatures relative to traditional materials, they pose a safety risk due to fire and loss of structural integrity. To quantify this risk, analysts use chemical kinetics models to describe the material pyrolysis and oxidation using thermogravimetric analysis. This process requires the calibration of many model parameters to closely match experimental data. Previous efforts in this field have largely been limited to finding a single best-fit set of parameters even though the experimental data may be very noisy. Furthermore, the chemical kinetics models are often simplified representations of the true decomposition process. The simplification induces model-form errors that the fitting process cannot capture. In this work we propose a methodology for calibrating decomposition models to thermogravimetric analysis data that accounts for uncertainty in the model form and experimental data simultaneously. The methodology is applied to the decomposition of a carbon fiber epoxy composite with a three-stage reaction network and Arrhenius kinetics. The results show a good overlap between the model predictions and thermogravimetric analysis data. Uncertainty bounds capture deviations of the model from the data. The calibrated parameter distributions are also presented. The distributions may be used in forward propagation of uncertainty in models that leverage this material.
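The abstract does not reproduce the model equations, so the following is only a minimal sketch of the kind of forward model such a calibration targets: a three-stage, first-order Arrhenius decomposition simulated over a constant-heating-rate TGA run. All parameter values (pre-exponential factors, activation energies, mass fractions, heating rate) are hypothetical placeholders, not the authors' calibrated values.

```python
import numpy as np
from scipy.integrate import solve_ivp

R = 8.314           # gas constant, J/(mol K)
beta = 10.0 / 60.0  # heating rate, K/s (10 K/min, placeholder)
T0 = 300.0          # initial temperature, K

# (pre-exponential factor [1/s], activation energy [J/mol], mass fraction released)
stages = [(1e8, 1.2e5, 0.10),
          (1e10, 1.6e5, 0.25),
          (1e12, 2.0e5, 0.15)]

def rhs(t, alpha):
    """d(alpha_i)/dt for each stage; alpha_i is the reacted fraction of stage i."""
    T = T0 + beta * t
    return [A * np.exp(-Ea / (R * T)) * (1.0 - a)
            for (A, Ea, _), a in zip(stages, alpha)]

t_end = (1100.0 - T0) / beta                      # integrate up to ~1100 K
sol = solve_ivp(rhs, (0.0, t_end), [0.0, 0.0, 0.0], dense_output=True)

t = np.linspace(0.0, t_end, 500)
alpha = sol.sol(t)
mass = 1.0 - sum(f * a for (_, _, f), a in zip(stages, alpha))  # normalized TGA signal
```

A Bayesian calibration with embedded error would then place priors on the (A, Ea) pairs plus an error term and sample their posterior against the measured mass-loss curves; the specific embedded-error formulation used by the authors is not given in the abstract.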

2019 ◽  
Vol 29 (4) ◽  
pp. 480-495
Author(s):  
Olga G. Kantor ◽  
Semen I. Spivak ◽  
Nikolay D. Morozkin

Introduction. A model of a given structure should be identified based on the results of solving the problem of parametric identification, and it should reproduce the experimental data as closely as possible. The concept of “best” is not strictly defined, so the procedure for identifying such a model follows a natural logic: first determine a set of acceptable models, then select the best among them. If the set of acceptable models is large, determining the best one can be time-consuming. It is therefore important to develop parametric identification methods that, already at the stage of constructing the set of acceptable models, take into account the qualitative properties of the identified dependence that are of interest to the researcher. Materials and Methods. The set of applicable methods in parametric identification problems depends largely on the nature of the experimental data. For example, probabilistic and statistical methods are useful if the observed factors are random and follow some probability distribution. If the conditions for applying such methods are not met, an approach based on identifying the boundaries within which the model parameters must lie in order to achieve specified levels of quality characteristics may be useful. Results. A procedure for the parametric identification of models is formalized. It is based on maximum permissible parameter estimates and allows one to determine the set of parameter values that guarantee the required qualitative level of description of the experimental data, including from the standpoint of analyzing how changes in the required accuracy of reproduction affect that set. The developed method is demonstrated on the construction of a one-factor model of chemical kinetics. Discussion and Conclusion. The obtained value of the chemical reaction rate constant is shown to provide acceptable accuracy, adequacy, and stability of the identified kinetic model according to the introduced criteria. At the same time, the calculations revealed information that can serve as a basis for planning experiments aimed at improving the accuracy of the experimental data.
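As an illustrative sketch only (the abstract does not give the actual formulation of the maximum permissible estimates), the idea of identifying the set of acceptable parameter values can be shown for a one-factor, first-order kinetic model c(t) = c0·exp(-k·t): keep every rate constant k whose worst-case deviation from the data stays within a prescribed tolerance. The data values and tolerance below are made up.

```python
import numpy as np

t_data = np.array([0.0, 1.0, 2.0, 4.0, 8.0])       # time, arbitrary units
c_data = np.array([1.00, 0.61, 0.37, 0.14, 0.02])  # measured concentration (synthetic)
c0 = 1.0
tolerance = 0.05                                    # required accuracy level

k_grid = np.linspace(0.01, 2.0, 2000)
max_dev = np.array([np.max(np.abs(c0 * np.exp(-k * t_data) - c_data))
                    for k in k_grid])

admissible = k_grid[max_dev <= tolerance]           # set of acceptable models
if admissible.size:
    print(f"admissible k in [{admissible.min():.3f}, {admissible.max():.3f}]")
    print(f"best-fit k = {k_grid[np.argmin(max_dev)]:.3f}")
```

Tightening the tolerance shrinks the admissible interval, which is the kind of sensitivity to reproduction-accuracy requirements the abstract alludes to.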


2019 ◽  
Vol 26 ◽  
pp. 228
Author(s):  
C. Fakiola ◽  
I. Karakasis ◽  
I. Sideris ◽  
A. Khaliel ◽  
T. J. Mertzimekis

About 35 nuclides which lie on the neutron-deficient side of the isotopic chart cannot be created by the two basic nucleosynthetic processes, the s- and the r-process. Due to scarce experimental data and the vast complexity of the reaction network involved, cross sections and reaction rates are estimated theoretically using the Hauser–Feshbach statistical model. In the present work, theoretical calculations of cross sections of radiative α-capture reactions on neutron-deficient erbium and xenon isotopes are presented in an attempt to make predictions inside the astrophysically relevant energy window (Gamow window). The particular reactions are predicted to be sensitive branchings in the γ-process path. The most recent versions of the TALYS (v1.9) and Fresco codes were employed for all calculations, initially focusing on investigating the influence of the default eight (8) α-nucleus optical potential models of TALYS on the reaction cross sections. The theoretical results of both codes are compared and, for the reactions where experimental data exist in the literature, the optical model parameters were adjusted to best describe the data and were subsequently used for estimating (α,γ) reaction cross sections. Predictions for the (α,n) reaction channels have also been calculated and studied.
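A scan over the built-in α-nucleus optical potentials of TALYS is typically driven by small keyword-based input files. The sketch below generates one such input per potential; the target, the incident energy, and the exact keyword spellings (projectile, element, mass, energy, alphaomp) are assumptions to be checked against the TALYS manual for the version in use, not a verbatim reproduction of the authors' inputs.

```python
from pathlib import Path

# Sketch: one TALYS input file per alpha-nucleus optical model potential.
target = ("xe", 124)   # hypothetical neutron-deficient target
energy = 10.0          # incident alpha energy in MeV (placeholder; a full run
                       # would cover the Gamow window with many energies)

for omp in range(1, 9):
    text = "\n".join([
        "projectile a",
        f"element {target[0]}",
        f"mass {target[1]}",
        f"energy {energy}",
        f"alphaomp {omp}",
        "",
    ])
    Path(f"talys_a{omp}.inp").write_text(text)
```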


2019 ◽  
Author(s):  
Michael A. Kochen ◽  
Carlos F. Lopez

Abstract Mathematical models of biochemical reaction networks are central to the study of dynamic cellular processes and hypothesis generation that informs experimentation and validation. Unfortunately, model parameters are often not available and sparse experimental data leads to challenges in model calibration and parameter estimation. This can in turn lead to unreliable mechanistic interpretations of experimental data and the generation of poorly conceived hypotheses for experimental validation. To address this challenge, we evaluate whether a Bayesian-inspired probability-based approach that incorporates available information regarding reaction network topology and parameters can be used to qualitatively explore hypothetical biochemical network execution mechanisms in the context of limited available data. We test our approach on a model of extrinsic apoptosis execution to identify preferred signal execution modes across varying conditions. Apoptosis signal processing can take place either through a mitochondria-independent (Type I) mode or a mitochondria-dependent (Type II) mode. We first show that in silico knockouts, represented by model subnetworks, successfully identify the most likely execution mode for specific concentrations of key molecular regulators. We then show that changes in molecular regulator concentrations alter the overall reaction flux through the network by shifting the primary route of signal flow between the direct caspase and mitochondrial pathways. Our work thus demonstrates that probabilistic approaches can be used to explore the qualitative dynamic behavior of model biochemical systems even with missing or sparse data.
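The following toy sketch (not the authors' apoptosis model) illustrates the general flavor of such a probability-based comparison: sample rate constants from broad priors, simulate two competing subnetworks (a direct "Type I" route versus a two-step "Type II" route), and score how often each reproduces a qualitative observation. All species, rates, and thresholds are invented for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)
t_obs, threshold = 5.0, 0.5   # observation time and qualitative activation threshold

def simulate(route, k1, k2):
    # y = [initiator signal, intermediate (mitochondrial step), effector caspase]
    def rhs(t, y):
        ini, mid, eff = y
        if route == "typeI":
            return [-k1 * ini, 0.0, k1 * ini]             # direct activation
        return [-k1 * ini, k1 * ini - k2 * mid, k2 * mid]  # via intermediate
    sol = solve_ivp(rhs, (0.0, t_obs), [1.0, 0.0, 0.0])
    return sol.y[2, -1]

scores = {}
for route in ("typeI", "typeII"):
    k1, k2 = 10 ** rng.uniform(-1, 1, size=(2, 200))       # log-uniform priors
    ok = [simulate(route, a, b) > threshold for a, b in zip(k1, k2)]
    scores[route] = np.mean(ok)

print(scores)  # relative plausibility of each execution mode under the priors
```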


1992 ◽  
Vol 23 (2) ◽  
pp. 89-104 ◽  
Author(s):  
Ole H. Jacobsen ◽  
Feike J. Leij ◽  
Martinus Th. van Genuchten

Breakthrough curves of Cl and 3H2O were obtained during steady unsaturated flow in five lysimeters containing an undisturbed coarse sand (Orthic Haplohumod). The experimental data were analyzed in terms of the classical two-parameter convection-dispersion equation and a four-parameter two-region type physical nonequilibrium solute transport model. Model parameters were obtained by both curve fitting and time moment analysis. The four-parameter model provided a much better fit to the data for three soil columns, but performed only slightly better for the two remaining columns. The retardation factor for Cl was about 10% less than for 3H2O, indicating some anion exclusion. For the four-parameter model the average immobile water fraction was 0.14 and the Peclet numbers of the mobile region varied between 50 and 200. Time moment analysis proved to be a useful tool for quantifying the breakthrough curve (BTC), although the moments were found to be sensitive to experimental scattering in the measured data at larger times. Also, fitted parameters described the experimental data better than moment-generated parameter values.
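For reference, the two transport models mentioned are usually written as follows for a nonreactive tracer (standard forms; the notation, and the treatment of retardation for sorbing solutes, may differ from the paper):

```latex
% Classical convection-dispersion equation (fitted parameters: D and R)
R \frac{\partial c}{\partial t}
  = D \frac{\partial^{2} c}{\partial x^{2}} - v \frac{\partial c}{\partial x}

% Two-region (mobile-immobile) physical nonequilibrium model
\theta_{m} \frac{\partial c_{m}}{\partial t}
  + \theta_{im} \frac{\partial c_{im}}{\partial t}
  = \theta_{m} D_{m} \frac{\partial^{2} c_{m}}{\partial x^{2}}
  - q \frac{\partial c_{m}}{\partial x},
\qquad
\theta_{im} \frac{\partial c_{im}}{\partial t} = \alpha \,(c_{m} - c_{im})
```

In dimensionless form the four fitted parameters of the second model are typically the Peclet number of the mobile region, the retardation factor, the mobile water fraction, and the mass transfer coefficient α.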


Author(s):  
Afshin Anssari-Benam ◽  
Andrea Bucchi ◽  
Giuseppe Saccomandi

Abstract The application of a newly proposed generalised neo-Hookean strain energy function to the inflation of incompressible rubber-like spherical and cylindrical shells is demonstrated in this paper. The pressure (P) – inflation (λ or v) relationships are derived and presented for four shells: thin- and thick-walled spherical balloons, and thin- and thick-walled cylindrical tubes. Characteristics of the inflation curves predicted by the model for the four considered shells are analysed and the critical values of the model parameters for exhibiting the limit-point instability are established. The application of the model to extant experimental datasets procured from studies spanning the 19th to the 21st century is demonstrated, showing favourable agreement between the model and the experimental data. The capability of the model to capture the two characteristic instability phenomena in the inflation of rubber-like materials, namely the limit-point and inflation-jump instabilities, is made evident from both the theoretical analysis and curve-fitting approaches presented in this study. A comparison with the predictions of the Gent model for the considered data is also presented, and it is shown that our model provides improved fits. Given the simplicity of the model, its ability to fit a wide range of experimental data and capture both limit-point and inflation-jump instabilities, we propose the application of our model to the inflation of rubber-like materials.
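The generalised strain energy function itself is not reproduced in the abstract; for orientation, the classical thin-walled spherical balloon relation has the following standard form, shown here together with the ordinary (non-generalised) neo-Hookean special case:

```latex
% Thin-walled sphere, undeformed radius R, thickness H, circumferential stretch \lambda:
P(\lambda) = \frac{H}{R\,\lambda^{2}}\,\frac{\mathrm{d}\hat{W}}{\mathrm{d}\lambda},
\qquad \hat{W}(\lambda) = W\!\left(\lambda,\lambda,\lambda^{-2}\right)

% Ordinary neo-Hookean case, W = \tfrac{\mu}{2}(I_{1}-3):
P(\lambda) = \frac{2\mu H}{R}\left(\lambda^{-1} - \lambda^{-7}\right)
```

The limit-point instability discussed in the abstract corresponds to the pressure maximum where dP/dλ = 0 on such a curve.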


Author(s):  
Julija Kazakeviciute ◽  
James Paul Rouse ◽  
Davide Focatiis ◽  
Christopher Hyde

Small specimen mechanical testing is an exciting and rapidly developing field in which fundamental deformation behaviours can be observed from experiments performed on comparatively small amounts of material. These methods are particularly useful when there is limited source material to facilitate a sufficient number of standard specimen tests, if any at all. Such situations include the development of new materials or when performing routine maintenance/inspection studies of in-service components, requiring that material conditions are updated with service exposure. The potentially more challenging loading conditions and complex stress states experienced by small specimens, in comparison with standard specimen geometries, have led to a tendency for these methods to be used in ranking studies rather than for fundamental material parameter determination. Classifying a specimen as ‘small’ can be subjective, and in the present work the focus is to review testing methods that utilise specimens with characteristic dimensions of less than 50 mm. By doing this, observations made here will be relevant to industrial service monitoring problems, wherein small samples of material are extracted and tested from operational components in such a way that structural integrity is not compromised. Whilst the majority of recent small specimen test technique development has focused on the determination of creep behaviour/properties as well as on sub-size tensile testing, attention is given here to small specimen testing methods for determining specific tensile, fatigue, fracture and crack growth properties. These areas are currently underrepresented in published reviews. The suitability of specimens and methods is discussed here, along with associated advantages and disadvantages.


1978 ◽  
Vol 100 (1) ◽  
pp. 20-24 ◽  
Author(s):  
R. H. Rand

A one-dimensional, steady-state, constant temperature model of diffusion and absorption of CO2 in the intercellular air spaces of a leaf is presented. The model includes two geometrically distinct regions of the leaf interior, corresponding to palisade and spongy mesophyll tissue, respectively. Sun, shade, and intermediate light leaves are modeled by varying the thicknesses of these two regions. Values of the geometric model parameters are obtained by comparing geometric properties of the model with experimental data of other investigators found from dissection of real leaves. The model provides a quantitative estimate of the extent to which the concentration of gaseous CO2 varies locally within the leaf interior.
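The abstract does not give the governing equations; a one-region caricature of such a diffusion-absorption balance (not the paper's two-region palisade/spongy model) would be steady one-dimensional diffusion with first-order volumetric uptake of CO2, with a prescribed concentration at the stomatal side (z = 0) and zero flux at the far wall (z = L):

```latex
D \frac{\mathrm{d}^{2} c}{\mathrm{d} z^{2}} = k\,c ,
\qquad
c(z) = c_{s}\,\frac{\cosh\!\big((L - z)/\ell\big)}{\cosh\!\big(L/\ell\big)},
\qquad \ell = \sqrt{D/k}
```

The decay length ℓ indicates how strongly the gaseous CO2 concentration can vary across the leaf interior, which is the quantity the model is used to estimate.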


Author(s):  
Aniruddha Choudhary ◽  
Ian T. Voyles ◽  
Christopher J. Roy ◽  
William L. Oberkampf ◽  
Mayuresh Patil

Our approach to the Sandia Verification and Validation Challenge Problem is to use probability bounds analysis (PBA) based on probabilistic representation for aleatory uncertainties and interval representation for (most) epistemic uncertainties. The nondeterministic model predictions thus take the form of p-boxes, or bounding cumulative distribution functions (CDFs) that contain all possible families of CDFs that could exist within the uncertainty bounds. The scarcity of experimental data provides little support for treatment of all uncertain inputs as purely aleatory uncertainties and also precludes significant calibration of the models. We instead seek to estimate the model form uncertainty at conditions where the experimental data are available, then extrapolate this uncertainty to conditions where no data exist. The modified area validation metric (MAVM) is employed to estimate the model form uncertainty, which is important because the model involves significant simplifications (of both a geometric and a physical nature) of the true system. The results of the verification and validation processes are treated as additional interval-based uncertainties applied to the nondeterministic model predictions, on the basis of which the failure prediction is made. Based on the method employed, we estimate the probability of failure to be as large as 0.0034, concluding that the tanks are unsafe.
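A p-box of the kind described is often constructed with a double-loop sampling scheme: the outer loop samples the interval-valued epistemic inputs, the inner loop propagates the aleatory distributions, and the bounding CDFs are the pointwise envelope of the resulting empirical CDFs. The sketch below illustrates only this mechanism; the response function, the interval, and the distributions are placeholders, not the challenge problem's tank model.

```python
import numpy as np

rng = np.random.default_rng(1)

def response(a, e):
    return a + e                     # stand-in for the real model prediction

e_lo, e_hi = 0.5, 1.5                # epistemic interval (placeholder)
x_grid = np.linspace(-3.0, 6.0, 400)
cdf_lo = np.ones_like(x_grid)
cdf_hi = np.zeros_like(x_grid)

for _ in range(50):                  # outer loop: epistemic samples from the interval
    e = rng.uniform(e_lo, e_hi)
    y = response(rng.normal(0.0, 1.0, size=5000), e)   # inner loop: aleatory sampling
    cdf = np.searchsorted(np.sort(y), x_grid, side="right") / y.size
    cdf_lo = np.minimum(cdf_lo, cdf)
    cdf_hi = np.maximum(cdf_hi, cdf)

# [cdf_lo, cdf_hi] bounds the family of CDFs; model-form and numerical
# uncertainties (e.g. from the MAVM) would widen these bounds further.
```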


2018 ◽  
Vol 37 (1) ◽  
pp. 544-557 ◽  
Author(s):  
Alejandra Saffe ◽  
Anabel Fernandez ◽  
Germán Mazza ◽  
Rosa Rodriguez

The use of energy from biomass is becoming more common worldwide. This energy source has several benefits that promote its acceptance: it is bio-renewable, non-toxic, and biodegradable. To predict its behavior as a fuel during thermal treatment, its characterization is necessary. The experimental determination of ultimate analysis data requires special instrumentation, while proximate analysis data can be obtained easily using common equipment, although the required time is long. In this work, a methodology based on thermogravimetric analysis, curve deconvolution, and empirical correlations is applied to characterize different regional agro-industrial wastes, determining the higher heating value and the contents of moisture, volatile matter, fixed carbon, ash, carbon, hydrogen, oxygen, lignin, cellulose, and hemicellulose. The obtained results are similar to those obtained with standard techniques, showing the accuracy of the proposed method and its wide application range. This methodology allows the main parameters required for industrial operation to be determined in only one step, saving time.
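The abstract does not state which empirical correlations were used; as one widely cited example, the Channiwala-Parikh correlation estimates the higher heating value (MJ/kg) from the ultimate and ash analysis in wt% on a dry basis. The composition in the usage example is made up.

```python
def hhv_channiwala_parikh(C, H, S, O, N, ash):
    """Higher heating value in MJ/kg from wt% composition (dry basis)."""
    return (0.3491 * C + 1.1783 * H + 0.1005 * S
            - 0.1034 * O - 0.0151 * N - 0.0211 * ash)

# Example: a typical lignocellulosic composition (illustrative numbers only)
print(round(hhv_channiwala_parikh(C=47.0, H=6.0, S=0.1, O=40.0, N=0.5, ash=5.0), 1))
# -> about 19.2 MJ/kg
```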

