Bayesian Information Fusion of Multimodality Nondestructive Measurements for Probabilistic Mechanical Property Estimation

Author(s):  
Jie Chen ◽  
Yongming Liu

Abstract Missing data occur when no value is available for a variable in an observation. In this research, the Bayesian data augmentation method is adopted and implemented for prediction with missing data. The data augmentation is carried out through Bayesian inference, with the data assumed to follow a multivariate normal distribution. Gibbs sampling is used to draw posterior simulations of the joint distribution of the unknown parameters and unobserved quantities: the missing elements of the data are sampled conditional on the observed elements. The distributions of the model parameters and of the variables with missing data can then be obtained for reliability analysis. Two examples illustrate the engineering application of Bayesian inference with missing data. The first predicts the yield strength of aging pipelines by fusing incomplete surface information; predictive performance is compared among the direct surface indentation technique, linear regression with complete data, and Bayesian inference with missing data. The second predicts the fatigue life of corroded steel reinforcing bars from an incomplete input dataset, and the predicted fatigue lives are compared with experimental data. Both examples demonstrate that the Bayesian method handles the missing-data problem properly and shows good predictive performance.
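To make the sampling scheme concrete, the sketch below alternates between drawing each row's missing entries from their conditional normal distribution and refreshing the distribution parameters from the completed data. This is a minimal sketch assuming a multivariate normal model; the function name is illustrative, and the parameter-update step is simplified (a full Gibbs sampler would also draw the mean and covariance from their posterior, e.g. a normal-inverse-Wishart), so it is not the paper's implementation.

```python
import numpy as np

def gibbs_impute(X, n_iter=200, rng=None):
    """Alternate between imputing missing entries and refreshing (mu, Sigma)."""
    rng = rng or np.random.default_rng(0)
    X = X.copy()
    miss = np.isnan(X)
    # crude starting values: column means in the missing cells
    X[miss] = np.take(np.nanmean(X, axis=0), np.where(miss)[1])
    n, p = X.shape
    for _ in range(n_iter):
        mu = X.mean(axis=0)
        Sigma = np.cov(X, rowvar=False) + 1e-8 * np.eye(p)
        for i in range(n):
            m = miss[i]
            if not m.any():
                continue
            o = ~m
            # conditional normal of missing given observed entries
            S_oo = Sigma[np.ix_(o, o)]
            S_mo = Sigma[np.ix_(m, o)]
            cond_mu = mu[m] + S_mo @ np.linalg.solve(S_oo, X[i, o] - mu[o])
            cond_S = Sigma[np.ix_(m, m)] - S_mo @ np.linalg.solve(S_oo, S_mo.T)
            X[i, m] = rng.multivariate_normal(cond_mu, cond_S)
        # A full Gibbs sampler would redraw (mu, Sigma) from their posterior
        # here; for brevity they are refreshed from the completed data above.
    return X

# Tiny demo on a synthetic trivariate normal dataset with values missing at random
rng = np.random.default_rng(0)
Z = rng.multivariate_normal([0, 1, 2],
                            [[1, .6, .3], [.6, 1, .5], [.3, .5, 1]], 300)
Z[rng.uniform(size=Z.shape) < 0.15] = np.nan
Z = Z[~np.isnan(Z).all(axis=1)]            # keep rows with at least one observation
print(np.round(gibbs_impute(Z).mean(axis=0), 2))
```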

Computation ◽  
2021 ◽  
Vol 9 (8) ◽  
pp. 91
Author(s):  
Ziheng Zhang ◽  
Nan Chen

Parameter estimation of complex nonlinear turbulent dynamical systems using only partially observed time series is a challenging topic. The nonlinearity and partial observations often impede using closed analytic formulae to recover the model parameters. In this paper, an exact path-wise sampling method is developed and incorporated into a Bayesian Markov chain Monte Carlo (MCMC) algorithm, in the spirit of data augmentation, to efficiently estimate the parameters in a rich class of nonlinear and non-Gaussian turbulent systems using partial observations. This path-wise sampling method exploits closed analytic formulae to sample the trajectories of the unobserved variables, which avoids the numerical errors of general sampling approaches and significantly increases the overall efficiency of parameter estimation. The unknown parameters and the missing trajectories are estimated in an alternating fashion in an adaptive MCMC iteration algorithm with rapid convergence. It is shown on the noisy Lorenz 63 model and a stochastically coupled FitzHugh–Nagumo model that the new algorithm is very skillful in estimating the parameters of highly nonlinear turbulent models. The model with the estimated parameters succeeds in recovering the nonlinear and non-Gaussian features of the truth, including capturing the intermittency and extreme events, in both test examples.
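The alternating structure of the algorithm (sample the missing trajectory given the parameters, then update the parameters given the completed data) can be sketched on a toy linear system where the latent series can be drawn exactly. The model, proposal scale, and all names below are illustrative assumptions loosely mirroring the paper's path-wise sampler, not its actual formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
S = 0.5  # noise level

# Toy model: x is observed; y is a latent companion series driven by x.
def simulate(a, n=200):
    x, y = np.zeros(n), np.zeros(n)
    for k in range(n - 1):
        x[k + 1] = a * x[k] + S * rng.standard_normal()
        y[k + 1] = a * y[k] + x[k] + S * rng.standard_normal()
    return x, y

def sample_trajectory(a, x):
    # Exact draw of the latent series given the parameter and observed x
    # (possible here because x does not depend on y).
    y = np.zeros_like(x)
    for k in range(len(x) - 1):
        y[k + 1] = a * y[k] + x[k] + S * rng.standard_normal()
    return y

def log_lik(a, x, y):
    rx = x[1:] - a * x[:-1]
    ry = y[1:] - a * y[:-1] - x[:-1]
    return -0.5 * (np.sum(rx ** 2) + np.sum(ry ** 2)) / S ** 2

a_true = 0.8
x_obs, _ = simulate(a_true)
a = 0.2                                            # initial guess
for _ in range(2000):
    y = sample_trajectory(a, x_obs)                # augmentation step
    a_prop = a + 0.05 * rng.standard_normal()      # random-walk proposal
    if np.log(rng.uniform()) < log_lik(a_prop, x_obs, y) - log_lik(a, x_obs, y):
        a = a_prop                                 # Metropolis accept/reject
print(f"estimated a = {a:.2f} (truth {a_true})")
```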


Author(s):  
Zhijian Hou ◽  
Ming Qu ◽  
Zhirui Wang

The performance of a cooling coil unit (CCU) directly influences the performance of a heating, ventilating and air conditioning (HVAC) system. In this paper, a dynamic CCU model is obtained by identifying the unknown parameters of an existing effectiveness model. Information from five different operating conditions is used to identify the five model parameters through an optimization method. Unlike the existing effectiveness model, the identified model is determined solely by the chilled-water flow rate, the temperature and humidity of the return air, and the supply chilled-water temperature, without requiring geometric specifications, which is very convenient in real engineering applications. The model was validated against five different experimental conditions on a CCU. The experimental results show that the identified model retains high accuracy despite changes in the temperature and flow rate of the chilled water.
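The identification step can be sketched by fitting the parameters of a generic effectiveness-style coil model to a handful of synthetic operating conditions with a least-squares optimizer. The model form, variable names, and numbers below are assumptions for illustration; this is not the paper's effectiveness model.

```python
import numpy as np
from scipy.optimize import least_squares

def coil_model(theta, u):
    # Stand-in effectiveness-style model: supply-air temperature from the
    # chilled-water flow m_w, return-air temperature T_a and humidity w_a,
    # and supply chilled-water temperature T_w.
    a, b, c, d, e = theta
    m_w, T_a, w_a, T_w = u.T
    eff = 1.0 - np.exp(-a * m_w ** b)            # effectiveness vs. water flow
    return T_a - eff * (c * (T_a - T_w) + d * w_a) + e

rng = np.random.default_rng(1)
u = np.column_stack([rng.uniform(0.5, 2.0, 5),   # five operating conditions
                     rng.uniform(24.0, 28.0, 5),
                     rng.uniform(8.0, 12.0, 5),
                     rng.uniform(6.0, 9.0, 5)])
theta_true = np.array([1.2, 0.8, 0.9, 0.1, 0.3])
T_meas = coil_model(theta_true, u) + 0.05 * rng.standard_normal(5)

fit = least_squares(lambda th: coil_model(th, u) - T_meas, x0=np.ones(5))
print("identified parameters:", np.round(fit.x, 3))
```

With five measured conditions and five unknowns the fit is exactly determined up to measurement noise; in practice more conditions than parameters would make the identification better posed.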


2017 ◽  
Vol 14 (134) ◽  
pp. 20170340 ◽  
Author(s):  
Aidan C. Daly ◽  
Jonathan Cooper ◽  
David J. Gavaghan ◽  
Chris Holmes

Bayesian methods are advantageous for biological modelling studies due to their ability to quantify and characterize posterior variability in model parameters. When Bayesian methods cannot be applied, due either to non-determinism in the model or limitations on system observability, approximate Bayesian computation (ABC) methods can be used to similar effect, despite producing inflated estimates of the true posterior variance. Owing to generally differing application domains, there are few studies comparing Bayesian and ABC methods, and thus there is little understanding of the properties and magnitude of this uncertainty inflation. To address this problem, we present two popular strategies for ABC sampling that we have adapted to perform exact Bayesian inference, and compare them on several model problems. We find that one sampler was impractical for exact inference due to its sensitivity to a key normalizing constant, and additionally highlight sensitivities of both samplers to various algorithmic parameters and model conditions. We conclude with a study of the O'Hara–Rudy cardiac action potential model to quantify the uncertainty amplification resulting from employing ABC using a set of clinically relevant biomarkers. We hope that this work serves to guide the implementation and comparative assessment of Bayesian and ABC sampling techniques in biological models.
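The variance inflation discussed above is easy to reproduce with the simplest ABC scheme, rejection sampling on a toy problem whose exact posterior is known. This is a minimal sketch (the tolerances, summary statistic, and prior are illustrative), not the adapted samplers studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(2.0, 1.0, 50)
s_obs = data.mean()                        # observed summary statistic

def abc_rejection(eps, n_keep=500):
    kept = []
    while len(kept) < n_keep:
        theta = rng.uniform(-5.0, 5.0)     # draw from a flat prior
        sim = rng.normal(theta, 1.0, 50)   # simulate a dataset
        if abs(sim.mean() - s_obs) < eps:  # accept if summaries are close
            kept.append(theta)
    return np.array(kept)

# The exact posterior sd here is 1/sqrt(50), about 0.14; ABC inflates it for
# loose tolerances and approaches the exact value as eps shrinks.
for eps in (0.5, 0.1, 0.05):
    print(f"eps={eps}: ABC posterior sd = {abc_rejection(eps).std():.3f}")
```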


2021 ◽  
Vol 11 (15) ◽  
pp. 6998
Author(s):  
Qiuying Li ◽  
Hoang Pham

Many NHPP software reliability growth models (SRGMs) have been proposed over the past 40 years, but most have modeled the fault detection process (FDP) in one of two ways. The first is to ignore the fault correction process (FCP), i.e., to assume faults are removed instantaneously once the failures they cause are detected. In real software development this is not realistic, since fault removal takes time: faults cannot always be removed at once, and detected failures become increasingly difficult to correct as testing progresses. The second is to model the fault correction process through the time delay between fault detection and fault correction, where the delay has been assumed to be a constant, a function of time, or a random variable following some distribution. In this paper, some useful approaches to modeling the dual fault detection and correction processes are discussed. Dependencies between the fault counts of the two processes are considered instead of a fault-correction time delay. A model is proposed that integrates the fault-detection and fault-correction processes and incorporates a fault introduction rate and a testing coverage rate into the software reliability evaluation. The model parameters are estimated using the least squares estimation (LSE) method. The descriptive and predictive performance of the proposed model and of existing NHPP SRGMs is investigated on three real datasets using four criteria. The results show that the new model yields significantly better reliability estimation and prediction.
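For readers unfamiliar with NHPP SRGM fitting, the sketch below estimates the two parameters of the classic Goel-Okumoto mean value function m(t) = a(1 - exp(-bt)) by least squares on synthetic cumulative fault counts. It stands in for the paper's dual detection/correction model, which has more parameters but is fitted in the same LSE fashion.

```python
import numpy as np
from scipy.optimize import curve_fit

def mvf(t, a, b):
    # Goel-Okumoto mean value function: expected cumulative faults by time t
    return a * (1.0 - np.exp(-b * t))

rng = np.random.default_rng(2)
t = np.arange(1, 21)                                  # testing weeks
y = mvf(t, 100.0, 0.15) + rng.normal(0, 2, t.size)    # synthetic fault counts

(a_hat, b_hat), _ = curve_fit(mvf, t, y, p0=(50.0, 0.1))
sse = np.sum((mvf(t, a_hat, b_hat) - y) ** 2)
print(f"a = {a_hat:.1f}, b = {b_hat:.3f}, SSE = {sse:.1f}")
```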


2021 ◽  
Vol 4 (1) ◽  
Author(s):  
Nadin Ulrich ◽  
Kai-Uwe Goss ◽  
Andrea Ebert

Abstract Today, more and more data are freely available. Building on these big datasets, deep neural networks (DNNs) are rapidly gaining relevance in computational chemistry. Here, we explore the potential of DNNs to predict chemical properties from chemical structures. As an example we selected the octanol-water partition coefficient (log P), which plays an essential role in environmental chemistry and toxicology as well as in chemical analysis. The predictive performance of the developed DNN is good, with an rmse of 0.47 log units on the test dataset and an rmse of 0.33 on an external dataset from the SAMPL6 challenge. To achieve this, we trained the DNN using data augmentation that considers all potential tautomeric forms of the chemicals. We further demonstrate how DNN models can help curate the log P dataset by identifying potential errors, and we address limitations of the dataset itself.
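A minimal sketch of the modeling setup: a small feed-forward network regressing a property onto structure-derived descriptors. scikit-learn's MLPRegressor and random count features stand in for the paper's DNN and molecular representations, and the tautomer-based data augmentation is noted but not reproduced here.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(3)
X = rng.integers(0, 10, size=(2000, 20)).astype(float)  # e.g. fragment counts
w = rng.normal(0.0, 0.3, 20)
y = X @ w + 0.2 * rng.standard_normal(2000)             # synthetic "log P"
# The paper's augmentation would add rows for each tautomeric form of a
# molecule, all labeled with the same measured log P (not shown here).

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                   random_state=0).fit(X_tr, y_tr)
rmse = mean_squared_error(y_te, net.predict(X_te)) ** 0.5
print(f"test rmse = {rmse:.2f} log units")
```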


Energies ◽  
2021 ◽  
Vol 14 (9) ◽  
pp. 2402
Author(s):  
David S. Ching ◽  
Cosmin Safta ◽  
Thomas A. Reichardt

Bayesian inference is used to calibrate a bottom-up home PLC network model with unknown loads and wires at frequencies up to 30 MHz. A network topology with over 50 parameters is calibrated using global sensitivity analysis and transitional Markov Chain Monte Carlo (TMCMC). The sensitivity-informed Bayesian inference computes Sobol indices for each network parameter and applies TMCMC to calibrate the most sensitive parameters for a given network topology. A greedy random search with TMCMC is used to refine the discrete random variables of the network. This results in a model that can accurately compute the transfer function despite noisy training data and a high dimensional parameter space. The model is able to infer some parameters of the network used to produce the training data, and accurately computes the transfer function under extrapolative scenarios.
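The sensitivity-ranking step can be illustrated with a first-order Sobol estimator (the Saltelli pick-freeze scheme) on a toy three-parameter model; in the paper, a ranking of this kind decides which network parameters TMCMC calibrates. The model and sample sizes below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def model(x):
    # Toy stand-in for a transfer-function magnitude with three parameters
    return np.sin(x[:, 0]) + 0.5 * x[:, 1] ** 2 + 0.1 * x[:, 2]

n, d = 20000, 3
A = rng.uniform(-1, 1, (n, d))              # two independent sample matrices
B = rng.uniform(-1, 1, (n, d))
fA, fB = model(A), model(B)
var = np.var(np.concatenate([fA, fB]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                     # swap in coordinate i from B
    S_i = np.mean(fB * (model(ABi) - fA)) / var   # Saltelli (2010) estimator
    print(f"S_{i + 1} = {S_i:.2f}")
```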


2020 ◽  
Vol 70 (1) ◽  
pp. 145-161 ◽  
Author(s):  
Marnus Stoltz ◽  
Boris Baeumer ◽  
Remco Bouckaert ◽  
Colin Fox ◽  
Gordon Hiscott ◽  
...  

Abstract We describe a new and computationally efficient Bayesian methodology for inferring species trees and demographics from unlinked binary markers. Likelihood calculations are carried out using diffusion models of allele frequency dynamics combined with novel numerical algorithms. The diffusion approach allows for analysis of data sets containing hundreds or thousands of individuals. The method, which we call Snapper, has been implemented as part of the BEAST2 package. We conducted simulation experiments to assess numerical error, computational requirements, and accuracy recovering known model parameters. A reanalysis of soybean SNP data demonstrates that the models implemented in Snapp and Snapper can be difficult to distinguish in practice, a characteristic which we tested with further simulations. We demonstrate the scale of analysis possible using a SNP data set sampled from 399 fresh water turtles in 41 populations. [Bayesian inference; diffusion models; multi-species coalescent; SNP data; species trees; spectral methods.]


2013 ◽  
Vol 2013 ◽  
pp. 1-13 ◽  
Author(s):  
Helena Mouriño ◽  
Maria Isabel Barão

Missing-data problems are extremely common in practice, and to achieve reliable inferential results we need to take this feature of the data into account. Suppose that the univariate dataset under analysis has missing observations. This paper examines the impact of selecting an auxiliary complete dataset, whose underlying stochastic process is to some extent interdependent with the former, to improve the efficiency of the estimators of the relevant model parameters. The vector autoregressive (VAR) model has proven to be an extremely useful tool for capturing the dynamics of bivariate time series. We propose maximum likelihood estimators for the parameters of the VAR(1) model based on a monotone missing-data pattern, and the estimators’ precision is also derived. Afterwards, we compare the bivariate modelling scheme with its univariate counterpart: the univariate dataset with missing observations is modelled by an autoregressive moving average (ARMA(2,1)) model, and we also analyse the behaviour of the first-order autoregressive model, AR(1), due to its practical importance. We focus on the mean value of the main stochastic process. Through simulation studies, we conclude that the estimator based on the VAR(1) model is preferable to those derived in the univariate context.
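The gain from the auxiliary series can be sketched with a toy VAR(1) experiment: one component loses its final block of observations (a monotone pattern), and the complete companion series helps recover the mean. A simple regression fill-in stands in for the paper's maximum likelihood estimators; all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
A = np.array([[0.5, 0.3], [0.1, 0.6]])
mu = np.array([1.0, 2.0])                  # true process mean
n = 500
z = np.tile(mu, (n, 1))
for t in range(n - 1):
    z[t + 1] = mu + A @ (z[t] - mu) + 0.5 * rng.standard_normal(2)

y = z[:, 0].copy()
y[400:] = np.nan                           # monotone missing pattern in y
x = z[:, 1]                                # complete auxiliary series

# Regress y on x over the complete stretch, then use x to fill the missing
# block before estimating the mean of y.
obs = ~np.isnan(y)
slope, intercept = np.polyfit(x[obs], y[obs], 1)
y_filled = np.where(obs, y, slope * x + intercept)
print(f"truth: {mu[0]:.2f}  complete cases: {np.nanmean(y):.3f}  "
      f"with auxiliary: {y_filled.mean():.3f}")
```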


Author(s):  
R. Chander ◽  
M. Meyyappa ◽  
S. Hanagud

Abstract A frequency-domain identification technique applicable to damped, distributed structural dynamic systems is presented. The technique is developed for beams whose behavior can be modeled using Euler-Bernoulli beam theory, with external damping included through a linear viscous damping model. The parameters to be identified, namely the mass, stiffness, and damping distributions, are assumed to be continuous functions over the beam. The response at a discrete number of points along the length of the beam for a given forcing function is used as the data for identification. The scheme approximates the infinite-dimensional response and parameter spaces using quintic B-splines and cubic cardinal splines, respectively, and a Galerkin-type weighted-residual procedure, in conjunction with the least squares technique, is employed to determine the unknown parameters. Numerically simulated response data for an applied impulse load are used to validate the technique, and the estimated mass, stiffness, and damping distributions are discussed.
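The spline-plus-least-squares idea can be sketched on a deliberately simplified problem: a static rod with an unknown stiffness distribution k(x), expanded in cubic B-splines and recovered by linear least squares from a noiseless "measured" deflection. Collocation of the residual stands in for the paper's Galerkin projections, and a static rod for the dynamic Euler-Bernoulli beam; the basis sizes, fields, and names are illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import BSpline

x = np.linspace(0.0, 1.0, 200)
k_true = 1.0 + 0.5 * np.sin(np.pi * x)        # stiffness field to recover
u = np.sin(np.pi * x)                         # assumed measured deflection

d = lambda f: np.gradient(f, x)               # finite-difference derivative
f_load = -d(k_true * d(u))                    # forcing consistent with truth

# Cubic B-spline basis for the unknown k(x) (clamped knot vector)
knots = np.concatenate([[0, 0, 0], np.linspace(0, 1, 8), [1, 1, 1]])
n_basis = len(knots) - 4
Phi = np.column_stack([BSpline(knots, np.eye(n_basis)[j], 3)(x)
                       for j in range(n_basis)])

# The residual r(x) = d/dx(k(x) u'(x)) + f(x) is linear in the spline
# coefficients c, so collocation at the grid points yields a linear
# least-squares problem M c = -f.
M = np.column_stack([d(Phi[:, j] * d(u)) for j in range(n_basis)])
c, *_ = np.linalg.lstsq(M, -f_load, rcond=None)
print("max |k_est - k_true| =", np.abs(Phi @ c - k_true).max())
```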


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Minh Thanh Vo ◽  
Anh H. Vo ◽  
Tuong Le

Purpose: Medical images are increasingly common; the analysis of these images with deep learning to help diagnose diseases has therefore become more and more essential. Recently, the shoulder implant X-ray image classification (SIXIC) dataset, which includes X-ray images of implanted shoulder prostheses produced by four manufacturers, was released. Detecting the implant's model helps select the correct equipment and procedures for the upcoming surgery.

Design/methodology/approach: This study proposes a robust model named X-Net to improve the predictability of shoulder implant X-ray image classification on the SIXIC dataset. X-Net integrates Squeeze-and-Excitation (SE) blocks into Residual Network (ResNet) modules; the SE module weighs each feature map extracted by ResNet, which aids in improving performance. Feature extraction is thus performed by both the ResNet and SE modules, and the final feature combines the features extracted in these steps, capturing more of the important characteristics of the input X-ray images. X-Net then uses this fine-grained feature to classify the input images into the four classes of the SIXIC dataset (Cofield, Depuy, Zimmer and Tornier).

Findings: Experiments are conducted to show the effectiveness of the proposed approach compared with other state-of-the-art methods for SIXIC. The experimental results indicate that the approach outperforms the compared methods and establishes new state-of-the-art results on all performance metrics for the experimental dataset, including accuracy, precision, recall, F1-score and area under the curve (AUC).

Originality/value: The proposed method, with its high predictive performance, can be used to assist in the treatment of injured shoulder joints.
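For concreteness, here is a minimal Squeeze-and-Excitation block of the kind X-Net integrates into ResNet modules, written in PyTorch. The channel count and reduction ratio are illustrative defaults, not details taken from the paper.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)       # squeeze: global spatial context
        self.fc = nn.Sequential(                  # excitation: per-channel weights
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                              # reweight each feature map

# Example: reweight a batch of ResNet feature maps
feats = torch.randn(8, 256, 14, 14)
print(SEBlock(256)(feats).shape)                  # torch.Size([8, 256, 14, 14])
```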

