Investigation of Fan Blade Off Events Using a Bayesian Framework

Author(s):  
B. Profir ◽  
M. H. Eres ◽  
J. P. Scanlan ◽  
R. Bates

This paper illustrates a probabilistic method, based on Bayesian inference, for studying Fan Blade Off (FBO) events. The case study is of great interest to the engineering team responsible for the dynamic modelling of the fan: subsequent to an FBO event, the fan loses its axisymmetry, and as a result severe impacting can occur between the blades and the inner casing of the engine. The mechanical modelling (which is beyond the scope of this paper) involves studying the oscillation modes of the fan at various release speeds (defined as the speed at which an FBO event occurs) and at various amounts of damage (defined as the percentage of the blade which is released during an FBO event). However, it is virtually infeasible to perform the vibrational analysis for all combinations of release speed and damage. Consequently, the Bayesian updating which forms the foundation of the framework presented in this paper is used to identify the combinations most likely to occur after an FBO event, which are then used in the mechanical analysis. The Bayesian inference engine presented here makes use of expert judgements which are updated using in-service data (which, for the purposes of this paper, are fictitious). The resulting inputs are then passed through 1,000,000 Monte Carlo iterations (which, from a physical standpoint, represent the number of simulated FBO events) in order to identify the most common combinations of release speed and blade damage to report back to the mechanical engineering team. The aim of the project outlined in this paper is therefore to create a flexible model which changes every time new data become available, so as to reflect both the original expert judgements it was based on and the real data itself. The features of interest in the posterior distributions shown in the Results section are the peaks of the probability distributions; as outlined above, only the most likely FBO events (i.e., the peaks of the distributions) are of interest for the dynamics analysis. Even though the differences between prior and posterior distributions are not pronounced, this is due to the particular data set used for the update; using another data set, or adding to the existing one, will produce different distributions.
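
As a rough illustration of the updating-and-sampling loop described above, the sketch below discretises release speed and damage onto a grid, updates an expert-judgement prior with a few fictitious in-service observations under a Gaussian likelihood, and then draws 1,000,000 Monte Carlo events to locate the peak combination. All grids, distributions, and numbers are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative discretisation: release speed (% of max fan speed) and damage (% of blade released)
speeds = np.linspace(50, 100, 26)
damages = np.linspace(5, 100, 20)

# Expert-judgement prior over the joint grid (assumed independent Gaussian shapes)
prior_s = np.exp(-0.5 * ((speeds - 90) / 8) ** 2)
prior_d = np.exp(-0.5 * ((damages - 30) / 15) ** 2)
prior = np.outer(prior_s, prior_d)
prior /= prior.sum()

# Fictitious in-service observations: (release speed, damage) pairs
events = [(88, 25), (92, 35), (85, 30)]

# Bayesian update: multiply the prior by a Gaussian likelihood for each observed event
posterior = prior.copy()
S, D = np.meshgrid(speeds, damages, indexing="ij")
for s_obs, d_obs in events:
    like = np.exp(-0.5 * (((S - s_obs) / 5) ** 2 + ((D - d_obs) / 10) ** 2))
    posterior *= like
    posterior /= posterior.sum()

# Monte Carlo: simulate 1,000,000 FBO events from the posterior and count combinations
idx = rng.choice(posterior.size, size=1_000_000, p=posterior.ravel())
counts = np.bincount(idx, minlength=posterior.size).reshape(posterior.shape)

i, j = np.unravel_index(counts.argmax(), counts.shape)
print(f"Most likely combination: speed ~{speeds[i]:.0f}%, damage ~{damages[j]:.0f}%")
```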

2020 ◽  
Vol 70 (1) ◽  
pp. 145-161 ◽  
Author(s):  
Marnus Stoltz ◽  
Boris Baeumer ◽  
Remco Bouckaert ◽  
Colin Fox ◽  
Gordon Hiscott ◽  
...  

Abstract We describe a new and computationally efficient Bayesian methodology for inferring species trees and demographics from unlinked binary markers. Likelihood calculations are carried out using diffusion models of allele frequency dynamics combined with novel numerical algorithms. The diffusion approach allows for the analysis of data sets containing hundreds or thousands of individuals. The method, which we call Snapper, has been implemented as part of the BEAST2 package. We conducted simulation experiments to assess numerical error, computational requirements, and accuracy in recovering known model parameters. A reanalysis of soybean SNP data demonstrates that the models implemented in Snapp and Snapper can be difficult to distinguish in practice, a characteristic which we tested with further simulations. We demonstrate the scale of analysis possible using a SNP data set sampled from 399 freshwater turtles in 41 populations. [Bayesian inference; diffusion models; multi-species coalescent; SNP data; species trees; spectral methods.]
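
The diffusion idea can be illustrated with a toy contrast (not Snapper's actual spectral likelihood solver): discrete Wright-Fisher binomial resampling versus its Gaussian diffusion approximation, whose per-generation variance p(1-p)/2N is what makes large-sample likelihood calculations tractable.

```python
import numpy as np

rng = np.random.default_rng(1)

def wright_fisher(p0, n_gen, N):
    """Exact discrete Wright-Fisher allele-frequency dynamics."""
    p = p0
    for _ in range(n_gen):
        p = rng.binomial(2 * N, p) / (2 * N)
    return p

def diffusion_step(p0, n_gen, N):
    """Gaussian (diffusion) approximation: mean p, variance p(1-p)/(2N) per generation."""
    p = p0
    for _ in range(n_gen):
        p = np.clip(p + rng.normal(0.0, np.sqrt(p * (1 - p) / (2 * N))), 0.0, 1.0)
    return p

exact = [wright_fisher(0.3, 50, 500) for _ in range(5000)]
approx = [diffusion_step(0.3, 50, 500) for _ in range(5000)]
print(f"exact:     mean={np.mean(exact):.3f}  var={np.var(exact):.4f}")
print(f"diffusion: mean={np.mean(approx):.3f}  var={np.var(approx):.4f}")
```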


2018 ◽  
Vol 21 (08) ◽  
pp. 1850054 ◽  
Author(s):  
DAVID BAUDER ◽  
TARAS BODNAR ◽  
STEPAN MAZUR ◽  
YAREMA OKHRIN

In this paper, we consider the estimation of the weights of tangent portfolios from the Bayesian point of view, assuming normal conditional distributions of the logarithmic returns. For diffuse and conjugate priors on the mean vector and the covariance matrix, we derive stochastic representations for the posterior distributions of the tangent portfolio weights and of their linear combinations. Separately, we provide the mean and variance of the posterior distributions, which are of key importance for portfolio selection. The analytic results are evaluated within a simulation study, where the precision of coverage intervals is assessed.
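
A minimal simulation of the stochastic-representation idea, assuming a diffuse prior so that the posterior takes the usual normal-inverse-Wishart form; the synthetic data, the scipy-based sampler, and the Monte Carlo interval are illustrative stand-ins for the paper's closed-form results.

```python
import numpy as np
from scipy.stats import invwishart

rng = np.random.default_rng(2)

# Illustrative data: log-returns for k=3 assets over n periods (stand-in for real returns)
n, k, rf = 250, 3, 0.0001
X = rng.multivariate_normal([0.0005, 0.0004, 0.0006],
                            np.array([[1.0, 0.3, 0.2],
                                      [0.3, 1.0, 0.4],
                                      [0.2, 0.4, 1.0]]) * 1e-4, size=n)
xbar, S = X.mean(axis=0), (n - 1) * np.cov(X, rowvar=False)

# Draw from the posterior under a diffuse prior (Sigma ~ IW(n-1, S), mu | Sigma ~ N(xbar, Sigma/n)),
# then map each draw to tangent-portfolio weights
draws = []
for _ in range(10_000):
    Sigma = invwishart.rvs(df=n - 1, scale=S, random_state=rng)
    mu = rng.multivariate_normal(xbar, Sigma / n)
    z = np.linalg.solve(Sigma, mu - rf)   # direction Sigma^{-1}(mu - rf * 1)
    draws.append(z / z.sum())             # normalise so the weights sum to one
draws = np.array(draws)

print("posterior mean weights:", draws.mean(axis=0).round(3))
print("95% interval, asset 1:", np.percentile(draws[:, 0], [2.5, 97.5]).round(3))
```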


2018 ◽  
Vol 34 (3) ◽  
pp. 1247-1266 ◽  
Author(s):  
Hua Kang ◽  
Henry V. Burton ◽  
Haoxiang Miao

Post-earthquake recovery models can be used as decision support tools for pre-event planning. However, due to a lack of available data, there have been very few opportunities to validate and/or calibrate these models. This paper describes the use of building damage, permitting, and repair data from the 2014 South Napa Earthquake to evaluate a stochastic process post-earthquake recovery model. Damage data were obtained for 1,470 buildings, and permitting and repair time data were obtained for a subset (456) of those buildings. A “blind” prediction is shown to adequately capture the shape of the recovery trajectory despite overpredicting the overall pace of the recovery. Using the mean time to permit and repair time from the acquired data set significantly improves the accuracy of the recovery prediction. A generalized model is formulated by establishing statistical relationships between key time parameters and endogenous and exogenous factors that have been shown to influence the pace of recovery.
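
A toy version of such a stochastic recovery model, assuming lognormal time-to-permit and repair-time distributions (hypothetical parameters, not those calibrated from the Napa data); the recovery trajectory is the fraction of buildings restored as a function of time.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical inputs: lognormal time-to-permit and repair time per damaged building,
# loosely mirroring the two key time parameters of the recovery model
n_buildings = 456
t_permit = rng.lognormal(mean=np.log(90), sigma=0.8, size=n_buildings)   # days
t_repair = rng.lognormal(mean=np.log(180), sigma=0.6, size=n_buildings)  # days
t_done = t_permit + t_repair

# Recovery trajectory: fraction of buildings restored by each point in time
t = np.arange(0, 1500, 30)
recovery = [(t_done <= ti).mean() for ti in t]
for ti, r in zip(t[::5], recovery[::5]):
    print(f"day {ti:4d}: {r:5.1%} recovered")
```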


1993 ◽  
Vol 28 (11-12) ◽  
pp. 9-14 ◽  
Author(s):  
Troy D. Vassos

The need to optimize treatment plant performance and to meet increasingly stringent effluent criteria are two key factors affecting future development of instrumentation, control and automation (ICA) applications in the water and wastewater industry. Two case studies are presented which highlight the need for dynamic modelling and simulation software to assist operations staff in developing effective instrumentation control strategies, and to provide a training environment for the evaluation of such strategies. One of the limiting factors to date in realizing the potential benefits of ICA has been the inability to adequately interpret the large number of existing instrumentation inputs available at treatment facilities. The number of inputs can exceed the number of control loops by up to three orders of magnitude. The integration of dynamic modelling and expert system software is seen to facilitate the interpretation of real-time data, allowing both quantitative (instrumented) and qualitative (operator input) information to be integrated for process control. Improvements in sensor reliability and performance, and the development of biological monitoring sensors and control algorithms are also discussed.


2010 ◽  
Vol 14 (3) ◽  
pp. 545-556 ◽  
Author(s):  
J. Rings ◽  
J. A. Huisman ◽  
H. Vereecken

Abstract. Coupled hydrogeophysical methods infer hydrological and petrophysical parameters directly from geophysical measurements. Widely used methods do not explicitly account for uncertainty in the parameter estimates. Therefore, we apply a sequential Bayesian framework that provides updates of state, parameters, and their uncertainty whenever measurements become available. We have coupled a hydrological and an electrical resistivity tomography (ERT) forward code in a particle filtering framework. First, we analyze a synthetic data set of lysimeter infiltration monitored with ERT. In a second step, we apply the approach to field data measured during an infiltration event on a full-scale dike model. For the synthetic data, the water content distribution and the hydraulic conductivity are accurately estimated after a few time steps. For the field data, hydraulic parameters are successfully estimated from water content measurements made with spatial time domain reflectometry and ERT, and the development of their posterior distributions is shown.
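
A minimal particle filter in the same spirit, using a toy soil-moisture relaxation model in place of the coupled hydrological/ERT forward codes; states and parameters are jointly reweighted and resampled whenever a measurement arrives. The model, noise levels, and jitter are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy forward model: water content theta relaxes toward saturation at rate K (unknown parameter)
def step(theta, K, dt=1.0):
    return theta + K * (0.40 - theta) * dt

# Synthetic truth and noisy observations (stand-ins for TDR/ERT-derived water content)
K_true, theta = 0.05, 0.20
obs = []
for _ in range(30):
    theta = step(theta, K_true)
    obs.append(theta + rng.normal(0, 0.005))

# Particle filter: each particle carries a state and a parameter; both update on each measurement
n = 2000
particles_theta = rng.uniform(0.15, 0.25, n)
particles_K = rng.uniform(0.01, 0.15, n)
for y in obs:
    particles_theta = step(particles_theta, particles_K)
    w = np.exp(-0.5 * ((y - particles_theta) / 0.005) ** 2)
    w /= w.sum()
    idx = rng.choice(n, size=n, p=w)    # resample in proportion to likelihood weight
    particles_theta, particles_K = particles_theta[idx], particles_K[idx]
    # small jitter keeps parameter diversity after resampling
    particles_K = np.clip(particles_K + rng.normal(0, 0.002, n), 1e-4, None)

print(f"posterior K: {particles_K.mean():.3f} +/- {particles_K.std():.3f} (truth {K_true})")
```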


2021 ◽  
Author(s):  
Ecko Noviyanto ◽  
Deded Abdul Rohman ◽  
Theoza Nopranda ◽  
Rudini Simanjorang ◽  
Kosdar Gideon Haro ◽  
...  

Abstract This paper presents a probabilistic modeling and prediction workflow to capture the range of uncertainties, and its application in a field with many wells and a long history. A static model consisting of 19 layers and 293 wells was imported as the base model. Several reservoir properties, such as relative permeability, PVT, aquifer, and initial conditions, were analyzed to obtain the range of uncertainties. The probabilistic history matching was done using Assisted History Matching (AHM) tools and divided into experimental design and optimization phases. During experimental design, the input parameters and their ranges that are sensitive to the objective functions (e.g., oil rate/total difference) were determined using a Pareto chart based on Pearson correlation. The optimization phase carried over the most sensitive parameters and utilized a Particle Swarm Optimization (PSO) algorithm to iterate the process and find equiprobable models with minimum objective functions. After filtering the set of models created by the AHM tools by total oil production, field/well oil objective functions, the last three years' performance, and clustering using the k-means algorithm, 11 models remained. These models were then analyzed to understand the final risk and parameter uncertainties, e.g., mobile oil and sweep efficiency. Three models representing P10, P50, and P90 were picked and used as the base models for developing waterflood scenario designs. Several scenarios were run, such as a base case, a perfect pattern case, and an existing well case. The incremental oil is in the range of 1.60–2.01 MMSTB for the Base Case, 7.57–9.14 MMSTB for the Perfect Pattern Case, and 6.01–7.75 MMSTB for the Existing Well Case. This paper introduces the application of the probabilistic method for history matching and prediction. This method can propagate the uncertainty of the dynamic model onto the forecasted production profiles. In the end, this information could improve the quality of management decision-making in field development planning.
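
A sketch of the filter-then-pick-percentiles step at the end of this workflow, with synthetic misfits and forecasts standing in for the AHM ensemble; the PSO and k-means stages are omitted for brevity, and all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical ensemble: each "model" is one assisted-history-matching run,
# with a history-match misfit and a forecast of incremental oil (MMSTB)
n_models = 200
misfit = rng.gamma(shape=2.0, scale=1.0, size=n_models)     # objective-function value
incremental_oil = rng.normal(8.0, 1.5, size=n_models)       # forecast under one scenario

# Keep only models with acceptable misfit (the filtering step)
keep = misfit < np.percentile(misfit, 20)
forecasts = np.sort(incremental_oil[keep])

# Pick P10/P50/P90 representatives from the filtered forecast distribution
p10, p50, p90 = np.percentile(forecasts, [10, 50, 90])
print(f"P10={p10:.2f}  P50={p50:.2f}  P90={p90:.2f} MMSTB")
```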


Stats ◽  
2019 ◽  
Vol 2 (1) ◽  
pp. 111-120 ◽  
Author(s):  
Dewi Rahardja

We construct point and interval estimates, using a Bayesian approach, for the difference of two population proportion parameters based on two independent samples of binomial data subject to one type of misclassification. Specifically, we derive an easy-to-implement closed-form algorithm for drawing from the posterior distributions. For illustration, we apply our algorithm to a real data example. Finally, we conduct simulation studies to demonstrate the efficiency of our algorithm for Bayesian inference.
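
A stripped-down version of the posterior sampling, ignoring the misclassification layer that the paper's closed-form algorithm additionally handles: with conjugate Beta priors, each binomial sample yields a Beta posterior, and the difference of proportions is summarised by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(6)

# Illustrative two-sample binomial data (hypothetical counts)
x1, n1 = 45, 100   # successes / trials, sample 1
x2, n2 = 30, 100   # successes / trials, sample 2

# Conjugate Beta(1, 1) priors give Beta posteriors; draw and take the difference
p1 = rng.beta(1 + x1, 1 + n1 - x1, size=100_000)
p2 = rng.beta(1 + x2, 1 + n2 - x2, size=100_000)
delta = p1 - p2

print(f"posterior mean difference: {delta.mean():.3f}")
print(f"95% credible interval: {np.percentile(delta, [2.5, 97.5]).round(3)}")
```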


2019 ◽  
Vol 491 (4) ◽  
pp. 5238-5247 ◽  
Author(s):  
X Saad-Olivera ◽  
C F Martinez ◽  
A Costa de Souza ◽  
F Roig ◽  
D Nesvorný

ABSTRACT We characterize the radii and masses of the star and planets in the Kepler-59 system, as well as their orbital parameters. The stellar parameters are determined through a standard spectroscopic analysis, resulting in a mass of $1.359\pm 0.155\, \mathrm{M}_\odot$ and a radius of $1.367\pm 0.078\, \mathrm{R}_\odot$. The obtained planetary radii are $1.5\pm 0.1\, R_\oplus$ for the inner and $2.2\pm 0.1\, R_\oplus$ for the outer planet. The orbital parameters and the planetary masses are determined by the inversion of Transit Timing Variation (TTV) signals. We consider two different data sets: one provided by Holczer et al. (2016), with TTVs only for Kepler-59c, and the other provided by Rowe et al. (2015), with TTVs for both planets. The inversion method applies an algorithm of Bayesian inference (MultiNest) combined with an efficient N-body integrator (Swift). For each data set, we found two possible solutions, both having the same probability according to their corresponding Bayesian evidences. All four solutions appear to be indistinguishable within their 2-σ uncertainties. However, statistical analyses show that the solutions from the Rowe et al. (2015) data set provide a better characterization. The first solution infers masses of $5.3_{-2.1}^{+4.0}~M_{\mathrm{\oplus }}$ and $4.6_{-2.0}^{+3.6}~M_{\mathrm{\oplus }}$ for the inner and outer planet, respectively, while the second solution gives masses of $3.0^{+0.8}_{-0.8}~M_{\mathrm{\oplus }}$ and $2.6^{+0.9}_{-0.8}~M_{\mathrm{\oplus }}$. These values point to a system with an inner super-Earth and an outer mini-Neptune. A dynamical study shows that the planets have almost co-planar orbits with small eccentricities (e < 0.1), close to the 3:2 mean motion resonance. A stability analysis indicates that this configuration is stable over millions of years of evolution.
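
The raw TTV signal that such an inversion consumes is the residual of the observed transit times about a best-fit linear ephemeris; the toy below (illustrative numbers, not Kepler-59's) shows the O − C computation that precedes the MultiNest/N-body modelling.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic transit times: a linear ephemeris plus a small sinusoidal TTV signal
epochs = np.arange(40)
period, t0 = 11.87, 135.0                      # illustrative values (days)
ttv_true = 0.01 * np.sin(2 * np.pi * epochs / 9.0)
t_obs = t0 + period * epochs + ttv_true + rng.normal(0, 0.002, epochs.size)

# Fit the linear ephemeris; the residuals (O - C) are the TTV signal
# that the Bayesian/N-body inversion would then model
coef = np.polyfit(epochs, t_obs, 1)
o_minus_c = t_obs - np.polyval(coef, epochs)
print(f"TTV amplitude ~ {np.ptp(o_minus_c) * 24 * 60 / 2:.1f} minutes")
```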


Mathematics ◽  
2020 ◽  
Vol 8 (11) ◽  
pp. 1942
Author(s):  
Andrés R. Masegosa ◽  
Darío Ramos-López ◽  
Antonio Salmerón ◽  
Helge Langseth ◽  
Thomas D. Nielsen

In many modern data analysis problems, the available data are not static but instead arrive in a streaming fashion. Performing Bayesian inference on a data stream is challenging for several reasons. First, it requires continuous model updating and the ability to handle a posterior distribution conditioned on an unbounded data set. Second, the underlying data distribution may drift from one time step to another, so the classic i.i.d. (independent and identically distributed), or data exchangeability, assumption no longer holds. In this paper, we present an approximate Bayesian inference approach using variational methods that addresses these issues for conjugate exponential family models with latent variables. Our proposal makes use of a novel scheme based on hierarchical priors to explicitly model temporal changes of the model parameters. We show how this approach induces an exponential forgetting mechanism with adaptive forgetting rates. The method is able to capture the smoothness of the concept drift, ranging from no drift to abrupt drift. The proposed variational inference scheme maintains the computational efficiency of variational methods over conjugate models, which is critical in streaming settings. The approach is validated on four different domains (energy, finance, geolocation, and text) using four real-world data sets.
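
The exponential forgetting mechanism can be illustrated with the simplest conjugate case, a Beta-Bernoulli stream with a fixed discount factor; in the paper the forgetting rate is instead learned adaptively through the hierarchical priors.

```python
import numpy as np

rng = np.random.default_rng(8)

# Drifting Bernoulli stream: the success probability changes abruptly mid-stream
stream = np.concatenate([rng.binomial(1, 0.2, 500), rng.binomial(1, 0.7, 500)])

# Conjugate Beta posterior with exponential forgetting factor rho: before each update
# the old pseudo-counts are discounted toward the Beta(1, 1) prior, then the datum is added
a, b, rho = 1.0, 1.0, 0.98
estimates = []
for x in stream:
    a = rho * a + (1 - rho) * 1.0 + x
    b = rho * b + (1 - rho) * 1.0 + (1 - x)
    estimates.append(a / (a + b))

print(f"estimate at t=499: {estimates[499]:.2f} (true 0.2)")
print(f"estimate at t=999: {estimates[999]:.2f} (true 0.7)")
```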

