Sensitivity of Algorithms for Estimating the Gravity Disturbance Vector to Its Model Uncertainty

2021
Author(s):
O. A. Stepanov
D. A. Koshaev
O. M. Yashnikova
A. V. Motorin
L. P. Staroseltsev

Abstract. The work considers the results of filtering and smoothing the horizontal components of the gravity disturbance vector and focuses on the sensitivity of these results to the model parameters when the inertial-geodesic method is applied in a marine survey on a sea vessel.
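The sensitivity question can be illustrated with a toy version of the setup: one horizontal gravity disturbance component modelled as a first-order Gauss-Markov process, estimated by a Kalman filter and then a Rauch-Tung-Striebel smoother, with the assumed correlation time deliberately mismatched. This is a minimal sketch under invented parameters, not the authors' algorithm; the signal model, noise levels and correlation times below are illustrative only.

```python
import numpy as np

def kalman_rts(z, a, q, r, p0=1.0):
    """Scalar Kalman filter followed by a Rauch-Tung-Striebel smoother."""
    n = len(z)
    x_f = np.zeros(n); p_f = np.zeros(n)   # filtered state / variance
    x_p = np.zeros(n); p_p = np.zeros(n)   # one-step predictions
    x, p = 0.0, p0
    for k in range(n):
        x, p = a * x, a * a * p + q        # predict
        x_p[k], p_p[k] = x, p
        k_gain = p / (p + r)               # update with measurement z[k]
        x = x + k_gain * (z[k] - x)
        p = (1.0 - k_gain) * p
        x_f[k], p_f[k] = x, p
    x_s = x_f.copy(); p_s = p_f.copy()     # backward (smoothing) pass
    for k in range(n - 2, -1, -1):
        c = p_f[k] * a / p_p[k + 1]
        x_s[k] = x_f[k] + c * (x_s[k + 1] - x_p[k + 1])
        p_s[k] = p_f[k] + c * c * (p_s[k + 1] - p_p[k + 1])
    return x_f, p_f, x_s, p_s

rng = np.random.default_rng(0)
dt, sigma, r = 1.0, 5.0, 4.0               # mGal-scale toy numbers
true_tau = 100.0                           # true correlation time of the field
a_true = np.exp(-dt / true_tau)
q_true = sigma**2 * (1 - a_true**2)
x, truth = 0.0, []
for _ in range(500):                       # simulate a Gauss-Markov "gravity" signal
    x = a_true * x + rng.normal(0, np.sqrt(q_true))
    truth.append(x)
truth = np.array(truth)
z = truth + rng.normal(0, np.sqrt(r), size=truth.size)

for tau in (50.0, 100.0, 200.0):           # vary the assumed correlation time
    a = np.exp(-dt / tau)
    q = sigma**2 * (1 - a**2)
    xf, pf, xs, ps = kalman_rts(z, a, q, r, p0=sigma**2)
    rms_f = np.sqrt(np.mean((xf - truth)**2))
    rms_s = np.sqrt(np.mean((xs - truth)**2))
    print(f"assumed tau={tau:6.1f}  filter RMS={rms_f:.2f}  smoother RMS={rms_s:.2f}")
```

Running the loop shows how far the filter and smoother errors degrade as the assumed correlation time departs from the true one, which is exactly the kind of sensitivity the paper quantifies.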

Geophysics
1988
Vol 53 (10)
pp. 1355-1361
Author(s):
Steven J. Brzezowski
Warren G. Heller

Gradiometer system noise, sampling effects, downward continuation, and limited data extent are the important contributors to moving‐base gravity gradiometer survey error. We apply a two‐dimensional frequency‐domain approach in simulations of several sets of airborne survey conditions to assess the significance of the first two sources. A special error allocation technique is used to account for the downward continuation and limited extent effects. These two sources cannot be modeled adequately as measurement noise in a linear error estimation algorithm. For a typical characterization of the Earth’s gravity field, our modeling indicates that limited data extent generally contributes about one‐half of the total error variance associated with recovery of the gravity disturbance vector at the Earth’s surface; gradiometer system noise typically contributes about one‐third. However, sampling effects are also very important (and are controlled through the survey track spacing). A 5 km track spacing provides a reasonable tradeoff between survey cost and errors due to track spacing. Furthermore, our results indicate that a moving‐base gravity gradiometer system can recover each component of the gravity disturbance vector with an rms accuracy better than 1.0 mGal.
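Why downward continuation cannot be treated as ordinary measurement noise can be seen in a one-dimensional frequency-domain toy: the planar continuation operator is exp(±kh) in wavenumber k, so continuing data down from flight height h amplifies short-wavelength noise explosively unless the spectrum is truncated or otherwise regularized. The grid spacing, altitude and cutoff below are illustrative, not values from the paper.

```python
import numpy as np

# Toy 1-D illustration of downward-continuation noise amplification.
n, dx, h = 512, 1000.0, 5000.0            # 1 km sampling, 5 km altitude (assumed)
k = 2 * np.pi * np.fft.rfftfreq(n, d=dx)  # radial wavenumber (rad/m)

rng = np.random.default_rng(1)
noise = rng.normal(0, 1.0, n)             # white, gradiometer-style noise
spec = np.fft.rfft(noise)

# The downward-continuation operator exp(+k*h) blows up short wavelengths,
# so a stabilizing cutoff (here: a crude low-pass) is required.
down = np.exp(k * h)
cut = k * h < 5.0                         # crude regularization threshold
stabilized = np.fft.irfft(spec * down * cut, n)
print(f"max spectral gain without cutoff: {down.max():.3g}")
print(f"noise std after stabilized downward continuation: {stabilized.std():.2f}")
```

Even with the cutoff, the continued noise is far larger than the measured noise, which is why the paper allocates this error separately rather than folding it into a linear estimation algorithm.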


2005
Vol 128 (3)
pp. 626-635
Author(s):
Gregory D. Buckner
Heeju Choi
Nathan S. Gibson

Robust control techniques require a dynamic model of the plant and bounds on model uncertainty to formulate control laws with guaranteed stability. Although techniques for modeling dynamic systems and estimating model parameters are well established, very few procedures exist for estimating uncertainty bounds. In the case of H∞ control synthesis, a conservative weighting function for model uncertainty is usually chosen to ensure closed-loop stability over the entire operating space. The primary drawback of this conservative, “hard computing” approach is reduced performance. This paper demonstrates a novel “soft computing” approach to estimate bounds of model uncertainty resulting from parameter variations, unmodeled dynamics, and nondeterministic processes in dynamic plants. This approach uses confidence interval networks (CINs), radial basis function networks trained using asymmetric bilinear error cost functions, to estimate confidence intervals associated with nominal models for robust control synthesis. This research couples the “hard computing” features of H∞ control with the “soft computing” characteristics of intelligent system identification, and realizes the combined advantages of both. Simulations and experimental demonstrations conducted on an active magnetic bearing test rig confirm these capabilities.
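The asymmetric bilinear error cost used to train CINs is, in essence, the pinball loss of quantile estimation: residuals above the estimate are penalized with slope τ and residuals below with slope 1−τ, so the minimizer is the τ-quantile. The sketch below applies it to a plain sample rather than a radial basis function network, purely to show why the asymmetry produces interval bounds; the data, τ values and learning rate are arbitrary assumptions.

```python
import numpy as np

def pinball(residual, tau):
    """Asymmetric bilinear ("pinball") cost: slope tau above the
    estimate, slope (1 - tau) below it."""
    return np.where(residual >= 0, tau * residual, (tau - 1) * residual)

def fit_quantile(y, tau, lr=0.05, steps=4000):
    """Minimize the mean pinball loss over a scalar bound by subgradient descent."""
    q = float(np.mean(y))
    for _ in range(steps):
        r = y - q
        grad = -np.mean(np.where(r >= 0, tau, tau - 1.0))
        q -= lr * grad
    return q

rng = np.random.default_rng(2)
y = rng.normal(0.0, 1.0, 5000)             # stand-in "model error" samples
lo = fit_quantile(y, 0.05)                 # lower 5 % bound
hi = fit_quantile(y, 0.95)                 # upper 95 % bound
inside = np.mean((y >= lo) & (y <= hi))
print(f"interval [{lo:.2f}, {hi:.2f}] covers {inside:.1%} of samples")
```

A CIN replaces the scalar bound with a radial basis function network, so the fitted quantile varies over the operating space, but the asymmetric cost plays the same role.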


2018
Vol 18 (13)
pp. 9975-10006
Author(s):
Leighton A. Regayre
Jill S. Johnson
Masaru Yoshioka
Kirsty J. Pringle
David M. H. Sexton
...  

Abstract. Changes in aerosols cause a change in net top-of-the-atmosphere (ToA) short-wave and long-wave radiative fluxes; rapid adjustments in clouds, water vapour and temperature; and an effective radiative forcing (ERF) of the planetary energy budget. The diverse sources of model uncertainty and the computational cost of running climate models make it difficult to isolate the main causes of aerosol ERF uncertainty and to understand how observations can be used to constrain it. We explore the aerosol ERF uncertainty by using fast model emulators to generate a very large set of aerosol–climate model variants that span the model uncertainty due to 27 parameters related to atmospheric and aerosol processes. Sensitivity analysis shows that the uncertainty in the ToA flux is dominated (around 80 %) by uncertainties in the physical atmosphere model, particularly parameters that affect cloud reflectivity. However, uncertainty in the change in ToA flux caused by aerosol emissions over the industrial period (the aerosol ERF) is controlled by a combination of uncertainties in aerosol (around 60 %) and physical atmosphere (around 40 %) parameters. Four atmospheric and aerosol parameters account for around 80 % of the uncertainty in short-wave ToA flux (mostly parameters that directly scale cloud reflectivity, cloud water content or cloud droplet concentrations), and these parameters also account for around 60 % of the aerosol ERF uncertainty. The common causes of uncertainty mean that constraining the modelled planetary brightness to tightly match satellite observations changes the lower 95 % credible aerosol ERF value from −2.65 to −2.37 W m−2. This suggests the strongest forcings (below around −2.4 W m−2) are inconsistent with observations.
These results show that, even though the ToA flux is two orders of magnitude larger than the aerosol ERF, the observed flux can constrain the uncertainty in ERF because their values are connected by constrainable process parameters. The key to reducing the aerosol ERF uncertainty further will be to identify observations that can additionally constrain individual parameter ranges and/or combined parameter effects, which can be achieved through sensitivity analysis of perturbed parameter ensembles.


2011
Vol 21 (8)
pp. 1128-1153
Author(s):
Shun-Peng Zhu
Hong-Zhong Huang
Victor Ontiveros
Li-Ping He
Mohammad Modarres

Probabilistic methods have been widely used to account for uncertainty of various sources in predicting the fatigue life of components or materials. The Bayesian approach can potentially give more complete estimates by combining test data with technological knowledge available from theoretical analyses and/or previous experimental results, and it provides uncertainty quantification and the ability to update predictions based on new data, which can save time and money. The aim of the present article is to develop a probabilistic methodology for low cycle fatigue life prediction using an energy-based damage parameter with Bayes' theorem, to demonstrate the use of an efficient probabilistic method, and to quantify the model uncertainty that results from the existence of several competing deterministic models. For most high-temperature structures, more than one model has been created to represent the complicated behavior of materials at high temperature, and the uncertainty involved in selecting the best model from among all the possible ones should not be ignored. Accordingly, a black-box approach is used to quantify the model uncertainty for three damage parameters (the generalized damage parameter, Smith–Watson–Topper and plastic strain energy density) using measured differences between experimental data and model predictions under a Bayesian inference framework. The verification cases were based on experimental data in the literature for the Ni-base superalloy GH4133 tested at various temperatures. Based on the experimentally determined distributions of material properties and model parameters, the predicted distributions of fatigue life agree with the experimental results. The results show that the uncertainty bounds for life prediction using the generalized damage parameter are tighter than those of the Smith–Watson–Topper and plastic strain energy density methods based on the same available knowledge.
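The black-box treatment of model uncertainty can be reduced to its simplest skeleton: collect the differences between (log) predicted and observed lives, model them as normally distributed, and update the distribution's mean by conjugate Bayesian inference. The sketch below uses fabricated residuals, an assumed known scatter and a vague prior; it is a one-parameter illustration of the idea, not the paper's full framework.

```python
import numpy as np

rng = np.random.default_rng(4)
# Fake log10(N_pred / N_obs) residuals standing in for measured
# prediction-vs-experiment differences (values are made up).
log_err = rng.normal(0.15, 0.30, size=12)

sigma = 0.30                                 # assumed known residual scatter
mu0, tau0 = 0.0, 1.0                         # vague normal prior on the model bias
n = len(log_err)

# Conjugate normal update: posterior precision = prior + data precision.
post_prec = 1.0 / tau0**2 + n / sigma**2
post_var = 1.0 / post_prec
post_mean = post_var * (mu0 / tau0**2 + log_err.sum() / sigma**2)

# 95 % credible interval for the bias; a deterministic life prediction
# would be corrected by 10**(-bias), with these bounds propagated.
lo = post_mean - 1.96 * np.sqrt(post_var)
hi = post_mean + 1.96 * np.sqrt(post_var)
print(f"posterior bias: {post_mean:.3f}  95% CI: [{lo:.3f}, {hi:.3f}]")
```

The width of this credible interval is exactly what the paper compares across the three damage parameters: a better parameter leaves smaller, tighter residuals, hence tighter life-prediction bounds.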


Author(s):  
Alexander Matei ◽  
Stefan Ulbrich

Abstract. Dynamic processes have always been of profound interest to scientists and engineers alike. Often, the mathematical models used to describe and predict time-variant phenomena are uncertain in the sense that governing relations between model parameters, state variables and the time domain are incomplete. In this paper we adopt a recently proposed algorithm for the detection of model uncertainty and apply it to dynamic models. This algorithm combines parameter estimation, optimum experimental design and classical hypothesis testing within a probabilistic frequentist framework. The best setup of an experiment is defined by optimal sensor positions and optimal input configurations, both of which are the solution of a PDE-constrained optimization problem. The data collected by this optimized experiment then lead to variance-minimal parameter estimates. We develop efficient adjoint-based methods to solve this optimization problem with SQP-type solvers. The crucial test that a model has to pass is conducted on the claimed true values of the model parameters, which are estimated from pairwise distinct data sets. For this hypothesis test, we divide the data into k equally sized parts and follow a k-fold cross-validation procedure. We demonstrate the usefulness of our approach in simulated experiments with a vibrating linear-elastic truss.
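The k-fold hypothesis test can be sketched without any PDE machinery: estimate the same parameter independently on k disjoint folds, weight the estimates by their inverse variances, and compare their spread to a χ² distribution with k−1 degrees of freedom. The linear regression below is a stand-in for the truss model, with all sizes and noise levels invented; if the fold estimates scatter more than their variances allow, the discrepancy points to model rather than data uncertainty.

```python
import numpy as np

rng = np.random.default_rng(5)

n, k, true_slope, noise = 600, 5, 2.0, 0.5
x = rng.uniform(0, 1, n)
y = true_slope * x + rng.normal(0, noise, n)   # consistent data (no model error)

idx = rng.permutation(n).reshape(k, -1)        # k equally sized, disjoint folds
est, var = [], []
for fold in idx:
    xf, yf = x[fold], y[fold]
    s_xx = np.sum((xf - xf.mean())**2)
    b = np.sum((xf - xf.mean()) * yf) / s_xx            # per-fold LS slope
    resid = yf - yf.mean() - b * (xf - xf.mean())
    s2 = np.sum(resid**2) / (len(xf) - 2)
    est.append(b); var.append(s2 / s_xx)                # Var(b_hat) per fold
est, var = np.array(est), np.array(var)

# Homogeneity statistic: ~ chi-square with k-1 dof if every fold estimates
# the same true parameter; large values flag model uncertainty.
w = 1.0 / var
pooled = np.sum(w * est) / np.sum(w)
stat = np.sum(w * (est - pooled)**2)
print(f"fold slopes: {np.round(est, 3)}  chi2({k - 1} dof) = {stat:.2f}")
```

With consistent data the statistic stays near k−1; injecting a fold-dependent bias into `y` would push it past the χ² rejection threshold.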


Entropy
2020
Vol 22 (11)
pp. 1221
Author(s):  
Jocelyn Tapia Stefanoni

This paper extends the canonical small open-economy real-business-cycle model to include model uncertainty. Domestic households have multiplier preferences, which lead them to take robust decisions in response to possible misspecification of the model for the economy's aggregate productivity. Using perturbation methods, the paper extends the literature on real business cycle models by deriving a closed-form solution for the combined welfare effect of the two sources of uncertainty, namely risk and model uncertainty. While classical risk has an ambiguous effect on welfare, the addition of model uncertainty is unambiguously welfare-deteriorating. Hence, the overall effect of uncertainty on welfare is ambiguous, depending on consumers' preferences and model parameters. The paper provides numerical results for the welfare effects of uncertainty measured in units of consumption equivalence. At moderate (high) levels of risk aversion, the effect of risk on household welfare is positive (negative). The addition of model uncertainty, for all levels of concern about model uncertainty and most risk aversion values, turns the overall effect of uncertainty on household welfare negative. It is important to remark that the analytical decomposition and combination of the effects of the two types of uncertainty considered here, and the resulting ambiguous effect on overall welfare, have not been derived in the previous literature on small open economies.


Author(s):  
Tristan Gally ◽  
Peter Groche ◽  
Florian Hoppe ◽  
Anja Kuttich ◽  
Alexander Matei ◽  
...  

Abstract. In engineering applications almost all processes are described with the help of models. Forming machines in particular rely heavily on mathematical models for control and condition monitoring. Inaccuracies during the modeling, manufacturing and assembly of these machines induce model uncertainty, which impairs the controller's performance. In this paper we propose an approach to identify model uncertainty using parameter identification, optimal design of experiments and hypothesis testing. The experimental setup is characterized by optimal sensor positions such that specific model parameters can be determined with minimal variance. This allows for the computation of confidence regions in which the real parameters, or the parameter estimates from different test sets, have to lie. We claim that inconsistencies in the estimated parameter values, even allowing for their approximated confidence ellipsoids, cannot be explained by data uncertainty and are instead indicators of model uncertainty. The proposed method is demonstrated using a component of the 3D Servo Press, a multi-technology forming machine that combines spindles with eccentric servo drives.
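The inconsistency check can be made concrete with a two-parameter example: given estimates from two test sets together with their covariance matrices, the squared Mahalanobis distance of the difference is compared against a χ² quantile, which is the confidence-ellipsoid criterion in algebraic form. All numbers below are fabricated and deliberately chosen so the test fires; they are not values from the 3D Servo Press.

```python
import numpy as np

# Parameter estimates (e.g. identified stiffnesses) from two test sets,
# with their estimation covariances (all values hypothetical).
theta_a = np.array([1.02, 4.95])
theta_b = np.array([1.35, 5.60])
cov_a = np.array([[0.004, 0.001],
                  [0.001, 0.010]])
cov_b = np.array([[0.005, 0.000],
                  [0.000, 0.012]])

# If both estimates target the same true parameter, their difference is
# (approximately) zero-mean normal with covariance cov_a + cov_b.
d = theta_a - theta_b
m2 = d @ np.linalg.solve(cov_a + cov_b, d)   # squared Mahalanobis distance

CHI2_95_2DOF = 5.991                         # 95 % quantile, 2 degrees of freedom
model_uncertainty_detected = bool(m2 > CHI2_95_2DOF)
print(f"Mahalanobis^2 = {m2:.1f} -> model uncertainty detected: {model_uncertainty_detected}")
```

A difference this large cannot be explained by the estimation variances alone, which is exactly the situation the paper interprets as evidence of model uncertainty.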


2020
Author(s):  
Vadim Vyazmin ◽  
Yuri Bolotin

Airborne gravimetry is capable of providing Earth's gravity data of high accuracy and spatial resolution for any area of interest, in particular for hard-to-reach areas. An airborne gravimetry measuring system consists of a stable-platform or strapdown gravimeter and GNSS receivers. In traditional (scalar) airborne gravimetry, only the vertical component of the gravity disturbance vector is measured. In actively developing vector gravimetry, all three components of the gravity disturbance vector are measured.

In this research, we aim to develop new postprocessing algorithms for estimating gravity from airborne data, taking into account a priori information about the spatial behavior of the gravity field in the survey area. We propose two algorithms for solving the following two problems:

1) In scalar gravimetry: mapping gravity at the flight height using the gravity disturbances estimated along the flight lines (via low-pass or Kalman filtering), taking into account the spatial correlation of the gravity field in the survey area and statistical information on the along-line gravity estimate errors.

2) In vector gravimetry: simultaneous determination of all three components of the gravity disturbance vector from airborne measurements along the flight path.

Both algorithms use an a priori spatial gravity model based on parameterizing the disturbing potential in the survey area by three-dimensional harmonic spherical scaling functions (SSFs). The algorithm developed for Problem 1 estimates the unknown coefficients of the a priori gravity model using a least squares technique. Because the along-line gravity estimate errors on any two lines are assumed to be uncorrelated, the algorithm has a recursive (line-by-line) implementation. At the last step of the recursion, regularization is applied to handle the ill-conditioning of the least squares problem. Numerical results of processing GT-2A airborne gravimeter data are presented and discussed.

To solve Problem 2, one needs to separate the gravity horizontal component estimates from the systematic errors of the gravimeter's inertial navigation system (INS), such as attitude errors and inertial sensor biases. The standard method of gravity estimation, based on modelling gravity over time, cannot provide accurate results, and additional corrections must be applied. The developed algorithm uses a spatial gravity model based on the SSFs. The coefficients of the gravity model and the INS systematic errors are estimated simultaneously from airborne measurements along the flight path via Kalman filtering, with regularization at the last time moment. Simulation results show a significant increase in the accuracy of gravity vector estimation compared to the standard method.

This research was supported by RFBR (grant number 19-01-00179).
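In the linear-Gaussian case, the recursive line-by-line scheme with terminal regularization reduces to summing per-line normal equations and solving a single Tikhonov-regularized system at the end. The sketch below uses a generic, deliberately ill-conditioned random design in place of the spherical scaling function basis; the sizes, noise level and regularization parameter are all arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

n_coef, n_lines, pts_per_line = 20, 8, 40
coef_true = rng.normal(0, 1, n_coef)          # "true" field coefficients

ata = np.zeros((n_coef, n_coef))              # accumulated normal matrix
atb = np.zeros(n_coef)
for _ in range(n_lines):                      # recursion over survey lines
    a = rng.normal(0, 1, (pts_per_line, n_coef))
    # Make the last two basis columns nearly collinear -> ill-conditioning.
    a[:, -1] = a[:, -2] + 1e-6 * rng.normal(size=pts_per_line)
    z = a @ coef_true + rng.normal(0, 0.1, pts_per_line)
    ata += a.T @ a                            # lines are uncorrelated, so
    atb += a.T @ z                            # normal equations simply add

alpha = 1e-3                                  # regularization at the last step
coef = np.linalg.solve(ata + alpha * np.eye(n_coef), atb)
fit_rms = np.sqrt(np.mean((coef[:-2] - coef_true[:-2])**2))
print(f"RMS error of the well-determined coefficients: {fit_rms:.3f}")
```

The two collinear columns mimic the ill-conditioning that motivates regularization: individually their coefficients are not recoverable, but their sum is, and the well-conditioned coefficients are recovered accurately.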

