Proposing an Uncertainty Management Framework to Implement the Evidence Theory for Vehicle Crash Applications

Author(s):  
Jonas Siegfried Jehle ◽  
Volker Andreas Lange ◽  
Matthias Gerdts

Abstract The purpose of this work is to enable the use of the Dempster-Shafer evidence theory for uncertainty propagation on computationally expensive automotive crash simulations. This is necessary because the results of these simulations are influenced by multiple, possibly uncertain aspects. To avoid negative effects, it is important to detect these factors and their consequences. The challenge when pursuing this effort is the prohibitively high computational cost of the evidence theory. To this end, we present a framework of existing methods that is specifically designed to reduce the necessary number of full model evaluations and parameters. An initial screening removes clearly irrelevant parameters to mitigate the curse of dimensionality. Next, we approximate the full-scale simulation using metamodels to accelerate output generation and thus enable the calculation of global sensitivity indices. These indicate the effects of the parameters on the considered output and sort out irrelevant parameters more thoroughly. After these steps, the evidence theory can be applied rapidly and feasibly thanks to the fast-responding metamodel and the reduced input dimension. It yields bounds for the cumulative distribution function of the considered quantity of interest. We apply the proposed framework to a simplified crash test dummy model. The elementary effects method is used for screening, a kriging metamodel emulates the finite element simulation, and Sobol' sensitivity indices are determined before the evidence theory is applied. The outcomes of the framework provide engineers with information about the uncertainties they may face in hardware testing and that should be addressed in future vehicle design.
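
As an illustration of the final propagation step described above, the following Python sketch (not the authors' code; the surrogate function, focal elements, and masses are invented for illustration) shows how interval-valued focal elements pushed through a fast metamodel yield belief and plausibility bounds on the cumulative distribution function of an output. The per-box output interval is approximated here by simple sampling rather than rigorous optimization.

```python
import numpy as np

# Hypothetical stand-in for the kriging metamodel of the crash response.
def surrogate(x):
    return x[0] ** 2 + 0.5 * x[1]

# Focal elements: (interval box per input dimension, basic probability assignment).
focal_elements = [
    (np.array([[0.0, 0.5], [0.0, 1.0]]), 0.4),
    (np.array([[0.3, 1.0], [0.5, 1.5]]), 0.6),
]

def cdf_bounds(y_grid, n_samples=2000, seed=0):
    """Belief/plausibility bounds on P(Y <= y) via interval propagation."""
    rng = np.random.default_rng(seed)
    belief = np.zeros_like(y_grid, dtype=float)
    plausibility = np.zeros_like(y_grid, dtype=float)
    for box, mass in focal_elements:
        # Sample the box and take min/max of the surrogate as an approximate output interval.
        u = rng.uniform(box[:, 0], box[:, 1], size=(n_samples, box.shape[0]))
        y = np.array([surrogate(xi) for xi in u])
        belief += mass * (y.max() <= y_grid)        # whole focal element maps below y
        plausibility += mass * (y.min() <= y_grid)  # focal element can reach below y
    return belief, plausibility

bel, pl = cdf_bounds(np.linspace(0.0, 2.0, 21))
print(bel[-1], pl[-1])   # both reach 1 at the upper end of the grid
```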

2016 ◽  
Vol 2016 ◽  
pp. 1-5 ◽  
Author(s):  
Chaoyang Xie ◽  
Guijie Li

Quantification of Margins and Uncertainties (QMU) is a decision-support methodology for complex technical decisions centering on performance thresholds and associated margins for engineering systems. Uncertainty propagation is a key element of the QMU process for structural reliability analysis in the presence of both aleatory and epistemic uncertainty. To reduce the computational cost of the Monte Carlo method, this paper proposes a mixed uncertainty propagation approach that integrates a Kriging surrogate model within the framework of evidence theory for QMU analysis. The approach is demonstrated on a numerical example to show the effectiveness of the mixed uncertainty propagation method.
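
A minimal sketch of the mixed propagation idea, assuming a toy performance function in place of the trained Kriging surrogate and made-up focal elements and masses: aleatory variability is sampled by Monte Carlo inside each epistemic focal element, and the basic probability masses turn the extreme failure probabilities into belief/plausibility bounds.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for a trained Kriging surrogate of the performance function.
def surrogate(x_aleatory, theta_epistemic):
    return 3.0 - x_aleatory + 0.5 * theta_epistemic

# Epistemic parameter described by interval focal elements with BPA masses (made-up numbers).
focal_elements = [((-1.0, 0.0), 0.3), ((0.0, 1.0), 0.7)]
threshold = 0.0            # failure when the performance drops below this margin
n_mc = 10_000              # inner (aleatory) Monte Carlo sample size

belief_pf, plausibility_pf = 0.0, 0.0
for (lo, hi), mass in focal_elements:
    x = rng.normal(loc=2.5, scale=0.5, size=n_mc)        # aleatory input variable
    # Sweep the epistemic interval to bracket the failure probability on this focal element.
    pf = [np.mean(surrogate(x, th) < threshold) for th in np.linspace(lo, hi, 11)]
    belief_pf += mass * min(pf)          # lower-bound contribution
    plausibility_pf += mass * max(pf)    # upper-bound contribution

print(f"Failure probability bounds: [{belief_pf:.4f}, {plausibility_pf:.4f}]")
```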


SIMULATION ◽  
2002 ◽  
Vol 78 (10) ◽  
pp. 587-599 ◽  
Author(s):  
Ali O. Atahan

Computer simulation of vehicle collisions has improved significantly over the past decade. With advances in computer technology, nonlinear finite element codes, and material models, full-scale simulation of such complex dynamic interactions is becoming ever more possible. In this study, an explicit three-dimensional nonlinear finite element code, LS-DYNA, is used to demonstrate the capabilities of computer simulations to supplement full-scale crash testing. After a failed crash test on a strong-post guardrail system, LS-DYNA is used to simulate the system, determine the potential problems with the design, and develop an improved system that has the potential to satisfy current crash test requirements. After accurately simulating the response behavior of the full-scale crash test, a second simulation study is performed on the system with improved details. Simulation results indicate that the system performs much better compared to the original design.


Author(s):  
Alessandra Cuneo ◽  
Alberto Traverso ◽  
Shahrokh Shahpar

In engineering design, uncertainty is inevitable and can cause a significant deviation in the performance of a system. Uncertainty in input parameters can be categorized into two groups: aleatory and epistemic uncertainty. The work presented here is focused on aleatory uncertainty, which causes natural, unpredictable and uncontrollable variations in the performance of the system under study. Such uncertainty can be quantified using statistical methods, but the main obstacle is often the computational cost, because the representative model is typically highly non-linear and complex. Therefore, it is necessary to have a robust tool that can perform the uncertainty propagation with as few evaluations as possible. In the last few years, different methodologies for uncertainty propagation and quantification have been proposed. The focus of this study is to evaluate four different methods to demonstrate the strengths and weaknesses of each approach. The first method considered is Monte Carlo simulation, a sampling method that can give high accuracy but needs a relatively large computational effort. The second method is Polynomial Chaos, an approximation method where the probabilistic parameters of the response function are modelled with orthogonal polynomials. The third method considered is the Mid-range Approximation Method. This approach is based on the assembly of multiple meta-models into one model to perform optimization under uncertainty. The fourth approach applies the first two methods not to the model directly but to a response surface representing the simulation, in order to decrease the computational cost. All these methods have been applied to a set of analytical test functions and engineering test cases. Relevant aspects of engineering design and analysis, such as a high number of stochastic variables and optimized design problems with and without stochastic design parameters, were assessed. Polynomial Chaos emerges as the most promising methodology and was then applied to a turbomachinery test case based on a thermal analysis of a high-pressure turbine disk.
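
For illustration, here is a minimal Python comparison of the first two approaches on a toy one-dimensional problem: brute-force Monte Carlo versus a non-intrusive polynomial chaos expansion fitted by least squares. The test function, polynomial degree, and sample sizes are invented for this sketch and are not taken from the study.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermevander

rng = np.random.default_rng(0)

# Illustrative test function of a single standard-normal input (not from the paper).
def model(xi):
    return np.exp(0.3 * xi) + 0.1 * xi ** 2

# Monte Carlo reference: accurate, but needs many model evaluations.
xi_mc = rng.standard_normal(100_000)
y_mc = model(xi_mc)
print("Monte Carlo      mean/var:", y_mc.mean(), y_mc.var())

# Non-intrusive polynomial chaos: least-squares fit of probabilists' Hermite coefficients.
degree, n_train = 5, 50                       # far fewer model evaluations
xi_tr = rng.standard_normal(n_train)
A = hermevander(xi_tr, degree)                # basis He_0 .. He_5 evaluated at the samples
coeffs, *_ = np.linalg.lstsq(A, model(xi_tr), rcond=None)

# For a Hermite expansion with standard-normal input: mean = c_0, var = sum_k k! * c_k^2.
factorials = np.array([math.factorial(k) for k in range(degree + 1)], dtype=float)
print("Polynomial chaos mean/var:", coeffs[0], np.sum(factorials[1:] * coeffs[1:] ** 2))
```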


2019 ◽  
Vol 141 (6) ◽  
Author(s):  
M. Giselle Fernández-Godino ◽  
S. Balachandar ◽  
Raphael T. Haftka

When simulations are expensive and multiple realizations are necessary, as is the case in uncertainty propagation, statistical inference, and optimization, surrogate models can achieve accurate predictions at low computational cost. In this paper, we explore options for improving the accuracy of a surrogate if the modeled phenomenon presents symmetries. These symmetries allow us to obtain free information and, therefore, the possibility of more accurate predictions. We present an analytical example along with a physical example that has parametric symmetries. Although imposing parametric symmetries in surrogate models seems to be a trivial matter, there is not a single way to do it and, furthermore, the achieved accuracy might vary. We present four different ways of using symmetry in surrogate models. Three of them are straightforward, but the fourth is original and based on an optimization of the subset of points used. The performance of the options was compared with 100 random designs of experiments (DoEs) where symmetries were not imposed. We found that each of the options to include symmetries performed best in one or more of the studied cases and, in all cases, the errors obtained by imposing symmetries were substantially smaller than the worst cases among the 100. We explore the options for using symmetries in two surrogates that present different challenges and opportunities: Kriging and linear regression. Kriging is often used as a black box; therefore, we consider approaches that include the symmetries without changes in the main code. On the other hand, since linear regression is often built by the user owing to its simplicity, we also consider approaches that modify the linear regression basis functions to impose the symmetries.
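
To make two of the simpler routes concrete, the sketch below (illustrative only; the toy function, design sizes, and basis choices are assumptions, not the paper's cases) shows symmetry imposed by augmenting the DoE with symmetric copies of each point, usable with a black-box Kriging code, and by building basis functions for linear regression that are symmetric by construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy function with a parametric symmetry f(x1, x2) = f(x2, x1) (illustrative only).
def f(x1, x2):
    return np.sin(x1 + x2) + (x1 * x2) ** 2

X = rng.uniform(-1, 1, size=(20, 2))
y = f(X[:, 0], X[:, 1])

# Option A (black-box style): augment the DoE with the symmetric image of each point;
# a Kriging code would then be trained on (X_aug, y_aug) without any internal changes.
X_aug = np.vstack([X, X[:, ::-1]])
y_aug = np.concatenate([y, y])

# Option B (linear regression): use basis functions that are themselves symmetric,
# here built from the elementary symmetric polynomials s1 = x1 + x2 and s2 = x1 * x2.
def symmetric_basis(X):
    s1, s2 = X[:, 0] + X[:, 1], X[:, 0] * X[:, 1]
    return np.column_stack([np.ones_like(s1), s1, s2, s1 ** 2, s2 ** 2, s1 * s2])

beta, *_ = np.linalg.lstsq(symmetric_basis(X), y, rcond=None)
X_test = rng.uniform(-1, 1, size=(5, 2))
pred = symmetric_basis(X_test) @ beta
pred_swapped = symmetric_basis(X_test[:, ::-1]) @ beta   # identical by construction
assert np.allclose(pred, pred_swapped)
```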


2021 ◽  
Author(s):  
Janis Heuel ◽  
Wolfgang Friederich

Over the last years, installations of wind turbines (WTs) have increased worldwide. Owing to negative effects on humans, WTs are often installed in areas with low population density. Because of the low anthropogenic noise, these areas are also well suited as sites for seismological stations. As a consequence, WTs are often installed in the same areas as seismological stations. By comparing the noise in recorded data before and after the installation of WTs, seismologists noticed a substantial worsening of station quality, leading to conflicts between the operators of WTs and earthquake services.

In this study, we compare different techniques to reduce or eliminate the disturbing signal from WTs at seismological stations. For this purpose, we selected a seismological station that shows a significant correlation between the power spectral density and the hourly wind-speed measurements. Usually, spectral filtering is used to suppress noise in seismic data processing. However, this approach is not effective when noise and signal have overlapping frequency bands, which is the case for WT noise. As a first method, we applied the continuous wavelet transform (CWT) to our data to obtain a time-scale representation. From this representation, we estimated a noise threshold function (Langston & Mousavi, 2019) either from noise before the theoretical P-arrival (pre-noise) or using a noise signal from the past with similar ground velocity conditions at the surrounding WTs. To this end, we installed low-cost seismometers at the surrounding WTs to find similar signals at each WT. From these similar signals, we obtain a noise model at the seismological station, which is used to estimate the threshold function. As a second method, we used a denoising autoencoder (DAE) that learns mapping functions to distinguish between noise and signal (Zhu et al., 2019).

In our tests, the threshold function performs well when the event is visible in the raw or spectrally filtered data, but it fails when WT noise dominates and the event is hidden. In these cases, the DAE removes the WT noise from the data. However, the DAE must be trained with typical noise samples and high signal-to-noise ratio events to distinguish between signal and interfering noise. The threshold function with pre-noise can be applied immediately to real-time data and has a low computational cost. Using a noise model from our prerecorded database at the seismological station does not improve the result, and it is more time-consuming to find similar ground velocity conditions at the surrounding WTs.
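
A minimal sketch of the pre-noise thresholding idea follows. It is illustrative only: the study uses the CWT, whereas this sketch uses a short-time Fourier transform for brevity, and the synthetic signal, window lengths, and threshold factor are assumptions rather than the authors' settings.

```python
import numpy as np
from scipy.signal import stft, istft

rng = np.random.default_rng(0)
fs = 100.0                                    # sampling rate [Hz], illustrative
t = np.arange(0, 60, 1 / fs)
noise = 0.5 * np.sin(2 * np.pi * 3.0 * t) + 0.2 * rng.standard_normal(t.size)
event = np.exp(-((t - 40) ** 2) / 0.5) * np.sin(2 * np.pi * 8.0 * t)  # "event" at t = 40 s
data = noise + event

f, tt, Z = stft(data, fs=fs, nperseg=256)

# Estimate a per-frequency noise threshold from the pre-event window ("pre-noise"),
# then soft-threshold the time-frequency coefficients.
pre_mask = tt < 30.0
threshold = 3.0 * np.abs(Z[:, pre_mask]).std(axis=1, keepdims=True)
mag = np.abs(Z)
Z_denoised = np.where(mag > threshold, (1 - threshold / np.maximum(mag, 1e-12)) * Z, 0.0)

_, data_denoised = istft(Z_denoised, fs=fs, nperseg=256)
```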


Author(s):  
A. Javed ◽  
R. Pecnik ◽  
J. P. van Buijtenen

Compressor impellers for mass-market turbochargers are die-cast and machined with the aim of achieving high dimensional accuracy and a specific performance. However, manufacturing uncertainties result in dimensional deviations that cause incompatible operational performance and assembly errors. Process capability limitations of the manufacturer can cause an increase in part rejections, resulting in high production cost. This paper presents a study on a centrifugal impeller, with a focus on the conceptual design phase, to obtain a turbomachine that is robust to manufacturing uncertainties. The impeller has been parameterized and evaluated using a commercial computational fluid dynamics (CFD) solver. Considering the computational cost of CFD, a surrogate model has been prepared for the impeller by response surface methodology (RSM) using space-filling Latin hypercube designs. A sensitivity analysis has been performed initially to identify the critical geometric parameters that mainly influence the performance. The sensitivity analysis is followed by uncertainty propagation and quantification using surrogate-model-based Monte Carlo simulation. Finally, a robust design optimization has been carried out using a stochastic optimization algorithm, leading to an impeller design whose performance is relatively insensitive to variability in geometry without reducing the sources of inherent variation, i.e., the manufacturing noise.
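
A compact Python sketch of the surrogate-based propagation chain described above: a space-filling Latin hypercube design, a simple polynomial response surface standing in for the RSM surrogate of the CFD model, and surrogate-based Monte Carlo over manufacturing scatter. The performance function, parameter names, and tolerances are hypothetical.

```python
import numpy as np
from scipy.stats import qmc

rng = np.random.default_rng(0)

# Hypothetical stand-in for the CFD-based impeller performance (an efficiency-like scalar).
def performance(X):
    blade_angle, tip_clearance, blade_thickness = X.T
    return 0.85 - 0.02 * (blade_angle - 0.5) ** 2 - 0.05 * tip_clearance + 0.01 * blade_thickness

# Space-filling Latin hypercube design for the surrogate training points (normalized factors).
sampler = qmc.LatinHypercube(d=3, seed=0)
X_train = sampler.random(n=40)
y_train = performance(X_train)                  # in practice: expensive CFD runs

# A simple quadratic response surface as the surrogate.
def basis(X):
    return np.column_stack([np.ones(len(X)), X, X ** 2])

coef, *_ = np.linalg.lstsq(basis(X_train), y_train, rcond=None)

# Surrogate-based Monte Carlo of manufacturing scatter around a nominal geometry.
nominal = np.array([0.5, 0.3, 0.5])
X_mc = rng.normal(nominal, [0.05, 0.02, 0.03], size=(50_000, 3))
y_mc = basis(np.clip(X_mc, 0.0, 1.0)) @ coef
print(f"mean = {y_mc.mean():.4f}, std = {y_mc.std():.4f}")
```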


2019 ◽  
Vol 285 ◽  
pp. 00022 ◽
Author(s):  
Krzysztof Wilde ◽  
Arkadiusz Tilsen ◽  
Stanisław Burzyński ◽  
Wojciech Witkowski

The article describes a comparison of two general methods for occupant safety estimation based on a numerical example. The so-called direct method is based mainly on the HIC (Head Injury Criterion) of a crash test dummy in a vehicle with a passive safety system, while the indirect method uses a European standard approach to estimate the impact severity level.
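
For readers unfamiliar with the direct method, the HIC referenced above is computed from a resultant head acceleration trace (in g) as the maximum of (t2 − t1)·[average acceleration over t1..t2]^2.5 over all windows up to 36 ms. A minimal Python sketch follows; the synthetic pulse is illustrative, not crash test data.

```python
import numpy as np

def hic(time_s, accel_g, max_window_s=0.036):
    """Head Injury Criterion (HIC36) from a resultant head acceleration trace in g."""
    # Cumulative trapezoidal integral so any window integral is a simple difference.
    cum = np.concatenate(
        [[0.0], np.cumsum(0.5 * (accel_g[1:] + accel_g[:-1]) * np.diff(time_s))]
    )
    best = 0.0
    for i in range(len(time_s) - 1):
        for j in range(i + 1, len(time_s)):
            dt = time_s[j] - time_s[i]
            if dt > max_window_s:
                break
            avg = (cum[j] - cum[i]) / dt           # mean acceleration over the window [g]
            if avg > 0.0:
                best = max(best, dt * avg ** 2.5)
    return best

# Illustrative synthetic pulse (not real crash data): a 60 g half-sine lasting 20 ms.
t = np.linspace(0.0, 0.1, 1001)
a = np.where((t >= 0.02) & (t <= 0.04), 60.0 * np.sin(np.pi * (t - 0.02) / 0.02), 0.0)
print(f"HIC36 = {hic(t, a):.1f}")
```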


2015 ◽  
Vol 19 (7) ◽  
pp. 3273-3286 ◽  
Author(s):  
C. Lavaysse ◽  
J. Vogt ◽  
F. Pappenberger

Abstract. Timely forecasts of the onset or possible evolution of droughts are an important contribution to mitigating their manifold negative effects. In this paper we therefore analyse and compare the performance of the first month of the probabilistic extended range forecast and of the seasonal forecast from the European Centre for Medium-Range Weather Forecasts (ECMWF) in predicting droughts over the European continent. The Standardized Precipitation Index (SPI-1) is used to quantify the onset or likely evolution of ongoing droughts for the next month. It can be shown that on average the extended range forecast has greater skill than the seasonal forecast, whilst both outperform climatology. No significant spatial or temporal patterns can be observed, but the scores are improved when focussing on large-scale droughts. In a second step we then analyse several different methods to convert the probabilistic forecasts of SPI into a Boolean drought warning. It can be demonstrated that methodologies which convert low percentiles of the forecasted SPI cumulative distribution function into warnings are superior in comparison with alternatives such as the mean or the median of the ensemble. The paper demonstrates that up to 40 % of droughts are correctly forecasted one month in advance. Nevertheless, during false alarms or misses, we did not find significant differences in the distribution of the ensemble members that would allow for a quantitative assessment of the uncertainty.
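
As a minimal illustration of the conversion step discussed above, the sketch below issues a Boolean warning when a low percentile of the forecasted SPI ensemble falls below a drought threshold. The percentile, threshold, and ensemble values are placeholders, not the paper's calibrated choices.

```python
import numpy as np

def drought_warning(spi_ensemble, percentile=25, spi_threshold=-1.0):
    """Warn when a low percentile of the forecasted SPI ensemble falls below the threshold."""
    return np.percentile(spi_ensemble, percentile, axis=-1) <= spi_threshold

# Illustrative ensemble: 51 members for 3 grid cells (made-up numbers).
rng = np.random.default_rng(0)
ens = rng.normal(loc=[[-1.4], [0.2], [-0.6]], scale=0.5, size=(3, 51))
print(drought_warning(ens))   # one Boolean warning per grid cell
```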


Author(s):  
Sedat Ozcanan ◽  
Ali Osman Atahan

For guardrail designers, it is essential to achieve a crashworthy and optimal system design. One of the most critical parameters for an optimal road restraint system is the post embedment depth, or the post-to-soil interaction. This study aims to assess the optimum post embedment depth values of three different guardrail posts embedded in soil of varying density. Posts were subjected to dynamic impact loads in the field, while a detailed finite element study was performed to construct accurate models of the post–soil interaction. It is well known that experimental tests and simulations are costly and time-consuming. Therefore, to reduce the computational cost of optimization, a radial basis function–based metamodeling methodology was employed to create surrogate models that replace the expensive three-dimensional finite element models. To establish the radial basis function model, samples were derived using a full factorial design. Afterward, radial basis function–based metamodels were generated from the derived data and the objective-function values obtained from finite element analysis. The accuracy of the metamodels was validated by k-fold cross-validation, and the metamodels were then optimized using a multi-objective genetic algorithm. After the optimum embedment depths were obtained, finite element simulations of the results were compared with full-scale crash test results. In comparison with the actual post embedment depths, the optimal post embedment depths provided significant economic advantages without compromising safety and crashworthiness. It is concluded that the optimum post embedment depths provide an economic advantage of up to 17.89%, 36.75%, and 43.09% for C, S, and H types of post, respectively, when compared to the actual post embedment depths.
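
A small Python sketch of the metamodel-validation step described above: an RBF surrogate fitted to a full factorial design and checked by k-fold cross-validation. The response function, factor names, design resolution, and fold count are assumptions for illustration, not the paper's model.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)

# Hypothetical stand-in for the FE response, e.g. peak post deflection vs (embedment depth, soil density).
def fe_response(X):
    depth, density = X.T
    return 120.0 / (1.0 + 0.8 * depth * density) + 2.0 * density

# Full factorial design over normalized factors.
levels = np.linspace(0.0, 1.0, 6)
X = np.array(np.meshgrid(levels, levels)).reshape(2, -1).T
y = fe_response(X)                                   # in practice: expensive FE runs

# k-fold cross-validation of the RBF metamodel.
k = 5
folds = np.array_split(rng.permutation(len(X)), k)
errors = []
for i in range(k):
    test = folds[i]
    train = np.concatenate([folds[j] for j in range(k) if j != i])
    rbf = RBFInterpolator(X[train], y[train], kernel="thin_plate_spline")
    errors.append(np.sqrt(np.mean((rbf(X[test]) - y[test]) ** 2)))
print(f"mean CV RMSE = {np.mean(errors):.3f}")
```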

