Efficient Possibilistic Uncertainty Analysis of a Car Crash Scenario Using a Multifidelity Approach

Author(s):  
Markus Mäck ◽  
Michael Hanss

Abstract The early design stage of mechanical structures is often characterized by unknown or only partially known boundary conditions and environmental influences. Particularly in the case of safety-relevant components, such as the crumple zone structure of a car, these uncertainties must be appropriately quantified and accounted for in the design process. For this purpose, possibility theory provides a suitable tool for modeling incomplete information and propagating uncertainty. However, the numerical propagation of uncertainty described by possibility theory comes at high computational cost: the required repeated model evaluations make the uncertainty analysis challenging to realize for complex, large-scale models. Oftentimes, simplified and idealized models are used for the uncertainty analysis to speed up the simulation at the expense of accuracy. The proposed multifidelity scheme for possibilistic uncertainty analysis instead takes advantage of the low cost of an inaccurate low-fidelity model and the accuracy of an expensive high-fidelity model. To this end, the functional dependency between the high- and low-fidelity models is exploited and captured in a possibilistic way. This yields a significant speedup of the uncertainty analysis while ensuring accuracy with only a small number of expensive high-fidelity model evaluations. The proposed approach is applied to an automotive car crash scenario to demonstrate its versatility and applicability.
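
As a rough illustration of the propagation scheme described above, the following Python sketch performs alpha-cut-based possibilistic propagation, using a handful of high-fidelity evaluations to fit a simple correction of a cheap low-fidelity model. The two toy models, the linear correction map, and the triangular fuzzy inputs are all assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of alpha-cut-based possibilistic propagation with a
# low-fidelity model corrected by a few high-fidelity evaluations.
# The models, the linear correction, and all parameter values are
# illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

def f_lo(x):   # cheap, inaccurate low-fidelity model (assumed)
    return x[0] ** 2 + 0.9 * x[1]

def f_hi(x):   # expensive high-fidelity model (assumed)
    return x[0] ** 2 + x[1] + 0.1 * np.sin(5 * x[0])

# Capture the functional dependency f_hi ~ g(f_lo) from a handful of
# high-fidelity runs; here g is a simple least-squares line.
samples = np.random.default_rng(0).uniform(-1, 1, size=(8, 2))
lo = np.array([f_lo(s) for s in samples])
hi = np.array([f_hi(s) for s in samples])
a, b = np.polyfit(lo, hi, 1)
g = lambda y: a * y + b            # corrected surrogate of f_hi

# Triangular fuzzy inputs: (left, peak, right).
fuzzy = [(-1.0, 0.0, 1.0), (0.0, 0.5, 1.0)]

for alpha in (0.0, 0.5, 1.0):
    # the alpha-cut of each input is an interval shrinking toward the peak
    bounds = [(l + alpha * (m - l), r - alpha * (r - m))
              for (l, m, r) in fuzzy]
    x0 = [0.5 * (lo_ + hi_) for lo_, hi_ in bounds]
    out_min = minimize(lambda x: g(f_lo(x)), x0, bounds=bounds).fun
    out_max = -minimize(lambda x: -g(f_lo(x)), x0, bounds=bounds).fun
    print(f"alpha={alpha:.1f}: output interval [{out_min:.3f}, {out_max:.3f}]")
```

Each alpha level requires only cheap evaluations of the corrected low-fidelity surrogate, which is where the speedup over a direct high-fidelity analysis would come from.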

2021 ◽  
Vol 1 ◽  
pp. 3229-3238
Author(s):  
Torben Beernaert ◽  
Pascal Etman ◽  
Maarten De Bock ◽  
Ivo Classen ◽  
Marco De Baar

Abstract The design of ITER, a large-scale nuclear fusion reactor, is intertwined with profound research and development efforts. Tough problems call for novel solutions, but the low maturity of those solutions can lead to unexpected problems. If designers keep solving such emergent problems in iterative design cycles, the complexity of the resulting design is bound to increase. Instead, we want to show designers the sources of emergent design problems, so they may be dealt with more effectively. We propose to model the interplay between multiple problems and solutions in a problem network. Each problem and solution is then connected to a dynamically changing engineering model, a graph of physical components. By analysing the problem network and the engineering model, we can (1) derive which problem has emerged from which solution and (2) compute the contribution of each design effort to the complexity of the evolving engineering model. The method is demonstrated for a sequence of problems and solutions that characterized the early design stage of an optical subsystem of ITER.
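
The core bookkeeping of such a problem network can be pictured with a small directed graph: solutions point to the problems that emerged from them, and each design effort records what it adds to the engineering model. The node names and the component-count complexity measure below are invented for illustration; the paper's actual model and metrics are richer.

```python
# Toy sketch of the problem-network idea: a directed graph in which a
# solution node points to the problems that emerged from it, and each
# solution records the components it adds to the engineering model.
import networkx as nx

net = nx.DiGraph()
net.add_edge("P1: stray light", "S1: add baffle")          # S1 solves P1
net.add_edge("S1: add baffle", "P2: heat load on baffle")  # P2 emerged from S1
net.add_edge("P2: heat load on baffle", "S2: cooling loop")

components_added = {"S1: add baffle": 2, "S2: cooling loop": 5}

# (1) trace each problem back to the solution it emerged from
for problem in [n for n in net if n.startswith("P")]:
    sources = [p for p in net.predecessors(problem) if p.startswith("S")]
    if sources:
        print(f"{problem} emerged from {sources}")

# (2) complexity contribution of each design effort
for sol, n_parts in components_added.items():
    print(f"{sol} adds {n_parts} components to the engineering model")
```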


2021 ◽  
Author(s):  
Oluvaseun Owojaiye

Advancement in technology has brought considerable improvement to processor design, and manufacturers now place multiple processors on a single chip. Supercomputers today consist of clusters of interconnected nodes that collaborate to solve complex and advanced computational problems. Message Passing Interface (MPI) and Open Multiprocessing (OpenMP) are the most widely used programming models for optimizing sequential codes by parallelizing them on the various multiprocessor architectures that exist today. In this thesis, we parallelize the non-slicing floorplan algorithm based on Multilevel Floorplanning/placement of large-scale modules using B*-trees (MB*-tree) with MPI and OpenMP on distributed- and shared-memory architectures, respectively. In VLSI (Very Large Scale Integration) design automation, floorplanning is an initial and vital task performed in the early design stage. Experimental results using MCNC benchmark circuits show that our parallel algorithm produces better results than the corresponding sequential algorithm; we achieve speedups of up to 4x, reducing computation time while maintaining floorplan solution quality. Comparing the two parallel versions, the OpenMP results are slightly better than the corresponding MPI results.
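
The distributed-memory pattern described here, in which each MPI rank evaluates its own share of candidate floorplans and the best cost is reduced globally, can be sketched with mpi4py as follows. The random candidate vectors and the quadratic cost are stand-ins for the MB*-tree perturbation moves and the real area/wirelength objective.

```python
# Minimal mpi4py sketch of the distributed-memory pattern: each rank
# evaluates a share of candidate floorplan perturbations and the best
# cost is reduced to rank 0. The random "floorplan" and its cost are
# placeholders, not the MB*-tree algorithm itself.
# Run with: mpiexec -n 4 python floorplan_mpi.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

def cost(candidate):
    # placeholder for the floorplan objective (area + wirelength)
    return float(np.sum(candidate ** 2))

rng = np.random.default_rng(rank)          # decorrelate the ranks
local_best = min(cost(rng.uniform(-1, 1, 10)) for _ in range(250))

# reduce the per-rank minima to the global best on rank 0
global_best = comm.reduce(local_best, op=MPI.MIN, root=0)
if rank == 0:
    print(f"best cost over {size * 250} candidates: {global_best:.4f}")
```

An OpenMP version of the same pattern would replace the rank-level decomposition with a parallel loop over candidates inside one shared-memory process.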


2016 ◽  
Vol 138 (11) ◽  
Author(s):  
Nathaniel B. Price ◽  
Nam-Ho Kim ◽  
Raphael T. Haftka ◽  
Mathieu Balesdent ◽  
Sébastien Defoort ◽  
...  

Early in the design process, there is often mixed epistemic model uncertainty and aleatory parameter uncertainty. Later in the design process, the results of high-fidelity simulations or experiments will reduce epistemic model uncertainty and may trigger a redesign process. Redesign is undesirable because it is associated with costs and delays; however, it is also an opportunity to correct a dangerous design or possibly improve design performance. In this study, we propose a margin-based design/redesign method where the design is optimized deterministically, but the margins are selected probabilistically. The final design is an epistemic random variable (i.e., it is unknown at the initial design stage) and the margins are optimized to control the epistemic uncertainty in the final design, design performance, and probability of failure. The method allows for the tradeoff between expected final design performance and probability of redesign while ensuring reliability with respect to mixed uncertainties. The method is demonstrated on a simple bar problem and then on an engine design problem. The examples are used to investigate the dilemma of whether to start with a higher margin and redesign if the test later in the design process reveals the design to be too conservative, or to start with a lower margin and redesign if the test reveals the design to be unsafe. In the examples in this study, it is found that this decision is related to the variance of the uncertainty in the high-fidelity model relative to the variance of the uncertainty in the low-fidelity model.
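
The redesign dilemma can be made concrete with a small Monte Carlo sketch on a one-dimensional bar-like example: size the design deterministically to a margin, then sample the epistemic model error to estimate how often a later high-fidelity test would trigger redesign. The distributions, limits, and nominal values below are illustrative assumptions, not those of the study.

```python
# Hedged sketch of the margin dilemma: design deterministically to a
# margin k, then simulate the later high-fidelity test by sampling the
# epistemic model error. Redesign is triggered if the tested safety
# factor falls outside acceptable limits. All values are illustrative.
import numpy as np

rng = np.random.default_rng(1)
LOAD, STRENGTH = 1.0, 2.0                    # nominal values (assumed)

def prob_redesign(margin, n=100_000):
    # epistemic error of the low-fidelity load prediction (assumed)
    model_error = rng.normal(0.0, 0.15, n)
    area = margin * LOAD / STRENGTH          # deterministic sizing
    true_stress = LOAD * (1 + model_error) / area
    safety = STRENGTH / true_stress
    # redesign if the test shows the design unsafe or too conservative
    return np.mean((safety < 1.0) | (safety > 1.5 * margin))

for k in (1.1, 1.3, 1.5):
    print(f"margin {k:.1f}: P(redesign) ~ {prob_redesign(k):.3f}")
```

Sweeping the margin in this way exposes the tradeoff the abstract describes: a higher initial margin lowers the chance of an unsafe finding but raises the chance the test reveals an overly conservative design.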


Author(s):  
Jungmok Ma ◽  
Harrison M. Kim

Product and design analytics is emerging as a promising area for the analysis of large-scale data and the reflection of the extracted knowledge in the design of optimal systems. The Continuous Preference Trend Mining (CPTM) algorithm and framework proposed in this study address some fundamental challenges in the context of product and design analytics. The first contribution is the development of a new predictive trend mining technique that captures hidden trends in customer purchase patterns from large accumulated transactional data. Unlike traditional, static data mining algorithms, the CPTM does not assume stationarity and dynamically extracts valuable knowledge about customers over time. By generating trend-embedded future data, the CPTM algorithm not only shows higher prediction accuracy in comparison with static models, but also provides essential properties that could not be achieved with a previously proposed model: avoiding overfitting, identifying performance information of the constructed model, and allowing numeric prediction. The second contribution is a predictive design methodology for the early design stage. The framework enables engineering designers to optimize product design over multiple life cycles while reflecting customer preferences and technological obsolescence using the CPTM algorithm. For illustration, the developed framework is applied to an example of tablet PC design in the leasing market, and the results show that the selection of an optimal design is achieved over multiple life cycles.
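
The trend-embedded idea can be caricatured in a few lines: weight recent transactions more heavily so that a fitted preference model follows the drift instead of assuming stationarity. The synthetic data, exponential weights, and off-the-shelf regressor below are stand-ins for the actual CPTM machinery.

```python
# Illustrative sketch of the "trend embedded" idea behind CPTM: weight
# recent transactions more heavily when learning customer preferences,
# so the fitted model extrapolates the trend. The synthetic data and
# the regressor are assumptions for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
n = 500
t = rng.integers(0, 10, n)               # transaction period (0 = oldest)
screen = rng.uniform(7, 13, n)           # tablet screen size, inches
# preference drifts toward larger screens over time (synthetic)
utility = 1.0 + 0.05 * t * (screen - 7) + rng.normal(0, 0.2, n)

weights = 0.8 ** (t.max() - t)           # recent periods weigh more
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(screen.reshape(-1, 1), utility, sample_weight=weights)

for s in (8.0, 10.0, 12.0):
    print(f"{s:.0f}-inch screen: predicted utility {model.predict([[s]])[0]:.2f}")
```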


2015 ◽  
Vol 51 ◽  
pp. 1534-1544 ◽  
Author(s):  
Ryoichiro Agata ◽  
Tsuyoshi Ichimura ◽  
Kazuro Hirahara ◽  
Mamoru Hyodo ◽  
Takane Hori ◽  
...  
