Uncertainty Quantification Analysis on Silicon Electrodeposition Process via Numerical Simulation Methods

Author(s):  
Zhuoyuan Zheng
Pingfeng Wang

Abstract: Silicon is one of the most commonly used semiconductors in industrial applications. Traditional silicon synthesis methods are often expensive and cannot meet the continuously growing demand for high-purity Si; electrodeposition is a promising and simple alternative. However, electrodeposited products often have nonuniform thicknesses due to the various sources of uncertainty inherent in the fabrication process; to improve the quality of the coated products, it is crucial to better understand the influence of these uncertainty sources. In this paper, uncertainty quantification (UQ) analysis is performed on the silicon electrodeposition process to evaluate the impact of various experimental operating parameters on the thickness variation of the coated silicon layer and to find the optimal experimental conditions. To mitigate the high experimental and computational costs, a Gaussian process (GP) surrogate model is constructed to conduct the UQ study, with finite element (FE) simulation results as training data. The GP surrogate model is found to efficiently and accurately estimate the performance of the electrodeposition for given experimental operating parameters. The results show that the electrodeposition process is sensitive to the geometric settings of the experiment, i.e., the distance and area ratio between the counter and working electrodes, whereas other conditions, such as the potential of the counter electrode and the temperature and ion concentration of the electrolyte bath, are less important. Furthermore, an optimal operating condition is proposed that minimizes the thickness variation of the coated silicon layer and enhances the reliability of the electrodeposition experiment.
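The surrogate-based UQ workflow this abstract describes can be sketched in a few lines: train a GP on a handful of "FE simulation" results, then run cheap Monte Carlo on the surrogate. The five operating parameters, their normalized ranges, and the synthetic response standing in for the FE model are illustrative assumptions, not values from the paper.

```python
# Hedged sketch: GP surrogate for UQ of an electrodeposition process.
# The response function below is a synthetic stand-in for the FE model;
# parameter names and ranges are illustrative assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)

def fe_model(x):
    """Stand-in for an expensive finite-element run.
    Columns of x: [electrode distance, area ratio, potential,
    temperature, ion concentration], all normalized to [0, 1].
    Returns a synthetic 'thickness variation' dominated by the
    two geometric inputs, mimicking the paper's sensitivity finding."""
    d, r, v, t, c = x.T
    return 0.8 * d**2 + 0.5 * (r - 1.0)**2 + 0.05 * v + 0.02 * t + 0.01 * c

# 1) Small training set of "FE simulations"
X_train = rng.uniform(0.0, 1.0, size=(40, 5))
y_train = fe_model(X_train)

# 2) Train the GP surrogate (anisotropic RBF, one length scale per input)
gp = GaussianProcessRegressor(
    kernel=ConstantKernel() * RBF(length_scale=np.ones(5)),
    normalize_y=True,
).fit(X_train, y_train)

# 3) Cheap Monte Carlo UQ on the surrogate instead of the FE model
X_mc = rng.uniform(0.0, 1.0, size=(5000, 5))
y_mc = gp.predict(X_mc)
print(f"thickness variation: mean {y_mc.mean():.3f}, std {y_mc.std():.3f}")
```

Because the GP interpolates the noiseless training runs, each Monte Carlo evaluation costs microseconds instead of an FE solve.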

Water, 2021, Vol. 13 (1), pp. 87
Author(s):  
Yongqiang Wang
Ye Liu
Xiaoyi Ma

The numerical simulation required for the optimal design of gravity dams is computationally expensive. A new optimization procedure is therefore presented in this study to reduce the computational cost of determining the optimal shape of a gravity dam. Optimization was performed using a combination of a genetic algorithm (GA) and an updated Kriging surrogate model (UKSM). First, a Kriging surrogate model (KSM) was constructed from a small sample set. Second, the "minimizing the predictor" strategy was used to add samples in the region of interest, updating the KSM in each cycle until the optimization process converged. Third, an existing gravity dam was used to demonstrate the effectiveness of the GA–UKSM, and the solution obtained with the GA–UKSM was compared with that obtained using the GA–KSM. The results revealed that the GA–UKSM required only 7.53% of the total number of numerical simulations needed by the GA–KSM to achieve similar optimization results, a significant improvement in computational efficiency. The method adopted in this study can serve as a reference for optimizing the design of gravity dams.
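The "minimizing the predictor" updating idea can be illustrated with a minimal loop: refit the Kriging model each cycle, evaluate the expensive model only at the point where the current predictor is smallest, and stop when no new region is found. The 1-D objective and the random candidate population standing in for the GA are illustrative assumptions, not the paper's dam model.

```python
# Hedged sketch of the updated-Kriging (UKSM) infill loop. A random
# candidate pool stands in for the GA population; the 1-D objective
# stands in for the dam FE simulation. Both are assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_model(x):
    # Stand-in for the expensive numerical simulation
    return (x - 0.3)**2 + 0.1 * np.sin(8 * x)

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(5, 1))        # small initial sample set
y = expensive_model(X).ravel()

for cycle in range(10):
    gp = GaussianProcessRegressor(kernel=RBF(0.2), normalize_y=True).fit(X, y)
    cand = rng.uniform(0, 1, size=(512, 1))     # "GA" population stand-in
    x_new = cand[np.argmin(gp.predict(cand))]   # minimizing-the-predictor infill
    if np.min(np.abs(X - x_new)) < 1e-3:        # converged: no new point found
        break
    X = np.vstack([X, x_new])                   # update the sample set ...
    y = np.append(y, expensive_model(x_new))    # ... with one true evaluation

x_best = X[np.argmin(y), 0]
print(f"approximate optimum near x = {x_best:.3f} after {len(X)} evaluations")
```

The key cost property mirrors the abstract: the expensive model is called once per updating cycle, not once per candidate.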


Author(s):  
Sajjad Yousefian
Gilles Bourque
Rory F. D. Monaghan

There is a need for fast and reliable emissions prediction tools in the design, development and performance analysis of gas turbine combustion systems to predict emissions such as NOx and CO. Hybrid emissions prediction tools are defined as modelling approaches that (1) use computational fluid dynamics (CFD) or component modelling methods to generate flow-field information, and (2) integrate it with detailed chemical kinetic modelling of emissions using chemical reactor network (CRN) techniques. This paper presents a review and comparison of hybrid emissions prediction tools and uncertainty quantification (UQ) methods for gas turbine combustion systems. In the first part of this study, CRN solvers are compared on the basis of selected attributes that facilitate flexible network modelling, the implementation of large chemical kinetic mechanisms, and the automatic construction of CRNs. The second part of this study deals with UQ, which is becoming an important aspect of the development and use of computational tools in gas turbine combustion chamber design and analysis; the use of UQ techniques as part of a generalized modelling approach is therefore important for developing a UQ-enabled hybrid emissions prediction tool. UQ techniques are compared on the basis of the number of evaluations, and the corresponding computational cost, required to achieve desired accuracy levels, as well as their ability to treat deterministic emissions models as black boxes that require no modification. Recommendations for the development of UQ-enabled emissions prediction tools are made.


Author(s):  
A. Javed
R. Pecnik
J. P. van Buijtenen

Compressor impellers for mass-market turbochargers are die-cast and machined with the aim of achieving high dimensional accuracy and specific performance. However, manufacturing uncertainties result in dimensional deviations that cause inconsistent operational performance and assembly errors. Process capability limitations of the manufacturer can increase part rejections, resulting in high production cost. This paper presents a study of a centrifugal impeller, focused on the conceptual design phase, to obtain a turbomachine that is robust to manufacturing uncertainties. The impeller has been parameterized and evaluated using a commercial computational fluid dynamics (CFD) solver. Considering the computational cost of CFD, a surrogate model of the impeller has been prepared by response surface methodology (RSM) using space-filling Latin hypercube designs. A sensitivity analysis was first performed to identify the critical geometric parameters that most influence performance. The sensitivity analysis is followed by uncertainty propagation and quantification using surrogate-model-based Monte Carlo simulation. Finally, a robust design optimization has been carried out using a stochastic optimization algorithm, leading to a robust impeller design whose performance is relatively insensitive to geometric variability without reducing the sources of inherent variation, i.e., the manufacturing noise.
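The design-of-experiments and propagation steps in this abstract can be sketched compactly: a Latin hypercube design samples the "CFD" model, a quadratic response surface is fitted by least squares, and manufacturing noise is propagated through the surrogate by Monte Carlo. The three geometric parameters, the tolerance spread, and the efficiency response are illustrative stand-ins, not the paper's impeller data.

```python
# Hedged sketch: space-filling Latin hypercube DoE + quadratic response
# surface (RSM) + surrogate-based Monte Carlo propagation. The 3-parameter
# "impeller efficiency" response is an illustrative assumption.
import numpy as np
from scipy.stats import qmc

def cfd_model(x):
    # Stand-in for the expensive CFD evaluation; inputs normalized to [0, 1]
    blade_angle, tip_clearance, thickness = x.T
    return 0.85 - 0.1 * (blade_angle - 0.5)**2 - 0.3 * tip_clearance + 0.05 * thickness

# 1) Space-filling Latin hypercube design of "CFD runs"
sampler = qmc.LatinHypercube(d=3, seed=0)
X = sampler.random(n=30)
y = cfd_model(X)

# 2) Quadratic response surface fitted by least squares
def features(X):
    ones = np.ones((len(X), 1))
    cross = np.column_stack([X[:, i] * X[:, j]
                             for i in range(3) for j in range(i, 3)])
    return np.hstack([ones, X, cross])            # 1 + 3 + 6 = 10 terms

beta, *_ = np.linalg.lstsq(features(X), y, rcond=None)

# 3) Monte Carlo propagation of manufacturing noise through the surrogate
rng = np.random.default_rng(0)
noise = rng.normal(0.5, 0.05, size=(20000, 3))    # assumed tolerance spread
eff = features(noise) @ beta
print(f"efficiency: mean {eff.mean():.4f}, std {eff.std():.4f}")
```

A robust-design step would then optimize the nominal geometry against the mean and spread of `eff` rather than a single deterministic value.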


2021
Author(s):  
Francesco Rizzi
Eric Parish
Patrick Blonigan
John Tencer

This talk focuses on the application of projection-based reduced-order models (pROMs) to seismic elastic shear waves. Specifically, we present a method to efficiently propagate parametric uncertainties through the system using a novel formulation of the Galerkin ROM that exploits modern many-core computing nodes.

Seismic modeling and simulation is an active field of research because of its importance in understanding the generation, propagation and effects of earthquakes as well as artificial explosions. We stress two main challenges involved: (a) physical models contain a large number of parameters (e.g., anisotropic material properties, signal forms and parametrizations); and (b) simulating these systems at global scale with high accuracy incurs a large computational cost, often requiring days or weeks on a supercomputer. Advancements in computing platforms have enabled researchers to exploit high-fidelity computational models, such as highly resolved seismic simulations, for certain types of analyses. Unfortunately, for analyses requiring many evaluations of the forward model (e.g., uncertainty quantification, engineering design), the use of high-fidelity models often remains impractical due to their high computational cost. Consequently, analysts often rely on lower-cost, lower-fidelity surrogate models for such problems.

Broadly speaking, surrogate models fall into three categories: (a) data fits, which construct an explicit mapping (e.g., using polynomials or Gaussian processes) from the system's parameters to the system response of interest; (b) lower-fidelity models, which simplify the high-fidelity model (e.g., by coarsening the mesh, employing a lower finite-element order, or neglecting physics); and (c) pROMs, which reduce the number of degrees of freedom in the high-fidelity model by projecting the full-order model onto a subspace identified from high-fidelity data. The main advantage of pROMs is that they apply the projection process directly to the equations governing the high-fidelity model, thus enabling stronger guarantees (e.g., of structure preservation or accuracy) and more accurate a posteriori error bounds.

State-of-the-art Galerkin ROM formulations express the state as a rank-1 tensor (i.e., a vector), leading to computational kernels that are memory-bandwidth bound and, therefore, ill-suited for scalable performance on modern many-core and hybrid computing nodes. In this work, we introduce a reformulation of the Galerkin ROM for linear time-invariant (LTI) dynamical systems, called rank-2 Galerkin, which converts the ROM problem from memory-bandwidth bound to compute bound, and apply it to elastic seismic shear waves in an axisymmetric domain. Specifically, we present an end-to-end demonstration of using the rank-2 Galerkin ROM in a Monte Carlo sampling study, showing that it is 970 times more efficient than the full-order model while maintaining excellent accuracy in both the mean and the statistics of the field.
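The rank-1 versus rank-2 distinction can be made concrete on a toy LTI system dx/dt = Ax: with M parameter samples, the rank-1 formulation advances each reduced state separately (M memory-bound matrix-vector products per step), while the rank-2 formulation stacks the states into an n-by-M matrix so each step is a single compute-bound matrix-matrix product. The sizes, the forward-Euler integrator, and the random operator below are illustrative assumptions, not the talk's seismic ROM.

```python
# Hedged sketch of rank-1 vs rank-2 Galerkin time stepping for an LTI
# reduced system dx/dt = A x. Both produce identical trajectories; the
# rank-2 form trades M mat-vecs per step for one mat-mat product.
import numpy as np

rng = np.random.default_rng(0)
n, M, steps, dt = 50, 8, 200, 1e-3
A = -np.eye(n) + 0.01 * rng.standard_normal((n, n))   # stable reduced operator
X0 = rng.standard_normal((n, M))                      # M sampled initial states

# Rank-1: advance each sample's state vector independently
X1 = X0.copy()
for _ in range(steps):
    for j in range(M):
        X1[:, j] = X1[:, j] + dt * (A @ X1[:, j])     # forward Euler, per sample

# Rank-2: advance all samples at once as a single GEMM per step
X2 = X0.copy()
for _ in range(steps):
    X2 = X2 + dt * (A @ X2)

assert np.allclose(X1, X2)   # same math, higher arithmetic intensity
```

The payoff grows with M: the rank-2 kernel feeds A through the memory hierarchy once per step instead of M times, which is the bandwidth-to-compute conversion the abstract describes.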


Author(s):  
Shuai Guo
Camilo F. Silva
Wolfgang Polifke

Abstract: One of the fundamental tasks in the robust thermoacoustic design of gas turbine combustors is calculating the modal instability risk, i.e., the probability that a thermoacoustic mode is unstable given various sources of uncertainty (e.g., operating or boundary conditions). To alleviate the high computational cost associated with conventional Monte Carlo simulation, surrogate modeling techniques are usually employed. Unfortunately, in practice it is not uncommon that only a small number of training samples can be afforded for surrogate model training. As a result, such an "inaccurate" model may introduce epistemic uncertainty, provoking variation in the calculated modal instability risk. In the current study, using a Gaussian process (GP) as the surrogate model, we address two questions. First, how can the variation of the modal instability risk induced by the epistemic surrogate-model uncertainty be quantified? Second, how can the variation of the risk calculation be reduced given a limited computational budget for surrogate model training? For the first question, we leverage the Bayesian character of the GP model and perform correlated sampling of the GP predictions at different inputs to quantify the uncertainty of the risk calculation; we show how this uncertainty shrinks as more training samples become available. For the second question, we adopt an active learning strategy to intelligently allocate training samples so that the trained GP model is highly accurate, particularly in the vicinity of the zero-growth-rate contour. As a result, a more accurate and robust modal instability risk calculation is obtained without increasing the computational cost of surrogate model training.
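The correlated-sampling idea in the first question can be sketched directly with a GP's posterior: draw joint samples of the growth rate at many uncertain operating points, compute the instability risk P(growth rate > 0) under each posterior draw, and read off the epistemic spread of that risk. The 1-D growth-rate model and the training budget are illustrative assumptions, not the paper's combustor.

```python
# Hedged sketch: quantifying the epistemic spread of an instability-risk
# estimate via correlated (joint) sampling of a GP posterior. The 1-D
# growth-rate function is a stand-in for an expensive thermoacoustic solve.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def growth_rate(x):
    # Stand-in: positive values mean the thermoacoustic mode is unstable
    return np.sin(3 * x) - 0.2

rng = np.random.default_rng(0)
X_train = rng.uniform(0, 2, size=(8, 1))          # only a few training solves
gp = GaussianProcessRegressor(kernel=RBF(0.5), normalize_y=True)
gp.fit(X_train, growth_rate(X_train).ravel())

X_mc = rng.uniform(0, 2, size=(400, 1))           # uncertain operating points
draws = gp.sample_y(X_mc, n_samples=200, random_state=0)  # correlated samples
risk_per_draw = (draws > 0).mean(axis=0)          # risk under each GP realization

print(f"risk = {risk_per_draw.mean():.3f} "
      f"+/- {risk_per_draw.std():.3f} (epistemic spread)")
```

Independent per-point sampling would misstate this spread; the joint draws preserve the correlation of the GP's errors across operating points, which is the point of the correlated-sampling approach. Adding training samples near the zero-growth-rate contour (the active-learning step) shrinks `risk_per_draw.std()`.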


2019, Vol. 9 (16), pp. 3343
Author(s):  
Jiajia Shi
Liu Chu
Eduardo Souza de Cursi

The utilization of modal frequency sensors is a feasible and effective way to monitor settlement of transmission tower foundations. However, the uncertainties and interference present in the real operating environment of transmission towers strongly affect the accuracy and identification capability of modal frequency sensors. To reduce this interference, a Kriging surrogate model is proposed in this study. A finite element model of typical transmission towers is created and validated to provide an effective original database for the Kriging surrogate model, and the prediction accuracy and convergence of the surrogate are measured and confirmed. Besides its merits in computational cost and efficiency, the Kriging surrogate model is shown to have a satisfactory and robust interference-reduction capacity. The Kriging surrogate model is therefore feasible and competitive for interference filtration in the settlement surveillance sensors of steel transmission towers.


Author(s):  
Zhen Hu
Sankaran Mahadevan
Xiaoping Du

Limited data on stochastic load processes and system random variables result in uncertainty in the results of time-dependent reliability analysis. An uncertainty quantification (UQ) framework is developed in this paper for time-dependent reliability analysis in the presence of data uncertainty. A Bayesian approach is employed to model the epistemic uncertainty sources in the random variables and stochastic processes. A straightforward formulation of UQ in time-dependent reliability analysis results in a double-loop implementation procedure, which is computationally expensive. This paper proposes an efficient method for the UQ of time-dependent reliability analysis by integrating the fast integration method and the surrogate model method with time-dependent reliability analysis. A surrogate model is first built for the time-instantaneous conditional reliability index as a function of the variables with imprecise parameters. For different realizations of the epistemic uncertainty, the associated time-instantaneous most probable points (MPPs) are then identified using the fast integration method based on the conditional reliability index surrogate, without evaluating the original limit-state function. With the obtained time-instantaneous MPPs, the uncertainty in the time-dependent reliability analysis is quantified. The effectiveness of the proposed method is demonstrated using a mathematical example and an engineering application.
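The double-loop structure that makes this problem expensive can be shown on a toy case: the outer loop samples the epistemic parameter (here an imprecise mean resistance), and the inner loop evaluates the time-instantaneous reliability index over the horizon. The linear-Gaussian limit state g = R - S(t) used below admits a closed-form reliability index, standing in for the MPP search; all distributions and parameters are illustrative assumptions, not the paper's examples.

```python
# Hedged sketch of the double-loop UQ structure for time-dependent
# reliability: outer loop over epistemic samples of mu_R, inner loop over
# the time grid. Closed-form beta replaces the MPP search for this toy case.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 50)                    # time grid
sigma_R, sigma_S = 0.5, 0.8
mu_S = 3.0 + 0.1 * t                              # load mean grows over time

# Outer loop: epistemic uncertainty in mu_R (e.g., a Bayesian posterior)
mu_R_samples = rng.normal(6.0, 0.3, size=500)

# Inner loop (vectorized): conditional reliability index beta(t | mu_R)
beta = (mu_R_samples[:, None] - mu_S[None, :]) / np.hypot(sigma_R, sigma_S)
pf_inst = norm.cdf(-beta)                         # instantaneous failure prob.

# Spread of the worst instantaneous failure probability across epistemic samples
pf_worst = pf_inst.max(axis=1)
print(f"P_f: mean {pf_worst.mean():.2e}, "
      f"95% upper {np.quantile(pf_worst, 0.95):.2e}")
```

When the inner quantity has no closed form, each (epistemic sample, time instant) pair costs an MPP search on the original limit state, which is exactly the cost the paper's conditional-reliability-index surrogate avoids.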

