A Sequential Approach to Numerical Simulations of Solidification with Domain and Time Decomposition

2019 ◽  
Vol 9 (10) ◽  
pp. 1972 ◽  
Author(s):  
Elzbieta Gawronska

Progress in computational methods has been stimulated by the widespread availability of cheap computational power, leading to improved precision and efficiency of simulation software. Simulation tools have become indispensable for engineers who want to attack increasingly large problems or to search a larger phase space of process and system variables to find an optimal design. In this paper, we introduce a new computational approach that involves a mixed time-stepping scheme and decreases computational cost. Implementation of our algorithm does not require a parallel computing environment. Our strategy splits the domains of a dynamically changing physical phenomenon and allows the numerical model to be adjusted to the various sub-domains. To the best of our knowledge, we are the first to show that a mixed time-partitioning method with various combinations of schemes can be used during the solidification of binary alloys. In particular, we use a fixed time step in one domain and look for much larger time steps in the other domains, while maintaining high accuracy. Our method is independent of the number of domains considered, in contrast to traditional methods in which only two domains were considered. Mixed time-partitioning methods are particularly relevant here because of the natural separation of domain types: the important physical phenomena typically occur in the casting and are computationally expensive, whereas the mold domains exhibit less dynamic processes, so a larger time step can be chosen there. Finally, we performed a series of numerical experiments and demonstrate that our approach reduces computational time by more than a factor of three without significant loss of precision and without parallel computing.
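The time-partitioning idea can be illustrated with a toy subcycling example, kept deliberately simple: explicit 1D heat conduction in two coupled sub-domains, where the "casting" region is advanced with a fine step and the "mold" region with a coarse step, exchanging interface temperatures once per coarse step. All material properties, grid sizes and the step ratio below are illustrative assumptions, not values from the paper.

```python
# Minimal sketch (not the paper's algorithm): explicit 1D heat conduction with
# two sub-domains advanced at different time steps ("subcycling"). The casting
# sub-domain uses a fine step, the mold sub-domain a coarse step; interface
# temperatures are exchanged once per coarse step. All values are placeholders.
import numpy as np

dx = 0.01                              # grid spacing [m]
alpha_cast, alpha_mold = 1e-5, 5e-7    # thermal diffusivities [m^2/s] (assumed)
dt_fine, ratio = 1.0, 10               # fine step [s] and coarse/fine step ratio
dt_coarse = ratio * dt_fine

T_cast = np.full(50, 1500.0)           # casting initially hot [deg C]
T_mold = np.full(50, 20.0)             # mold initially cold  [deg C]

def diffuse(T, alpha, dt, left_bc, right_bc):
    """One explicit Euler step of the 1D heat equation with Dirichlet BCs."""
    Tn = T.copy()
    Tn[0], Tn[-1] = left_bc, right_bc
    T[1:-1] = Tn[1:-1] + alpha * dt / dx**2 * (Tn[2:] - 2*Tn[1:-1] + Tn[:-2])
    T[0], T[-1] = left_bc, right_bc
    return T

for step in range(100):                      # 100 coarse steps = 1000 s
    T_interface_mold = T_mold[0]             # interface value seen by the casting
    T_interface_cast = T_cast[-1]            # interface value seen by the mold
    for _ in range(ratio):                   # subcycle the casting domain
        T_cast = diffuse(T_cast, alpha_cast, dt_fine, 1500.0, T_interface_mold)
    T_mold = diffuse(T_mold, alpha_mold, dt_coarse, T_interface_cast, 20.0)
```

The explicit stability limit dt <= dx^2 / (2 * alpha) is satisfied separately in each sub-domain, which is what makes the much larger mold step admissible.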

2002 ◽  
Vol 128 (3) ◽  
pp. 506-517 ◽  
Author(s):  
S. M. Camporeale ◽  
B. Fortunato ◽  
M. Mastrovito

A high-fidelity real-time simulation code based on a lumped, nonlinear representation of gas turbine components is presented. The code is a general-purpose simulation software environment useful for setting up and testing control equipment. The mathematical model and the numerical procedure are specially developed to efficiently solve the set of algebraic and ordinary differential equations that describe the dynamic behavior of gas turbine engines. For high-fidelity purposes, the mathematical model takes into account the actual composition of the working gases and the variation of the specific heats with temperature, including a stage-by-stage model of the air-cooled expansion. The paper presents the model and the adopted solution procedure. The code, developed in Matlab-Simulink using an object-oriented approach, is flexible and can be easily adapted to any kind of plant configuration. Simulation tests of the transients after load rejection have been carried out for a single-shaft heavy-duty gas turbine and a double-shaft aero-derivative industrial engine. Time plots of the main variables that describe the gas turbine dynamic behavior are shown, and the results regarding the computational time per time step are discussed.
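As a flavour of the lumped, component-based modelling described here, the following is a minimal sketch (in Python rather than the authors' Matlab-Simulink environment) of a single shaft-dynamics component responding to a load rejection; the inertia, power levels and load schedule are assumed placeholder values, not data from the paper.

```python
# Toy lumped component of the kind such simulators are built from:
# the shaft-speed ODE  I * d(omega)/dt = (P_turb - P_comp - P_load) / omega.
# All numbers are illustrative assumptions for a single-shaft machine.
import numpy as np
from scipy.integrate import solve_ivp

I_shaft = 8000.0          # polar moment of inertia [kg m^2] (assumed)
P_turb = 180e6            # turbine power [W] (assumed constant here)
P_comp = 110e6            # power absorbed by the compressor [W] (assumed)

def load_power(t):
    """Load rejection at t = 5 s: the electrical load drops to 10 %."""
    return 70e6 if t < 5.0 else 7e6

def shaft(t, y):
    omega = y[0]
    torque_net = (P_turb - P_comp - load_power(t)) / omega
    return [torque_net / I_shaft]

sol = solve_ivp(shaft, (0.0, 20.0), [2 * np.pi * 3000 / 60],  # start at 3000 rpm
                max_step=0.01, rtol=1e-6)
print("final speed [rpm]:", sol.y[0, -1] * 60 / (2 * np.pi))
```

With the chosen numbers the shaft is initially in power balance and accelerates after the load rejection; a full simulator of the kind described above would close the loop with fuel-control and turbomachinery components.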


Author(s):  
Marco Baldan ◽  
Alexander Nikanorov ◽  
Bernard Nacke

Purpose: Reliable modeling of induction hardening requires a multi-physical approach, which makes it time-consuming. In designing an induction hardening system, combining such a model with an optimization technique allows a high number of design variables to be managed. However, this can lead to a tremendous overall computational cost. This paper aims to reduce the computational time of an optimal design problem by making use of multi-fidelity modeling and parallel computing.
Design/methodology/approach: In the multi-fidelity framework, the "high-fidelity" model couples the electromagnetic, thermal and metallurgical fields. It predicts the phase transformations during both the heating and cooling stages. The "low-fidelity" model is instead limited to the heating step. Its inaccuracy is counterbalanced by its cheapness, which makes it suitable for exploring the design space during optimization. The use of co-Kriging then allows information from the different fidelity models to be merged and good design candidates to be predicted. Field evaluations of both models occur in parallel.
Findings: In the design of an induction heating system, the synergy between the "high-fidelity" and "low-fidelity" models, together with the use of surrogates and parallel computing, can reduce the overall computational cost by up to one order of magnitude.
Practical implications: On the one hand, multi-physical modeling of induction hardening implies a better understanding of the process, opening the way to further process improvements. On the other hand, the optimization technique can be applied to many other computationally intensive real-life problems.
Originality/value: This paper highlights how parallel multi-fidelity optimization can be used in designing an induction hardening system.
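A minimal sketch of the multi-fidelity idea follows. It uses a simple additive Gaussian-process correction instead of full co-Kriging, and two analytical test functions stand in for the expensive coupled model and the cheap heating-only model; none of this is the authors' induction-hardening code.

```python
# Multi-fidelity surrogate sketch: many cheap low-fidelity samples plus a GP
# correction trained on a few expensive high-fidelity samples. The two
# functions below are placeholders, not induction-hardening models.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def low_fidelity(x):          # cheap model (stand-in)
    return np.sin(8 * x)

def high_fidelity(x):         # expensive coupled model (stand-in)
    return np.sin(8 * x) + 0.3 * x**2

x_lo = np.linspace(0, 1, 25)[:, None]       # many cheap evaluations
x_hi = np.linspace(0, 1, 5)[:, None]        # few expensive evaluations

gp_lo = GaussianProcessRegressor(RBF(0.1)).fit(x_lo, low_fidelity(x_lo).ravel())
# GP on the discrepancy between fidelities, trained on the few HF points
delta = high_fidelity(x_hi).ravel() - gp_lo.predict(x_hi)
gp_delta = GaussianProcessRegressor(RBF(0.2)).fit(x_hi, delta)

def surrogate(x):
    """Multi-fidelity prediction: cheap trend plus learned correction."""
    return gp_lo.predict(x) + gp_delta.predict(x)

x_test = np.linspace(0, 1, 200)[:, None]
print("max surrogate error:",
      np.abs(surrogate(x_test) - high_fidelity(x_test).ravel()).max())
```

In an optimization loop, the surrogate (rather than the expensive model) is queried to propose design candidates, and the few high-fidelity evaluations can be run in parallel, which is where the reported order-of-magnitude saving comes from.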


Author(s):  
Andrew M. Feldick ◽  
Gopalendu Pal

Abstract The introduction of higher-fidelity spectral models into a Discrete Ordinates Method (DOM) RTE solver introduces the challenge of solving the N(N+2) coupled equations in intensity over many spectral points. The inability to store the intensity fields leads to a nonlinear increase in computational cost compared with basic gray models, because the solution in an evolving field must be recalculated at each radiation time step. In this paper, an approximate initialization approach is used to provide reconstructed values of the intensities. This approach is particularly well suited to spectrally reordered methods, since the boundary conditions and scattering coefficients are gray. The approach leads to a more tractable computational time and is demonstrated on two industrial-scale flames.
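The benefit of an approximate initialization can be illustrated on a toy problem unrelated to the actual DOM RTE solver: an iterative solve that is warm-started from the previous radiation step's (reconstructed) solution converges in far fewer iterations than a cold start, which is the mechanism behind the reduced computational time.

```python
# Toy warm-start illustration (not an RTE solver): Jacobi iteration on a
# slowly evolving diagonally dominant system, cold start vs. warm start.
import numpy as np

def jacobi(A, b, x0, tol=1e-8, max_iter=10_000):
    """Jacobi iteration; returns the solution and the iteration count."""
    D = np.diag(A)
    R = A - np.diag(D)
    x = x0.copy()
    for k in range(1, max_iter + 1):
        x_new = (b - R @ x) / D
        if np.linalg.norm(x_new - x) < tol:
            return x_new, k
        x = x_new
    return x, max_iter

n = 200
rng = np.random.default_rng(0)
A = np.eye(n) * 4.0 + rng.uniform(-1, 1, (n, n)) / n   # diagonally dominant
b_old = rng.uniform(size=n)                            # "previous time step"
b_new = b_old + 0.01 * rng.uniform(size=n)             # slowly evolving field

x_old, _ = jacobi(A, b_old, np.zeros(n))
_, iters_cold = jacobi(A, b_new, np.zeros(n))          # cold start
_, iters_warm = jacobi(A, b_new, x_old)                # reconstructed guess
print(f"cold start: {iters_cold} iterations, warm start: {iters_warm}")
```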


2004 ◽  
Vol 2004 (2) ◽  
pp. 307-314 ◽  
Author(s):  
Phailaung Phohomsiri ◽  
Firdaus E. Udwadia

A simple accelerated third-order Runge-Kutta-type integration scheme with a fixed time step that uses just two function evaluations per step is developed. Because of the lower number of function evaluations, the scheme proposed herein has a lower computational cost than the standard third-order Runge-Kutta scheme while maintaining the same order of local accuracy. Numerical examples illustrating the computational efficiency and accuracy are presented, and the actual speedup obtained when the accelerated algorithm is implemented is also reported.
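For reference, the classical third-order Runge-Kutta step below costs three function evaluations per step; the scheme proposed in the paper reaches the same order with only two (the abstract does not spell out the construction, so this sketch shows only the baseline being improved upon).

```python
# Baseline for comparison (not the authors' accelerated scheme): Kutta's
# classical third-order method, which needs three evaluations of f per step.
def rk3_step(f, t, y, h):
    """One step of Kutta's third-order method."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h, y - h * k1 + 2 * h * k2)
    return y + h / 6 * (k1 + 4 * k2 + k3)

# Example: dy/dt = -y, y(0) = 1, integrated with a fixed step.
y, t, h = 1.0, 0.0, 0.1
for _ in range(10):
    y = rk3_step(lambda t, y: -y, t, y, h)
    t += h
print(y)   # close to exp(-1)
```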


2011 ◽  
Vol 23 (7) ◽  
pp. 1704-1742 ◽  
Author(s):  
Jonathan Touboul

Bidimensional spiking models are garnering a lot of attention for their simplicity and their ability to reproduce the various spiking patterns of cortical neurons, and they are used in particular for large network simulations. These models describe the dynamics of the membrane potential by a nonlinear differential equation that blows up in finite time, coupled to a second equation for adaptation. Spikes are emitted when the membrane potential blows up or reaches a cutoff θ. Precise simulation of the spike times and of the adaptation variable is critical, for it governs the spike pattern produced, yet it is hard to achieve because of the exploding nature of the system at the spike times. We thoroughly study the precision of fixed time-step integration schemes for this type of model and demonstrate that these methods produce systematic errors that are unbounded, as the cutoff value is increased, in the evaluation of the two crucial quantities: the spike time and the value of the adaptation variable at this time. Precise evaluation of these quantities therefore requires very small time steps and long simulation times. In order to achieve a fixed absolute precision in a reasonable computational time, we propose a new algorithm to simulate these systems, based on a variable integration step that either integrates the original ordinary differential equation or the equation of the orbits in the phase plane, and we compare this algorithm with the fixed time-step Euler scheme and other more accurate simulation algorithms.
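The variable-step idea can be sketched with an off-the-shelf adaptive integrator and event detection on a quadratic adaptive integrate-and-fire model; the parameters are generic textbook values and the code is a stand-in for, not a reproduction of, the algorithm proposed in the paper.

```python
# Adaptive-step integration of a bidimensional spiking model with event
# detection of the cutoff crossing (toy sketch; parameters are assumed).
import numpy as np
from scipy.integrate import solve_ivp

a, b, cutoff, v_reset, w_jump, I = 0.02, 0.2, 30.0, -55.0, 2.0, 10.0

def rhs(t, y):
    v, w = y
    return [0.04 * v**2 + 5 * v + 140 - w + I, a * (b * v - w)]

def hit_cutoff(t, y):          # zero when the membrane potential hits theta
    return y[0] - cutoff
hit_cutoff.terminal = True
hit_cutoff.direction = 1

t, y, spikes = 0.0, [-65.0, -13.0], []
while t < 200.0:
    sol = solve_ivp(rhs, (t, 200.0), y, events=hit_cutoff,
                    rtol=1e-8, atol=1e-10)     # step size adapts automatically
    if sol.t_events[0].size == 0:
        break                                  # no further spike before t = 200
    t = sol.t[-1]                              # accurate spike time
    w_at_spike = sol.y[1, -1]                  # adaptation value at the spike
    spikes.append((t, w_at_spike))
    y = [v_reset, w_at_spike + w_jump]         # reset map after the spike
print(spikes)
```

The two quantities the paper identifies as critical, the spike time and the adaptation value at that time, are exactly what the event location returns; a fixed-step Euler scheme would instead overshoot the cutoff by an amount that grows with the cutoff value.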


2016 ◽  
Author(s):  
Kristofer Döös ◽  
Bror Jönsson ◽  
Joakim Kjellsson

Abstract. Two different trajectory schemes for oceanic and atmospheric general circulation models are compared in two different experiments. The theories of the two trajectory schemes are presented, showing the differential equations they solve and why they are mass conserving. One scheme assumes that the velocity fields are stationary for a limited period of time and solves the trajectory path from a differential equation only as a function of space, i.e. the "stepwise-stationary" scheme. The second scheme uses a continuous linear interpolation of the fields in time and solves the trajectory path from a differential equation as a function of both space and time, i.e. the "time-dependent" scheme. A special case of the "stepwise-stationary" scheme, in which velocities are assumed constant between GCM outputs, is also considered and named the "fixed GCM time step" scheme. The trajectory schemes are tested "off-line", i.e. using the already integrated and stored velocity fields from a GCM. The first comparison of the schemes uses trajectories calculated using the velocity fields from an eddy-resolving ocean general circulation model in the Agulhas region. The second comparison uses trajectories calculated using the wind fields from an atmospheric reanalysis. The study shows that using the "time-dependent" scheme over the "stepwise-stationary" scheme greatly improves accuracy with only a small increase in computational time. It is also found that with decreasing time steps the "stepwise-stationary" scheme becomes more accurate, but at increased computational cost. The "time-dependent" scheme is therefore preferred over the "stepwise-stationary" scheme. However, when averaging over large ensembles of trajectories the two schemes are comparable, as intrinsic variability dominates over numerical errors. The "fixed GCM time step" scheme is found to be less accurate than the "stepwise-stationary" scheme, even when considering averages over large ensembles.


2014 ◽  
Vol 7 (5) ◽  
pp. 1961-1977 ◽  
Author(s):  
H. Wan ◽  
P. J. Rasch ◽  
K. Zhang ◽  
Y. Qian ◽  
H. Yan ◽  
...  

Abstract. This paper explores the feasibility of an experimentation strategy for investigating sensitivities in fast components of atmospheric general circulation models. The basic idea is to replace the traditional serial-in-time long-term climate integrations by representative ensembles of shorter simulations. The key advantage of the proposed method lies in its efficiency: since fewer days of simulation are needed, the computational cost is lower, and because individual realizations are independent and can be integrated simultaneously, the new dimension of parallelism can dramatically reduce the turnaround time in benchmark tests, sensitivity studies, and model tuning exercises. The strategy is not appropriate for exploring the sensitivity of all model features, but it is very effective in many situations. Two examples are presented using the Community Atmosphere Model, version 5. In the first example, the method is used to characterize sensitivities of the simulated clouds to time-step length. Results show that 3-day ensembles of 20 to 50 members are sufficient to reproduce the main signals revealed by traditional 5-year simulations. A nudging technique is applied to an additional set of simulations to help understand the contribution of physics–dynamics interaction to the detected time-step sensitivity. In the second example, multiple empirical parameters related to cloud microphysics and aerosol life cycle are perturbed simultaneously in order to find out which parameters have the largest impact on the simulated global mean top-of-atmosphere radiation balance. It turns out that 12-member ensembles of 10-day simulations are able to reveal the same sensitivities as seen in 4-year simulations performed in a previous study. In both cases, the ensemble method reduces the total computational time by a factor of about 15, and the turnaround time by a factor of several hundred. The efficiency of the method makes it particularly useful for the development of high-resolution, costly, and complex climate models.
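A synthetic sketch of the statistics behind the strategy (no real model output is used, and all numbers are made up for illustration): paired control and perturbed short runs started from the same initial states share most of their internal variability, so the ensemble-mean difference isolates the signal and its standard error shrinks as the ensemble grows.

```python
# Synthetic illustration of the short-ensemble strategy: paired short runs
# cancel shared variability, and the standard error falls as 1/sqrt(N).
import numpy as np

rng = np.random.default_rng(1)
true_signal = 0.8      # hypothetical time-step sensitivity (arbitrary units)
noise = 2.5            # member-to-member spread from different start dates

def short_run_pairs(n_members):
    """Control/perturbed pairs started from the same initial states."""
    weather = noise * rng.standard_normal(n_members)    # shared variability
    control = weather
    perturbed = weather + true_signal + 0.3 * rng.standard_normal(n_members)
    return perturbed - control

for n in (5, 20, 50):
    diff = short_run_pairs(n)
    stderr = diff.std(ddof=1) / np.sqrt(n)
    print(f"{n:3d} members: signal = {diff.mean():5.2f} +/- {stderr:4.2f}")
```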


Author(s):  
S. M. Hosseini Zahraei ◽  
A. E. P. Veldman ◽  
P. R. Wellens ◽  
I. Akkerman ◽  
R. H. M. Huijsmans

The physical level of interaction between fluid and structure can be either one-way or two-way, depending on the direction of information exchange at the interface of fluid and solid. The former can be solved by a partitioned approach with weak coupling. In problems involving two-way fluid-structure interaction solved with a partitioned approach and strong coupling, a stability restriction is sometimes encountered. This is an artificial added-mass effect, which is independent of the numerical time step. Unfortunately, accurate and efficient methods that deal with all the different levels of interaction are scarce. Conventionally, relaxation is applied to remedy this problem. The computational cost is directly related to the number of sub-iterations between the fluid and structural solvers at each time step. In this study, the source of this instability is investigated. A discrete representation of a basic added-mass operator is given and the instability conditions are assessed. A new method is proposed to relax this restriction; the idea is essentially to remove the instability source from the structure, move it to the fluid, and solve it monolithically with the fluid. We call this an interaction law. An estimate of the structural response is derived from the structural mode shapes. As a test case, a 2D dam-break problem interacting with an elastic vertical flexible beam is selected. The interaction of the fluid with the beam undergoes several stages. The waves breaking on the beam can increase the added mass drastically, and therefore the added-mass ratio increases as well. In such cases the benefit of the interaction law becomes clear: without it the stability condition requires very high relaxation, whereas using only the first five beam mode shapes allows the relaxation to be reduced. As a consequence, the number of sub-iterations is reduced by one order of magnitude. The numerical observations confirm the reduction in computational time due to the utilization of the interaction law.
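The artificial added-mass instability and the role of relaxation can be reproduced with a single-degree-of-freedom toy model, which is not the authors' interaction law but shows why the sub-iteration count grows with the added-mass ratio: the fixed-point iteration between the two solvers has a gain equal to m_a/m_s and diverges without under-relaxation once that ratio exceeds one.

```python
# Single-DOF toy of the artificial added-mass effect in partitioned coupling:
# the fluid force is -m_a * a, applied with the lagged acceleration, and the
# structural update is under-relaxed.  Numbers and setup are illustrative.
def subiterate(mass_ratio, relax, a_true=1.0, tol=1e-10, max_iter=500):
    """Count sub-iterations for  a = F/m_s - (m_a/m_s) * a  to converge."""
    F_over_ms = a_true * (1.0 + mass_ratio)   # chosen so the fixed point is a_true
    a = 0.0
    for k in range(1, max_iter + 1):
        a_new = F_over_ms - mass_ratio * a    # structure solve with lagged fluid force
        a = (1 - relax) * a + relax * a_new   # under-relaxation
        if abs(a - a_true) < tol:
            return k
    return None                               # did not converge

for ratio in (0.5, 2.0, 10.0):
    relax_opt = 1.0 / (1.0 + ratio)
    print(f"m_a/m_s = {ratio:4.1f}: no relaxation -> {subiterate(ratio, 1.0)}, "
          f"relax = {relax_opt:.2f} -> {subiterate(ratio, relax_opt)}")
```

In this toy, any relaxation factor below 2/(1 + m_a/m_s) converges and the value 1/(1 + m_a/m_s) contracts the error in a single sub-iteration; the larger the added-mass ratio, the stronger the relaxation must be, which is the cost the interaction law is designed to avoid.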


2017 ◽  
Vol 10 (4) ◽  
pp. 1733-1749 ◽  
Author(s):  
Kristofer Döös ◽  
Bror Jönsson ◽  
Joakim Kjellsson

Abstract. Three different trajectory schemes for oceanic and atmospheric general circulation models are compared in two different experiments. The theories of the trajectory schemes are presented, showing the differential equations they solve and why they are mass conserving. One scheme assumes that the velocity fields are stationary for set intervals of time between saved model outputs and solves the trajectory path from a differential equation only as a function of space, i.e. stepwise stationary. The second scheme is a special case of the stepwise-stationary scheme, in which velocities are assumed constant between general circulation model (GCM) outputs; it hence uses a fixed GCM time step. The third scheme uses a continuous linear interpolation of the fields in time and solves the trajectory path from a differential equation as a function of both space and time, i.e. a time-dependent scheme. The trajectory schemes are tested offline, i.e. using the already integrated and stored velocity fields from a GCM. The first comparison of the schemes uses trajectories calculated using the velocity fields from a high-resolution ocean general circulation model in the Agulhas region. The second comparison uses trajectories calculated using the wind fields from an atmospheric reanalysis. The study shows that using the time-dependent scheme over the stepwise-stationary scheme greatly improves accuracy with only a small increase in computational time. It is also found that with decreasing time steps the stepwise-stationary scheme becomes increasingly accurate, but at increased computational cost. The time-dependent scheme is therefore preferred over the stepwise-stationary scheme. However, when averaging over large ensembles of trajectories, the two schemes are comparable, as intrinsic variability dominates over numerical errors. The fixed GCM time step scheme is found to be less accurate than the stepwise-stationary scheme, even when considering averages over large ensembles.
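A one-dimensional toy sketch of the three time treatments compared here (not the authors' mass-conserving schemes): the "fixed GCM time step" variant holds the stored velocity constant over the whole output interval, the "stepwise-stationary" variant holds a time-interpolated velocity constant over shorter sub-intervals, and the "time-dependent" variant interpolates the velocity linearly at every integration step. The velocity record and the Euler stepping are illustrative simplifications.

```python
# Toy 1D trajectory dx/dt = u(x, t) with three treatments of time between
# stored GCM outputs.  Output interval, sub-step and velocities are assumed.
import numpy as np

t_out = np.arange(0.0, 25.0, 6.0)                        # GCM output times [h]
u_out = 1.0 + 0.5 * np.sin(2 * np.pi * t_out / 24.0)     # stored velocities (toy)

def velocity(t, mode):
    i = min(int(t // 6.0), len(t_out) - 2)   # index of the current output interval
    if mode == "fixed":                      # constant over the whole interval
        return u_out[i]
    if mode == "stepwise":                   # constant over 1 h sub-intervals
        frac = (np.floor(t) - t_out[i]) / 6.0
        return (1 - frac) * u_out[i] + frac * u_out[i + 1]
    frac = (t - t_out[i]) / 6.0              # "time-dependent": linear in t
    return (1 - frac) * u_out[i] + frac * u_out[i + 1]

def trajectory(mode, dt=0.25):
    x, t = 0.0, 0.0
    while t < t_out[-1]:
        x += velocity(t, mode) * dt          # forward Euler for clarity
        t += dt
    return x

for mode in ("fixed", "stepwise", "time-dependent"):
    print(mode, trajectory(mode))
```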


Author(s):  
Tu Huynh-Kha ◽  
Thuong Le-Tien ◽  
Synh Ha ◽  
Khoa Huynh-Van

This research work develops a new method to detect forgery in images by combining the wavelet transform and modified Zernike moments (MZMs), in which the features are defined from more pixels than in traditional Zernike moments. The tested image is first converted to grayscale, and a one-level Discrete Wavelet Transform (DWT) is applied to reduce the size of the image by half in both dimensions. The approximation sub-band (LL), which is used for further processing, is then divided into overlapping blocks, and the modified Zernike moments of each block are calculated as feature vectors. The more pixels are considered, the more sufficient the extracted features. Lexicographical sorting and the computation of correlation coefficients on the feature vectors are the next steps used to find similar blocks. The purpose of applying the DWT to reduce the dimension of the image before using Zernike moments with updated coefficients is to improve the computational time and increase the exactness of detection. Copied or duplicated parts are detected as traces of copy-move forgery based on a threshold on the correlation coefficients and confirmed by a constraint on the Euclidean distance. Comparison results between the proposed method and related ones prove the feasibility and efficiency of the proposed algorithm.
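A hedged sketch of the overall pipeline follows. The block feature is a plain low-order DCT stand-in rather than the modified Zernike moments of the paper, feature similarity is checked with a simple Euclidean tolerance instead of correlation coefficients, and the block size, feature length and thresholds are assumed values.

```python
# Copy-move detection pipeline sketch: grayscale -> one-level DWT -> overlapping
# blocks on the LL sub-band -> feature vectors -> lexicographic sort -> compare
# neighbouring vectors -> keep matches whose blocks are far enough apart.
import numpy as np
import pywt
from scipy.fft import dctn

def detect_copy_move(gray, block=8, feat_len=9, min_shift=16, tol=1e-3):
    """gray: 2D numpy array of a grayscale image; returns matched block pairs."""
    ll, _ = pywt.dwt2(gray.astype(float), "haar")    # LL sub-band, half size
    feats, coords = [], []
    for i in range(ll.shape[0] - block + 1):
        for j in range(ll.shape[1] - block + 1):
            c = dctn(ll[i:i + block, j:j + block], norm="ortho")
            feats.append(c.ravel()[:feat_len])       # first few DCT coefficients
            coords.append((i, j))
    order = np.lexsort(np.array(feats).T[::-1])      # lexicographic sort
    matches = []
    for a, b in zip(order[:-1], order[1:]):          # similar vectors end up adjacent
        (ia, ja), (ib, jb) = coords[a], coords[b]
        close_feat = np.linalg.norm(np.array(feats[a]) - np.array(feats[b])) < tol
        far_apart = np.hypot(ia - ib, ja - jb) > min_shift
        if close_feat and far_apart:
            matches.append((coords[a], coords[b]))
    return matches
```

Working on the half-size LL sub-band quarters the number of blocks to compare, which mirrors the computational-time argument for applying the DWT before computing the block features.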

