The Role of a Structural Mode Shape Based Interaction Law to Suppress Added Mass Instabilities in Partitioned Strongly Coupled Elastic Structure-Fluid Systems

Author(s):  
S. M. Hosseini Zahraei ◽  
A. E. P. Veldman ◽  
P. R. Wellens ◽  
I. Akkerman ◽  
R. H. M. Huijsmans

The physical interaction between fluid and structure can be either one-way or two-way, depending on the direction of information exchange at the fluid-solid interface. One-way problems can be solved with a partitioned approach and weak coupling. In two-way fluid-structure interaction solved with a partitioned approach and strong coupling, a stability restriction is sometimes encountered: an artificial added mass effect that is independent of the numerical time step. Unfortunately, accurate and efficient methods that handle all the different levels of interaction are scarce. Conventionally, relaxation is applied as a remedy, and the computational cost is directly related to the number of sub-iterations between the fluid and structural solvers at each time step. In this study, the source of this instability is investigated. A discrete representation of a basic added mass operator is given and the conditions for instability are assessed. A new method is proposed to relax this restriction; the idea is essentially to remove the source of instability from the structure, move it to the fluid, and solve it monolithically with the fluid. We call this an interaction law. An estimate of the structural response is derived from the structural mode shapes. As a test case, a 2D dam-break flow interacting with a vertical elastic flexible beam is selected. The interaction of the fluid with the beam passes through several stages. Waves breaking on the beam can increase the added mass drastically, and with it the added mass ratio. In such cases the benefit of the interaction law becomes clear: without it, stability requires very strong relaxation, whereas with only the first five beam mode shapes the relaxation can be lowered. As a consequence, the number of sub-iterations is reduced by an order of magnitude. The numerical observations confirm the reduction in computational time obtained with the interaction law.
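The added-mass instability and the role of relaxation can be illustrated with a minimal single-degree-of-freedom sketch (an illustration only, not the authors' solver or interaction law): the fluid feedback is modeled as a pure added-mass force, Gauss-Seidel sub-iterations amplify errors by the added mass ratio r = m_a/m_s, and under-relaxation with omega < 2/(1 + r) restores convergence.

```python
# Hypothetical 1-DOF model of partitioned strong coupling: the fluid returns
# a force F_f = f_ext - m_a * a, and the structure solves m_s * a = F_f.
# The plain Gauss-Seidel loop amplifies errors by the added mass ratio
# r = m_a / m_s, so it diverges for r > 1 unless under-relaxation with
# omega < 2 / (1 + r) is applied.

def sub_iterate(m_s, m_a, f_ext, omega, n_iter=100, tol=1e-10):
    a = 0.0
    for k in range(n_iter):
        a_new = (f_ext - m_a * a) / m_s              # structure solve
        a_next = (1.0 - omega) * a + omega * a_new   # under-relaxed update
        if abs(a_next - a) < tol:
            return a_next, k + 1                     # converged
        a = a_next
    return a, n_iter

# r = 4 > 1: unstable without relaxation; omega = 0.3 < 2/5 converges to the
# monolithic solution a = f_ext / (m_s + m_a) = 1.0.
a, iters = sub_iterate(m_s=1.0, m_a=4.0, f_ext=5.0, omega=0.3)
```

The interaction law in the paper goes further by moving the added-mass contribution into the fluid solve; this sketch only shows why strong relaxation is otherwise needed at high added mass ratios.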

2019 ◽  
Vol 9 (10) ◽  
pp. 1972 ◽  
Author(s):  
Elzbieta Gawronska

Progress in computational methods has been stimulated by the widespread availability of cheap computational power, leading to improved precision and efficiency of simulation software. Simulation tools have become indispensable for engineers who want to attack increasingly large problems or to search a larger phase space of process and system variables to find an optimal design. In this paper, we introduce a new computational method that involves a mixed time-stepping scheme and reduces computational cost. Implementation of our algorithm does not require a parallel computing environment. Our strategy splits the domains of a dynamically changing physical phenomenon and allows the numerical model to be adjusted to the various sub-domains. To the best of our knowledge, we are the first to show that it is possible to use a mixed time-partitioning method with various combinations of schemes during the solidification of binary alloys. In particular, we use a fixed time step in one domain and look for much larger time steps in the other domains, while maintaining high accuracy. Our method is independent of the number of domains considered, in contrast to traditional methods, where only two domains were considered. Mixed time-partitioning methods are of high importance here because of the natural separation of the domain types: typically, all the important physical phenomena occur in the casting, at high computational cost, while the mold domains exhibit less dynamic processes, so a larger time step can be chosen there. Finally, we performed a series of numerical experiments demonstrating that our approach reduces the computational time by more than a factor of three without significant loss of precision and without parallel computing.
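The domain-split idea can be sketched on a toy 1-D explicit heat problem (illustrative assumptions only: two uniform sub-domains with Dirichlet ends, interface values exchanged once per coarse step; this is not the paper's solidification model). The hot "casting" half subcycles with a fine step while the "mold" half takes a single coarse step.

```python
import numpy as np

# Toy sketch of mixed time stepping: explicit 1-D heat updates on two
# sub-domains.  The "casting" domain is advanced with n_sub fine steps per
# coarse step, the "mold" domain with one coarse step; interface data are
# exchanged only at the coarse step.

def explicit_step(T, left, right, alpha, dt, dx):
    # one explicit Euler step with Dirichlet values `left` and `right`
    full = np.concatenate(([left], T, [right]))
    return T + alpha * dt / dx**2 * (full[2:] - 2.0 * full[1:-1] + full[:-2])

def mixed_step(Tc, Tm, alpha_c, alpha_m, dt_coarse, n_sub, dx):
    iface = 0.5 * (Tc[-1] + Tm[0])          # frozen over the coarse step
    for _ in range(n_sub):                  # casting: n_sub fine steps
        Tc = explicit_step(Tc, 1000.0, iface, alpha_c, dt_coarse / n_sub, dx)
    Tm = explicit_step(Tm, iface, 300.0, alpha_m, dt_coarse, dx)  # mold: one step
    return Tc, Tm

Tc, Tm = np.full(10, 1000.0), np.full(10, 300.0)   # invented initial fields
for _ in range(20):
    Tc, Tm = mixed_step(Tc, Tm, alpha_c=1e-4, alpha_m=1e-5,
                        dt_coarse=0.1, n_sub=10, dx=0.01)
```

Both sub-steps respect the explicit stability limit; the saving comes from not forcing the slow mold domain onto the casting domain's fine step.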


Author(s):  
Andrew M. Feldick ◽  
Gopalendu Pal

Abstract The introduction of higher-fidelity spectral models into a Discrete Ordinates Method (DOM) RTE solver introduces the challenge of solving the N(N+2) coupled equations in intensity over many spectral points. The inability to store the intensity fields leads to a nonlinear increase in computational cost compared to basic gray models, since the solution in an evolving field must be recalculated at each radiation time step. In this paper, an approximate initialization approach is used to reconstruct values of the intensities. This approach is particularly well suited to spectrally reordered methods, as the boundary conditions and scattering coefficients are gray. The approach leads to more tractable computational times and is demonstrated on two industrial-scale flames.


Author(s):  
Injae Lee ◽  
Haecheon Choi

In the present study, a new immersed boundary method for the simulation of flow around an elastic slender body is suggested. The present method is based on the discrete-forcing immersed boundary method of Kim et al. (J. Comput. Phys., 2001) and is fully coupled with the motion of the elastic slender body. The incompressible Navier-Stokes equations are solved in an Eulerian coordinate system, while the motion of the elastic slender body is described in a Lagrangian coordinate system. The elastic slender body is modeled as a thin flexible beam and is segmented into a finite number of blocks. Each block is then moved by external and internal forces such as the hydrodynamic, tension, bending, and buoyancy forces. With the proposed method, we simulate several flow problems, including flow over a flexible filament, an oscillating insect wing, and a flapping flag. We show that the present method does not impose any severe limitation on the size of the computational time step. The results obtained agree very well with those from previous studies.


Author(s):  
Mohammad I. Hatamleh ◽  
Jagannathan Mahadevan ◽  
Arif Malik ◽  
Dong Qian

The single explicit analysis using time-dependent damping (SEATD) technique for laser shock peening (LSP) simulation employs variable damping to relax the excited model between laser shots, distinguishing it from conventional optimum constant damping methods. Dynamic relaxation (DR) is the well-established conventional technique that mathematically identifies the optimum constant damping coefficient and incremental time step guaranteeing stability and convergence while damping all mode shapes uniformly when bringing a model to quasi-static equilibrium. This research examines a new systematic procedure for constructing a more effective, time-dependent variable damping profile for general LSP configurations and boundary conditions, based on the excited modal parameters of a given laser-shocked system. The effects of increasing the number of mode shapes and of selecting modes by their effective mass contributions are studied, and a procedure to identify the most efficient variable damping profile is designed. Two different simulation cases are studied. It is found that the computational time is reduced by up to 25% (62.5 min) for just five laser shots using the presented variable damping method versus conventional optimum constant damping. Since LSP typically involves hundreds of shots, the accumulated savings in computation time during prediction of the desired process parameters are significant.
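The contrast between constant and time-dependent damping can be illustrated on a single-degree-of-freedom oscillator (a toy stand-in, not an LSP model; all coefficients and the damping profile below are invented for illustration):

```python
# Toy 1-DOF illustration of variable vs. constant damping: a semi-implicit
# Euler update is damped toward its static equilibrium x = f / k.  A
# time-dependent profile can damp harder early in the ring-down and then
# back off, which is the idea behind SEATD-style variable damping.

def ring_down(damping, m=1.0, k=100.0, f=1.0, dt=1e-3, n=5000):
    x, v = 0.0, 0.0
    for i in range(n):
        a = (f - k * x - damping(i * dt) * v) / m   # damped acceleration
        v += a * dt
        x += v * dt
    return x                                        # approaches f / k = 0.01

x_const = ring_down(lambda t: 2.0)                      # constant damping
x_var = ring_down(lambda t: 30.0 if t < 0.5 else 2.0)   # invented variable profile
```

Both runs reach the same quasi-static equilibrium; the paper's contribution is choosing the time-dependent profile from the excited modal parameters so that equilibrium is reached in fewer increments.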


2018 ◽  
Vol 141 (2) ◽  
Author(s):  
Luis E. Monterrubio ◽  
Petr Krysl

This work presents an efficient way to calculate the added mass matrix, which allows solving for the natural frequencies and modes of solids vibrating in an inviscid, infinite fluid. The finite element method (FEM) is used to compute the vibration spectrum of the dry structure; the boundary element method (BEM) is then applied to compute the pressure modes needed to determine the added mass matrix that represents the fluid. The BEM requires numerical integration, which results in a large computational cost. In this work, the computational cost was reduced by computing the values of the pressure modes, with the required numerical integration, on a coarse BEM mesh, and then interpolating to obtain the pressure modes at the nodes of a fine FEM mesh. The added mass matrix was then computed and added to the original mass matrix of the generalized eigenvalue problem to determine the wetted natural frequencies. The computational cost was further minimized by using a reduced eigenvalue problem of size equal to the requested number of natural frequencies. The results show that the error in the natural frequencies obtained with this procedure is between 2% and 5%, with an 87% reduction in computational time. The motivation of this work is the study of the vibration of marine mammals' ear bones.
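The cost-saving step, evaluating an expensive boundary quantity on a coarse mesh and interpolating onto the fine mesh, can be sketched in one dimension (the sine function here merely stands in for an expensive pressure-mode integral; the meshes are invented):

```python
import numpy as np

# 1-D sketch of the coarse-to-fine strategy (not the BEM itself): a quantity
# that is expensive to evaluate, standing in for a pressure mode, is computed
# on a coarse set of boundary nodes and linearly interpolated onto the fine
# FEM nodes instead of being integrated at every fine node.

def pressure_mode(x):               # placeholder for an expensive BEM integral
    return np.sin(np.pi * x)

x_coarse = np.linspace(0.0, 1.0, 11)    # coarse "BEM" mesh
x_fine = np.linspace(0.0, 1.0, 101)     # fine "FEM" mesh

p_coarse = pressure_mode(x_coarse)                 # few expensive evaluations
p_fine = np.interp(x_fine, x_coarse, p_coarse)     # cheap interpolation

# interpolation error against direct evaluation on the fine mesh
err = np.max(np.abs(p_fine - pressure_mode(x_fine)))
```

For a smooth mode the interpolation error is second order in the coarse spacing, which is consistent with the few-percent frequency errors reported in exchange for the large reduction in integration work.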


2016 ◽  
Author(s):  
Kristofer Döös ◽  
Bror Jönsson ◽  
Joakim Kjellsson

Abstract. Two different trajectory schemes for oceanic and atmospheric general circulation models are compared in two different experiments. The theories of the two trajectory schemes are presented showing the differential equations they solve and why they are mass conserving. One scheme assumes that the velocity fields are stationary for a limited period of time and solves the trajectory path from a differential equation only as a function of space, i.e. "stepwise stationary". The second scheme uses a continuous linear interpolation of the fields in time and solves the trajectory path from a differential equation as a function of both space and time, i.e. "time-dependent". A special case of the "stepwise-stationary" scheme, when velocities are assumed constant between GCM outputs, is also considered, named "fixed GCM time step". The trajectory schemes are tested "off-line", i.e. using the already integrated and stored velocity fields from a GCM. The first comparison of the schemes uses trajectories calculated using the velocity fields from an eddy-resolving ocean general circulation model in the Agulhas region. The second comparison uses trajectories calculated using the wind fields from an atmospheric reanalysis. The study shows that using the "time-dependent" scheme over the "stepwise-stationary" scheme greatly improves accuracy with only a small increase in computational time. It is also found that with decreasing time steps the "stepwise-stationary" scheme becomes more accurate but at increased computational cost. The "time-dependent" scheme is therefore preferred over the "stepwise-stationary" scheme. However, when averaging over large ensembles of trajectories the two schemes are comparable, as intrinsic variability dominates over numerical errors. The "fixed GCM time step" is found to be less accurate than the "stepwise-stationary" scheme, even when considering averages over large ensembles.


2014 ◽  
Vol 7 (5) ◽  
pp. 1961-1977 ◽  
Author(s):  
H. Wan ◽  
P. J. Rasch ◽  
K. Zhang ◽  
Y. Qian ◽  
H. Yan ◽  
...  

Abstract. This paper explores the feasibility of an experimentation strategy for investigating sensitivities in the fast components of atmospheric general circulation models. The basic idea is to replace the traditional serial-in-time long-term climate integrations by representative ensembles of shorter simulations. The key advantage of the proposed method lies in its efficiency: since fewer days of simulation are needed, the computational cost is lower, and because individual realizations are independent and can be integrated simultaneously, the new dimension of parallelism can dramatically reduce the turnaround time in benchmark tests, sensitivity studies, and model tuning exercises. The strategy is not appropriate for exploring the sensitivity of all model features, but it is very effective in many situations. Two examples are presented using the Community Atmosphere Model, version 5. In the first example, the method is used to characterize sensitivities of the simulated clouds to time-step length. Results show that 3-day ensembles of 20 to 50 members are sufficient to reproduce the main signals revealed by traditional 5-year simulations. A nudging technique is applied to an additional set of simulations to help understand the contribution of physics–dynamics interaction to the detected time-step sensitivity. In the second example, multiple empirical parameters related to cloud microphysics and the aerosol life cycle are perturbed simultaneously in order to find out which parameters have the largest impact on the simulated global mean top-of-atmosphere radiation balance. It turns out that 12-member ensembles of 10-day simulations are able to reveal the same sensitivities as seen in 4-year simulations performed in a previous study. In both cases, the ensemble method reduces the total computational time by a factor of about 15, and the turnaround time by a factor of several hundred. The efficiency of the method makes it particularly useful for the development of high-resolution, costly, and complex climate models.


2017 ◽  
Vol 10 (4) ◽  
pp. 1733-1749 ◽  
Author(s):  
Kristofer Döös ◽  
Bror Jönsson ◽  
Joakim Kjellsson

Abstract. Three different trajectory schemes for oceanic and atmospheric general circulation models are compared in two different experiments. The theories of the trajectory schemes are presented, showing the differential equations they solve and why they are mass conserving. One scheme assumes that the velocity fields are stationary for set intervals of time between saved model outputs and solves the trajectory path from a differential equation only as a function of space, i.e. stepwise stationary. The second scheme is a special case of the stepwise-stationary scheme, where velocities are assumed constant between general circulation model (GCM) outputs; hence it uses a fixed GCM time step. The third scheme uses a continuous linear interpolation of the fields in time and solves the trajectory path from a differential equation as a function of both space and time, i.e. a time-dependent scheme. The trajectory schemes are tested offline, i.e. using the already integrated and stored velocity fields from a GCM. The first comparison of the schemes uses trajectories calculated from the velocity fields of a high-resolution ocean general circulation model in the Agulhas region. The second comparison uses trajectories calculated from the wind fields of an atmospheric reanalysis. The study shows that using the time-dependent scheme instead of the stepwise-stationary scheme greatly improves accuracy with only a small increase in computational time. It is also found that with decreasing time steps the stepwise-stationary scheme becomes increasingly accurate, but at increased computational cost. The time-dependent scheme is therefore preferred over the stepwise-stationary scheme. However, when averaging over large ensembles of trajectories, the two schemes are comparable, as intrinsic variability dominates over numerical errors. The fixed GCM time step scheme is found to be less accurate than the stepwise-stationary scheme, even when considering averages over large ensembles.
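The accuracy gap between the stepwise-stationary and time-dependent schemes can be reproduced with a toy velocity field u(t) = t (chosen for illustration, not from the paper; its exact trajectory from x(0) = 0 is x(1) = 1/2, and linear-in-time interpolation of the stored outputs recovers it exactly):

```python
import numpy as np

# Toy comparison of the two schemes for dx/dt = u(t) with u(t) = t, whose
# exact solution from x(0) = 0 is x(1) = 0.5.  Velocities are "stored" only
# at output times t_k: the stepwise-stationary scheme freezes u at u(t_k)
# over each interval, while the time-dependent scheme interpolates u
# linearly in time between stored outputs.

def advect(scheme, t_out, u_out, n_sub=10):
    x = 0.0
    for k in range(len(t_out) - 1):
        dt = (t_out[k + 1] - t_out[k]) / n_sub
        for j in range(n_sub):
            if scheme == "stepwise-stationary":
                u = u_out[k]                     # frozen over the interval
            else:                                # "time-dependent"
                frac = (j + 0.5) / n_sub         # midpoint of the sub-step
                u = (1.0 - frac) * u_out[k] + frac * u_out[k + 1]
            x += u * dt
    return x

t_out = np.linspace(0.0, 1.0, 11)       # stored "GCM outputs"
u_out = t_out.copy()                    # u(t) = t at the output times
x_ss = advect("stepwise-stationary", t_out, u_out)   # first-order error
x_td = advect("time-dependent", t_out, u_out)        # exact for linear u
```

The stepwise-stationary result lands at 0.45, a first-order error in the output interval, while the time-dependent scheme is exact here because the true velocity is linear in time.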


Author(s):  
Tu Huynh-Kha ◽  
Thuong Le-Tien ◽  
Synh Ha ◽  
Khoa Huynh-Van

This research develops a new method to detect image forgery by combining the wavelet transform with modified Zernike moments (MZMs), in which the features are defined from more pixels than in traditional Zernike moments. The tested image is first converted to grayscale, and a one-level Discrete Wavelet Transform (DWT) is applied to halve the image size in both dimensions. The approximation sub-band (LL), which is used for further processing, is then divided into overlapping blocks, and modified Zernike moments are calculated in each block as feature vectors. The more pixels are considered, the more discriminative the extracted features. Lexicographical sorting and computation of correlation coefficients on the feature vectors are the next steps to find similar blocks. The purpose of applying the DWT to reduce the dimension of the image before using Zernike moments with updated coefficients is to improve the computational time and increase the exactness of detection. Copied or duplicated parts are detected as traces of copy-move forgery based on a threshold on the correlation coefficients and confirmed by a constraint on the Euclidean distance. Comparisons between the proposed method and related ones demonstrate the feasibility and efficiency of the proposed algorithm.
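The matching stage (overlapping blocks, lexicographic sorting, comparison of sort neighbours, distance constraint) can be sketched with simple block statistics standing in for the modified Zernike moments; the features, block size, and test image below are illustrative assumptions, not the paper's.

```python
import numpy as np

# Sketch of the copy-move matching stage only.  Simple block statistics stand
# in for the modified Zernike moments; blocks with matching features that are
# far enough apart are flagged as copy-move candidates.

def find_duplicate_blocks(img, b=4):
    h, w = img.shape
    feats, pos = [], []
    for i in range(h - b + 1):              # overlapping b x b blocks
        for j in range(w - b + 1):
            blk = img[i:i + b, j:j + b].astype(float)
            feats.append((blk.mean(), blk.std(),
                          blk[:b // 2].mean(), blk[:, :b // 2].mean()))
            pos.append((i, j))
    order = np.lexsort(np.array(feats).T[::-1])   # lexicographic feature sort
    matches = []
    for a, c in zip(order[:-1], order[1:]):       # compare sorted neighbours
        if np.allclose(feats[a], feats[c]):
            (i1, j1), (i2, j2) = pos[a], pos[c]
            if (i1 - i2) ** 2 + (j1 - j2) ** 2 >= b ** 2:  # skip near-overlaps
                matches.append((pos[a], pos[c]))
    return matches

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(16, 16))
img[10:14, 10:14] = img[2:6, 2:6]        # plant a copied 4x4 patch
matches = find_duplicate_blocks(img)
```

Sorting makes blocks with near-identical features adjacent, so only neighbouring rows need comparison instead of all block pairs; the distance constraint discards trivial matches between overlapping blocks.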


1994 ◽  
Vol 29 (1-2) ◽  
pp. 53-61
Author(s):  
Ben Chie Yen

Urban drainage models utilize hydraulics at different levels. Developing or selecting a model appropriate to a particular project is not an easy task. Not knowing the hydraulic principles and numerical techniques used in an existing model, users often misuse and abuse the model. Hydraulically, the use of the Saint-Venant equations is not always necessary. In many cases the kinematic wave equation is inadequate because of the backwater effect, whereas in designing sewers Manning's formula is often adequate. The flow travel time provides a guide for selecting the computational time step Δt, which in turn, together with flow unsteadiness, helps in the selection of steady or unsteady flow routing. Often the noninertia model is the appropriate model for unsteady flow routing, whereas delivery curves are very useful for stepwise steady nonuniform flow routing and for determining channel capacity.

