A parallel domain decomposition method for large eddy simulation of blood flow in human artery with resistive boundary condition
2021, pp. 105201
Author(s): Zi-Ju Liao, Shanlin Qin, Rongliang Chen, Xiao-Chuan Cai
2001, Vol. 446, pp. 309-320
Author(s): Ivan Marusic, Gary J. Kunkel, Fernando Porté-Agel

An experimental investigation was conducted to study the wall boundary condition for large-eddy simulation (LES) of a turbulent boundary layer at Rθ = 3500. Most boundary condition formulations for LES require the specification of the instantaneous filtered wall shear stress field based upon the filtered velocity field at the closest grid point above the wall. Three conventional boundary conditions are tested using simultaneously obtained filtered wall shear stress and streamwise and wall-normal velocities, at locations nominally within the log region of the flow. This was done using arrays of hot-film sensors and X-wire probes. The results indicate that models based on streamwise velocity perform better than those using the wall-normal velocity, but overall significant discrepancies were found for all three models. A new model is proposed which gives better agreement with the shear stress measured at the wall. The new model is also based on the streamwise velocity but is formulated so as to be consistent with ‘outer-flow’ scaling similarity of the streamwise velocity spectra. It is therefore expected to be more generally applicable over a larger range of Reynolds numbers at any first-grid position within the log region of the boundary layer.
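For concreteness, the family of streamwise-velocity wall models tested above can be illustrated with a minimal NumPy sketch. It follows the generic form of such models (mean wall stress from a log-law closure, modulated by the local filtered streamwise velocity at the first grid point); the constants, the fixed-point closure, and all names are illustrative assumptions, not the authors' formulation or their proposed new model:

```python
# Sketch of a conventional streamwise-velocity wall-stress model for LES.
# Assumed form (illustrative): tau_w(x, y) / <tau_w> = u(x, y, z1) / <u(z1)>,
# with <tau_w> closed via the log law. Not the paper's new model.
import numpy as np

KAPPA, B = 0.41, 5.0  # log-law constants (assumed values)

def mean_utau(u_mean_z1, z1, nu, iters=20):
    """Friction velocity from the log law u+ = (1/kappa) ln(z+) + B,
    solved by fixed-point iteration."""
    utau = np.sqrt(nu * u_mean_z1 / z1)  # laminar-style first guess
    for _ in range(iters):
        utau = u_mean_z1 / (np.log(z1 * utau / nu) / KAPPA + B)
    return utau

def wall_stress(u_z1, z1, nu, rho=1.0):
    """Instantaneous filtered wall shear stress field from the filtered
    streamwise velocity u_z1(x, y) at the first off-wall grid point z1."""
    u_mean = u_z1.mean()                 # plane-averaged velocity
    utau = mean_utau(u_mean, z1, nu)
    tau_mean = rho * utau**2             # mean wall shear stress
    return tau_mean * u_z1 / u_mean      # fluctuations track u(z1)
```

The paper's finding is that this kind of coupling of the local stress to the local streamwise velocity outperforms wall-normal-velocity models but still leaves significant discrepancies; its proposed model keeps the streamwise-velocity dependence while reformulating it to respect outer-flow similarity of the streamwise velocity spectra.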


2001, Vol. 440, pp. 75-116
Author(s): Lian Shen, Dick K. P. Yue

In this paper we investigate the large-eddy simulation (LES) of the interaction between a turbulent shear flow and a free surface at low Froude numbers. The benchmark flow field is first solved by using direct numerical simulations (DNS) of the Navier–Stokes equations at fine (128² × 192 grid) resolution, while the LES is performed at coarse resolution. Analysis of the ensemble of 25 DNS datasets shows that the amount of energy transferred from the grid scales to the subgrid scales (SGS) reduces significantly as the free surface is approached. This is a result of energy backscatter associated with the fluid vertical motions. Conditional averaging reveals that the energy backscatter occurs at the splat regions of coherent hairpin vortex structures as they connect to the free surface. The free-surface region is highly anisotropic at all length scales, while the energy backscatter is carried out by the horizontal components of the SGS stress only. The physical insights obtained here are essential to the efficacious SGS modelling of LES for free-surface turbulence. In the LES, the SGS contribution to the Dirichlet pressure free-surface boundary condition is modelled with a dynamic form of the Yoshizawa (1986) expression, while the SGS flux that appears in the kinematic boundary condition is modelled by a dynamic scale-similarity model. For the SGS stress, we first examine the existing dynamic Smagorinsky model (DSM), which is found to capture the free-surface turbulence structure only roughly. Based on the special physics of free-surface turbulence, we propose two new SGS models: a dynamic free-surface function model (DFFM) and a dynamic anisotropic selective model (DASM). The DFFM correctly represents the reduction of the Smagorinsky coefficient near the surface and is found to capture the surface layer more accurately. The DASM takes into account both the anisotropic nature of free-surface turbulence and the dependence of energy backscatter on specific coherent vorticity mechanisms, and is found to produce substantially better surface signature statistics. Finally, we show that the combination of the new DFFM and DASM with a dynamic scale-similarity model further improves the results.
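As a concrete reference for the baseline model compared above, the dynamic Smagorinsky coefficient is obtained from the Germano identity by a least-squares fit. Below is a minimal uniform-grid NumPy sketch (top-hat test filter, periodic boundaries, filter ratio α = 2, volume-averaged coefficient — all illustrative assumptions, not the paper's solver or its new DFFM/DASM closures):

```python
# Sketch of the dynamic Smagorinsky model (DSM) coefficient via the
# Germano identity, for a velocity field u of shape (3, nx, ny, nz) on a
# uniform periodic grid with spacing dx. Illustrative only.
import numpy as np

def test_filter(f):
    """Top-hat test filter: nearest-neighbour average along each axis
    (np.roll assumes periodic boundaries)."""
    g = f.copy()
    for ax in range(f.ndim):
        g = (np.roll(g, 1, ax) + g + np.roll(g, -1, ax)) / 3.0
    return g

def strain(u, dx):
    """S_ij = (du_i/dx_j + du_j/dx_i) / 2, shape (3, 3, nx, ny, nz)."""
    grad = np.array([np.gradient(u[i], dx) for i in range(3)])
    return 0.5 * (grad + grad.transpose(1, 0, 2, 3, 4))

def dynamic_cs2(u, dx, alpha=2.0):
    """Volume-averaged least-squares coefficient C_s^2 = <L:M> / <M:M>."""
    S = strain(u, dx)
    Smag = np.sqrt(2.0 * np.einsum('ij...,ij...->...', S, S))
    uf = np.array([test_filter(u[i]) for i in range(3)])
    Sf = strain(uf, dx)
    Sfmag = np.sqrt(2.0 * np.einsum('ij...,ij...->...', Sf, Sf))
    # Germano identity: L_ij = filt(u_i u_j) - filt(u_i) filt(u_j)
    L = np.array([[test_filter(u[i] * u[j]) - uf[i] * uf[j]
                   for j in range(3)] for i in range(3)])
    # M_ij = 2 dx^2 [ filt(|S| S_ij) - alpha^2 |S_f| S_f_ij ]
    M = 2.0 * dx**2 * (np.array([[test_filter(Smag * S[i, j])
                                  for j in range(3)] for i in range(3)])
                       - alpha**2 * Sfmag * Sf)
    return np.mean(L * M) / np.mean(M * M)
```

The paper's DFFM and DASM modify this baseline: the DFFM damps the coefficient toward the free surface, and the DASM additionally accounts for anisotropy and vorticity-dependent backscatter; those closures are specific to the paper and are not reproduced here.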


Author(s): H. G. Choi, S. W. Kang, J. Y. Yoo

For the large-scale computation of turbulent flows around an arbitrarily shaped body, a parallel LES (large eddy simulation) code has recently been developed in which a domain decomposition method is adopted. The METIS and MPI (message passing interface) libraries are used for domain partitioning and data communication between processors, respectively. For unsteady computation of the incompressible Navier-Stokes equations, a four-step splitting finite element algorithm [1] is adopted, and either the Smagorinsky or the dynamic LES model can be chosen for the modeling of small eddies in turbulent flows. For the outlet (open) boundary condition, a Dirichlet boundary condition for the pressure is proposed. For the validation and performance estimation of the parallel code, a three-dimensional laminar flow generated by natural convection inside a cube has been solved. We have confirmed that our code gives accurate results compared with previous studies. Regarding the speed-up of the code, the present parallel code with a parallel block-Jacobi preconditioner is about 50 times faster than the corresponding serial code when 64 processors and approximately one million grid points are used; most of the CPU time is consumed in solving the elliptic-type pressure equation. For the validation of the LES models, turbulent channel flow is simulated at Re = 180 (based on the channel half-height and friction velocity) using a 51 × 71 × 71 grid. It has been shown that our results for the time-averaged velocity field and the velocity fluctuations agree well with the well-known results of Kim et al. [2], with fewer grid points than they used. Lastly, we have solved the turbulent flow around the MIRA (Motor Industry Research Association) model at Re = 1.6 × 10⁶, based on the model height and inlet free-stream velocity. Both the Smagorinsky and dynamic models are tested, and the estimated drag coefficients and pressure distribution along the model surface are compared with existing experimental data [3]. With the help of the parallel code developed in this study, we are able to obtain an unsteady solution of the turbulent flow field around a vehicle discretized by approximately three million grid points within two weeks using 32 IBM SP2 processors. The calculated drag coefficient agrees better with the experimental result [3] than those obtained using two-equation turbulence models [4].
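The solver structure described above — domain decomposition over MPI with a block-Jacobi-preconditioned iterative solve of the elliptic pressure equation — can be sketched with mpi4py. In this minimal sketch a 1D Laplacian with Dirichlet ends stands in for the FEM pressure matrix, a trivial 1D row split replaces METIS partitioning, and all names and sizes are illustrative:

```python
# Sketch: preconditioned conjugate gradients for an elliptic (pressure-type)
# equation, distributed by rows across MPI ranks, with a block-Jacobi
# preconditioner (one block per rank). Run e.g.: mpiexec -n 4 python pcg.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
n_local = 100  # rows owned by this rank (illustrative)

def matvec(x):
    """y = A x for the global 1D Laplacian; couplings that cross rank
    boundaries are handled by a halo exchange with the neighbours."""
    lo = comm.sendrecv(x[0], dest=rank - 1, source=rank - 1) if rank > 0 else 0.0
    hi = comm.sendrecv(x[-1], dest=rank + 1, source=rank + 1) if rank < size - 1 else 0.0
    y = 2.0 * x
    y[1:] -= x[:-1]
    y[:-1] -= x[1:]
    y[0] -= lo
    y[-1] -= hi
    return y

# Local diagonal block; block-Jacobi drops the couplings to other ranks,
# so applying the preconditioner needs no communication at all.
A_loc = (2.0 * np.eye(n_local)
         - np.eye(n_local, k=1) - np.eye(n_local, k=-1))

def precond(r):
    return np.linalg.solve(A_loc, r)

def pcg(b, tol=1e-8, maxit=500):
    """Parallel PCG: global reductions appear only in the dot products."""
    x = np.zeros_like(b)
    r = b - matvec(x)
    z = precond(r)
    p = z.copy()
    rz = comm.allreduce(r @ z)
    for _ in range(maxit):
        Ap = matvec(p)
        alpha = rz / comm.allreduce(p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.sqrt(comm.allreduce(r @ r)) < tol:
            break
        z = precond(r)
        rz_new = comm.allreduce(r @ z)
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

p_sol = pcg(np.ones(n_local))  # stand-in pressure right-hand side
```

Because the block-Jacobi preconditioner is communication-free, adding ranks weakens it slightly (more inter-block couplings are dropped), while the halo exchanges in the matvec and the all-reduce inner products set the communication cost — the usual trade-off behind a speed-up figure like the reported 50× on 64 processors.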

