Direct Numerical Simulation and Large Eddy Simulation on a Turbulent Wall-Bounded Flow Using Lattice Boltzmann Method and Multiple GPUs

2014 ◽  
Vol 2014 ◽  
pp. 1-10 ◽  
Author(s):  
Xian Wang ◽  
Yanqin Shangguan ◽  
Naoyuki Onodera ◽  
Hiromichi Kobayashi ◽  
Takayuki Aoki

Direct numerical simulation (DNS) and large eddy simulation (LES) were performed on a wall-bounded flow at Re_τ = 180 using the lattice Boltzmann method (LBM) and multiple GPUs (graphics processing units). In the DNS, 8 K20M GPUs were adopted. The maximum number of meshes is 6.7 × 10^7, which results in a nondimensional mesh size of Δ+ = 1.41 over the whole solution domain. The GPU-LBM solver took 24 hours to simulate 3 × 10^6 LBM steps. The aspect ratio of the solution domain was tested to obtain accurate DNS results: both the mean velocity and turbulent quantities, such as the Reynolds stress and velocity fluctuations, agree well with the results of Kim et al. (1987) when the aspect ratios in the streamwise and spanwise directions are 8 and 2, respectively. For the LES, a local grid refinement technique was tested and then used; with 1.76 × 10^6 grids and a Smagorinsky constant Cs = 0.13, good results were obtained. The ability and validity of the LBM for simulating turbulent flow were verified.
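The abstract does not reproduce the authors' CUDA solver, so as an illustration only, here is a minimal NumPy sketch of the collide-and-stream cycle the LBM is built on, using the single-relaxation-time (BGK) model on the simpler D2Q9 lattice rather than the solver's actual discretization; all function names are ours, not the paper's.

```python
import numpy as np

# D2Q9 lattice velocities and weights (standard values)
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def equilibrium(rho, ux, uy):
    """Second-order BGK equilibrium distribution, shape (9, nx, ny)."""
    cu = c[:, 0, None, None] * ux + c[:, 1, None, None] * uy
    usq = ux**2 + uy**2
    return rho * w[:, None, None] * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def lbm_step(f, tau):
    """One collide-and-stream update on a fully periodic grid."""
    rho = f.sum(axis=0)                                  # density
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho     # momentum / density
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f = f - (f - equilibrium(rho, ux, uy)) / tau         # BGK collision
    for i in range(9):                                   # streaming: shift along c_i
        f[i] = np.roll(f[i], shift=(c[i, 0], c[i, 1]), axis=(0, 1))
    return f
```

Because both collision and streaming are purely local stencil operations, each lattice node maps naturally to one GPU thread, which is why LBM scales well across multiple GPUs.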

Computing ◽  
2013 ◽  
Vol 96 (6) ◽  
pp. 479-501 ◽  
Author(s):  
Qinjian Li ◽  
Chengwen Zhong ◽  
Kai Li ◽  
Guangyong Zhang ◽  
Xiaowei Lu ◽  
...  

2018 ◽  
Vol 2018 ◽  
pp. 1-12 ◽  
Author(s):  
Lei Xu ◽  
Anping Song ◽  
Wu Zhang

The lattice Boltzmann method (LBM) has become an attractive and promising approach in computational fluid dynamics (CFD). In this paper, a parallel algorithm for the D3Q19 multiple-relaxation-time LBM with large eddy simulation (LES) is presented to simulate 3D flow past a sphere on multiple GPUs (graphics processing units). To handle the complex boundary, a method for identifying boundary lattice nodes is devised. A 3D domain decomposition is applied to improve scalability on a cluster, and an overlapping mode is introduced to hide the communication time by dividing each subdomain into two parts: an inner part and an outer part. Numerical results show good agreement with the literature, and 12 Kepler K20M GPUs achieve about 5100 million lattice updates per second, which indicates considerable scalability.
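The inner/outer overlap can be sketched schematically. The fragment below is a NumPy illustration of the pattern only, not the authors' code: a fake request object stands in for real non-blocking transfers (MPI_Isend or cudaMemcpyAsync between GPUs), a 7-point average stands in for the per-cell LBM work, and a periodic wrap stands in for the neighbouring subdomains.

```python
import numpy as np

class FakeRequest:
    """Stand-in for a non-blocking transfer handle (MPI_Isend / cudaMemcpyAsync)."""
    def __init__(self, fn):
        self._fn = fn
    def wait(self):
        self._fn()  # in real code: block until the transfer completes

def start_halo_exchange(sub):
    """Post the halo exchange for a subdomain with a one-cell halo layer.
    A periodic wrap inside one array stands in for neighbour subdomains."""
    def fill():
        sub[0] = sub[-2]; sub[-1] = sub[1]
        sub[:, 0] = sub[:, -2]; sub[:, -1] = sub[:, 1]
        sub[:, :, 0] = sub[:, :, -2]; sub[:, :, -1] = sub[:, :, 1]
    return FakeRequest(fill)

def stencil(a):
    """The per-cell work of one step, abstracted as a 7-point average."""
    return (a[1:-1, 1:-1, 1:-1]
            + a[:-2, 1:-1, 1:-1] + a[2:, 1:-1, 1:-1]
            + a[1:-1, :-2, 1:-1] + a[1:-1, 2:, 1:-1]
            + a[1:-1, 1:-1, :-2] + a[1:-1, 1:-1, 2:]) / 7.0

def step_overlapped(sub):
    """One step with communication hidden behind the inner-part update."""
    req = start_halo_exchange(sub)                  # 1. post communication
    new = sub.copy()
    # 2. inner part: cells whose stencils never touch the halo layer
    new[2:-2, 2:-2, 2:-2] = stencil(sub[1:-1, 1:-1, 1:-1])
    req.wait()                                      # 3. halos now valid
    # 4. outer part: the one-cell shell that needed neighbour data
    full = stencil(sub)
    shell = np.ones(sub.shape, dtype=bool)
    shell[2:-2, 2:-2, 2:-2] = False
    view = new[1:-1, 1:-1, 1:-1]                    # view: writes through to new
    view[shell[1:-1, 1:-1, 1:-1]] = full[shell[1:-1, 1:-1, 1:-1]]
    return new
```

The point of the split is that step 2 needs no remote data, so the GPU stays busy while the boundary planes are in flight; only the thin outer shell waits for the exchange to finish.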

