Lattice–Boltzmann simulations for complex geometries on high-performance computers

2020 ◽  
Vol 11 (3) ◽  
pp. 745-766
Author(s):  
Andreas Lintermann ◽  
Wolfgang Schröder

Soft Matter ◽ 
2014 ◽  
Vol 10 (41) ◽  
pp. 8267-8275
Author(s):  
Rodrigo Ledesma-Aguilar ◽  
Dominic Vella ◽  
Julia M. Yeomans

We validate lattice-Boltzmann simulations as a means of studying evaporation phenomena in complex geometries.
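
To make the method concrete for readers new to it, here is a minimal single-component D2Q9 BGK lattice-Boltzmann step in Python. It is a sketch of the generic collide-and-stream algorithm only, not the multiphase evaporation model validated in the paper; all names and parameter values are illustrative.

```python
# Minimal single-component D2Q9 BGK lattice-Boltzmann step (illustrative only;
# evaporation studies require a multiphase model and geometry boundary conditions).
import numpy as np

# D2Q9 lattice velocities and weights
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, ux, uy):
    """Second-order equilibrium distribution for each of the 9 directions."""
    cu = c[:, 0, None, None]*ux + c[:, 1, None, None]*uy   # c_i . u
    usq = ux**2 + uy**2
    return rho * w[:, None, None] * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def lbm_step(f, tau):
    """One collide-and-stream update; tau is the BGK relaxation time."""
    rho = f.sum(axis=0)                               # density
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho  # velocity components
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f += (equilibrium(rho, ux, uy) - f) / tau         # BGK collision
    for i in range(9):                                # periodic streaming
        f[i] = np.roll(f[i], shift=(c[i, 0], c[i, 1]), axis=(0, 1))
    return f

# Example: 64x64 periodic box initialized at rest with unit density
f = np.ones((9, 64, 64)) * w[:, None, None]
f = lbm_step(f, tau=0.8)
```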


Author(s):  
Radhika S. Saksena ◽  
Marco D. Mazzeo ◽  
Stefan J. Zasada ◽  
Peter V. Coveney

We present very large-scale rheological studies of self-assembled cubic gyroid liquid crystalline phases in ternary mixtures of oil, water and amphiphilic species performed on petascale supercomputers using the lattice-Boltzmann method. These nanomaterials have found diverse applications in materials science and biotechnology, for example, in photovoltaic devices and protein crystallization. They are increasingly gaining importance as delivery vehicles for active agents in pharmaceuticals, personal care products and food technology. In many of these applications, the self-assembled structures are subject to flows of varying strengths, and we endeavour to understand their rheological response with the objective of eventually predicting it under given flow conditions. Computationally, our lattice-Boltzmann simulations of ternary fluids are inherently memory- and data-intensive. Furthermore, our interest in dynamical processes necessitates remote visualization and analysis, as well as the associated transfer and storage of terabytes of time-dependent data. These simulations are distributed on a high-performance grid infrastructure using the Application Hosting Environment; we employ a novel parallel in situ visualization approach, which is particularly suited to such computations on petascale resources. We present computational and I/O performance benchmarks of our application on three different petascale systems.
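
To illustrate why such runs are memory- and data-intensive, the following back-of-envelope estimate assumes a D3Q19 lattice with three fluid components in double precision; the lattice size and per-snapshot field count are assumptions chosen for illustration, not figures taken from the paper.

```python
# Back-of-envelope memory and I/O estimate for a ternary lattice-Boltzmann run.
# Assumptions (not from the paper): D3Q19 lattice, three components, doubles.

BYTES_PER_DOUBLE = 8
Q = 19            # discrete velocities per component (D3Q19)
COMPONENTS = 3    # oil, water, amphiphile

def lattice_memory_bytes(nx, ny, nz):
    """Distribution-function storage for one copy of the lattice."""
    return nx * ny * nz * COMPONENTS * Q * BYTES_PER_DOUBLE

def dump_size_bytes(nx, ny, nz, fields=5):
    """Size of one output snapshot, assuming `fields` scalar fields per site
    (e.g. three densities plus two order parameters -- an assumption)."""
    return nx * ny * nz * fields * BYTES_PER_DOUBLE

n = 2048  # an assumed petascale-class cubic lattice edge length
print(f"lattice state : {lattice_memory_bytes(n, n, n) / 2**40:.1f} TiB")
print(f"per snapshot  : {dump_size_bytes(n, n, n) / 2**40:.2f} TiB")
```

At this assumed 2048³ resolution the distribution functions alone occupy roughly 3.6 TiB, and even a modest snapshot cadence quickly accumulates the terabytes of time-dependent data mentioned above.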


Author(s):  
Simon Zimny ◽  
Kannan Masilamani ◽  
Kartik Jain ◽  
Sabine Roller

IEEE Access ◽  
2021 ◽  
pp. 1-1
Author(s):  
Benjamin T. Shealy ◽  
Mehrdad Yousefi ◽  
Ashwin T. Srinath ◽  
Melissa C. Smith ◽  
Ulf D. Schiller

2019 ◽  
Vol 31 (20) ◽  
Author(s):  
Ruo-Fan Qiu ◽ 
Hai-Ning Wang ◽ 
Jian-Feng Zhu ◽ 
Rong-Qian Chen ◽ 
Cheng-Xiang Zhu ◽ 
...  

2010 ◽  
Vol 22 (1) ◽  
pp. 1-14
Author(s):  
Massimo Bernaschi ◽  
Massimiliano Fatica ◽  
Simone Melchionna ◽  
Sauro Succi ◽  
Efthimios Kaxiras

Author(s):  
E Calore ◽  
A Gabbana ◽  
SF Schifano ◽  
R Tripiccione

High-performance computing systems are increasingly based on accelerators. Computing applications targeting those systems often follow a host-driven approach, in which hosts offload almost all compute-intensive sections of the code onto accelerators; this approach only marginally exploits the computational resources available on the host CPUs, limiting overall performance. The obvious step forward is to run compute-intensive kernels in a concurrent and balanced way on both hosts and accelerators. In this paper, we consider exactly this problem for a class of applications based on lattice Boltzmann methods, widely used in computational fluid dynamics. Our goal is to develop just one program, portable and able to run efficiently on several different combinations of hosts and accelerators. To reach this goal, we define common data layouts enabling the code to exploit the different parallel and vector options of the various accelerators efficiently, and matching the possibly different requirements of the compute-bound and memory-bound kernels of the application. We also define models and metrics that predict the best partitioning of workloads between host and accelerator, and the optimally achievable overall performance level. We test the performance of our codes and their scaling properties using, as testbeds, HPC clusters incorporating different accelerators: Intel Xeon Phi many-core processors, NVIDIA GPUs, and AMD GPUs.
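
As an illustration of the kind of partitioning model described, the sketch below balances a static split of lattice sites between host and accelerator so that both finish a time step at the same moment. The throughput numbers are hypothetical and communication costs are idealized away; this is a sketch of the general idea, not the paper's actual model or metrics.

```python
# Simple static workload-partitioning model between host and accelerator
# (a sketch; assumes perfect overlap and neglects communication cost).

def host_fraction(p_host, p_acc):
    """Fraction of lattice sites to assign to the host so that host and
    accelerator finish each time step simultaneously.
    p_host, p_acc: sustained throughputs in lattice-site updates per second."""
    return p_host / (p_host + p_acc)

def combined_throughput(p_host, p_acc):
    """Best achievable throughput under the balanced split."""
    return p_host + p_acc

# Hypothetical example: a CPU host sustaining 0.8 GLUPS (giga lattice updates
# per second) alongside a GPU sustaining 3.2 GLUPS.
alpha = host_fraction(0.8e9, 3.2e9)
print(f"host share: {alpha:.0%}")                                   # 20%
print(f"combined : {combined_throughput(0.8e9, 3.2e9)/1e9:.1f} GLUPS")
```

Under these assumed throughputs, the host should receive 20% of the sites, and the balanced split yields the sum of the two throughputs as the upper bound on overall performance.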

