Simulations of Complex and Microscopic Models of Cardiac Electrophysiology Powered by Multi-GPU Platforms

2012 · Vol 2012 · pp. 1-13
Author(s):  
Bruno Gouvêa de Barros ◽  
Rafael Sachetto Oliveira ◽  
Wagner Meira ◽  
Marcelo Lobosco ◽  
Rodrigo Weber dos Santos

Key aspects of cardiac electrophysiology, such as slow conduction, conduction block, and saltatory effects, have been the topic of many studies since they are strongly related to cardiac arrhythmia, reentry, fibrillation, and defibrillation. To reproduce these phenomena, however, numerical models need subcellular discretization for the solution of the PDEs and nonuniform, heterogeneous tissue electric conductivity. Due to the high computational cost of simulations that reproduce the fine microstructure of cardiac tissue, previous studies have considered tissue samples of small or moderate size and used simple cardiac cell models. In this paper, we develop a cardiac electrophysiology model that captures the microstructure of cardiac tissue by using a very fine spatial discretization (8 μm) together with a modern, complex cell model based on Markov chains for the characterization of ion channel structure and dynamics. To cope with the computational challenges, the model was parallelized using a hybrid approach: cluster computing and GPGPUs (general-purpose computing on graphics processing units). Our parallel implementation on a multi-GPU platform reduced the execution time of the simulations from more than 6 days (on a single processor) to 21 minutes (on a small 8-node cluster equipped with 16 GPUs, i.e., 2 GPUs per node).
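The core PDE step behind such simulations can be illustrated with a short sketch. This is a generic explicit finite-difference monodomain update with a cubic FitzHugh-style ionic term standing in for the paper's Markov-chain cell model; the grid size, time step, and all parameters below are illustrative, not the authors' values:

```python
import numpy as np

# Minimal 2D monodomain step: dV/dt = D * laplacian(V) - I_ion(V).
# Hypothetical parameters; the paper uses an 8 um mesh and a
# Markov-chain cell model, replaced here by simple cubic kinetics.
def ionic_current(v):
    return v * (v - 0.15) * (v - 1.0)  # cubic excitable kinetics

def monodomain_step(v, dt=0.01, dx=1.0, diff=0.1):
    # Five-point Laplacian with periodic boundaries
    lap = (np.roll(v, 1, 0) + np.roll(v, -1, 0) +
           np.roll(v, 1, 1) + np.roll(v, -1, 1) - 4 * v) / dx**2
    return v + dt * (diff * lap - ionic_current(v))

v = np.zeros((64, 64))
v[28:36, 28:36] = 1.0            # stimulate a small patch
for _ in range(200):
    v = monodomain_step(v)
```

The real solver replaces the cubic term with the full Markov-chain cell model evaluated per node, which is what makes the GPU parallelization worthwhile.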

Computation · 2020 · Vol 8 (2) · pp. 50
Author(s):  
Stephan Lenz ◽  
Martin Geier ◽  
Manfred Krafczyk

The simulation of fire is a challenging task due to its occurrence on multiple space-time scales and the non-linear interaction of multiple physical processes. Current state-of-the-art software such as the Fire Dynamics Simulator (FDS) implements most of the required physics, yet a significant drawback of this implementation is its limited scalability on modern massively parallel hardware. The current paper presents a massively parallel implementation of a Gas Kinetic Scheme (GKS) on General Purpose Graphics Processing Units (GPGPUs) as a potential alternative modeling and simulation approach. The implementation is validated for turbulent natural convection against experimental data. Subsequently, it is validated for two simulations of fire plumes, including a small-scale tabletop setup and a fire on the scale of a few meters. We show that the present GKS achieves accuracy comparable to the results obtained by FDS. Yet, due to its parallel efficiency on dedicated hardware, our GKS implementation delivers a reduction of wall-clock times of more than an order of magnitude. This paper demonstrates the potential of explicit local schemes in massively parallel environments for the simulation of fire.


2013 · Vol 13 (3) · pp. 867-879
Author(s):  
Stuart D. C. Walsh ◽  
Martin O. Saar

Abstract Lattice-Boltzmann methods are versatile numerical modeling techniques capable of reproducing a wide variety of fluid-mechanical behavior. These methods are well suited to parallel implementation, particularly on the single-instruction multiple-data (SIMD) parallel processing environments found in computer graphics processing units (GPUs). Although recent programming tools dramatically improve the ease with which GPU-based applications can be written, the programming environment still lacks the flexibility available to more traditional CPU programs. In particular, it may be difficult to develop modular and extensible programs that require variable on-device functionality with current GPU architectures. This paper describes a process of automatic code generation that overcomes these difficulties for lattice-Boltzmann simulations. It details the development of GPU-based modules for an extensible lattice-Boltzmann simulation package, LBHydra. The performance of the automatically generated code is compared to that of equivalent purpose-written codes for single-phase, multiphase, and multicomponent flows. The flexibility of the new method is demonstrated by simulating a rising, dissolving droplet moving through a porous medium with user-generated lattice-Boltzmann models and subroutines.
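As context for the kind of kernel such a package generates, a minimal stream-and-collide step for a D2Q9 BGK lattice-Boltzmann model might look as follows. This is a generic CPU sketch in NumPy, not LBHydra's generated GPU code; the grid size, relaxation time, and initial condition are illustrative:

```python
import numpy as np

# D2Q9 lattice: 9 velocities and their standard weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, ux, uy):
    # Second-order Maxwellian expansion per velocity direction
    cu = 3 * (c[:, 0, None, None] * ux + c[:, 1, None, None] * uy)
    usq = 1.5 * (ux**2 + uy**2)
    return rho * w[:, None, None] * (1 + cu + 0.5 * cu**2 - usq)

def lbm_step(f, tau=0.6):
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f += (equilibrium(rho, ux, uy) - f) / tau          # BGK collision
    for i, (cx, cy) in enumerate(c):                   # streaming (periodic)
        f[i] = np.roll(np.roll(f[i], cx, axis=0), cy, axis=1)
    return f

f = equilibrium(np.ones((32, 32)), np.zeros((32, 32)), np.zeros((32, 32)))
f[:, 16, 16] *= 1.05                                   # small density bump
for _ in range(50):
    f = lbm_step(f)
```

The appeal for code generation is that collision and streaming are purely local or nearest-neighbor operations, so variants (multiphase, multicomponent) differ mainly in the collision body.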


Author(s):  
K. K. Abgarian ◽  
I. S. Kolbin

Abstract. The article discusses the calculation of the temperature regime in nanoscale AlAs/GaAs binary heterostructures. When modeling heat transfer in nanocomposites, it is important to take into account that in multilayer structures with layer sizes on the order of the mean free path of the energy carriers (phonons and electrons), heat dissipation occurs not within the lattice but at the layer boundaries (interfaces). The use of classical numerical models based on the Fourier law is therefore limited, because it introduces significant errors. To obtain more accurate results, we used a model in which the heat distribution was assumed to be constant inside each layer, while the temperature changed stepwise at the layer interfaces. A hybrid approach was used for the calculation: a finite-difference method with an implicit scheme for the time approximation and a mesh-free model based on a set of radial basis functions for the spatial approximation. The basis parameters were calculated by solving systems of linear algebraic equations; only the weights of the neural elements were fitted, while the centers and "widths" were fixed. A set of frequently used basis functions was considered as the approximator. To increase the speed of the calculations, the algorithm was parallelized, and calculation times were measured to estimate the performance gains of the parallel implementation of the method.
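The weight-fitting step described above — solving a linear system with the centers and widths held fixed — can be sketched in one dimension as follows. The Gaussian basis, centers, width, and target function here are illustrative choices, not the paper's:

```python
import numpy as np

# Fixed-center, fixed-width Gaussian RBF fit: only the weights are
# free parameters, so fitting reduces to one linear solve.
def rbf_matrix(x, centers, width):
    return np.exp(-((x[:, None] - centers[None, :]) / width) ** 2)

x = np.linspace(0.0, 1.0, 25)              # collocation points
centers = np.linspace(0.0, 1.0, 10)        # fixed centers
width = 0.15                               # fixed "width"
target = np.sin(2 * np.pi * x)             # field to approximate

A = rbf_matrix(x, centers, width)
# Least-squares solve for the weights (centers/widths stay fixed)
weights, *_ = np.linalg.lstsq(A, target, rcond=None)

approx = A @ weights
```

Because the system matrix depends only on the fixed geometry, it can be factorized once and reused at every implicit time step, which is what makes the hybrid scheme cheap.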


Author(s):  
S. M. Ord ◽  
B. Crosse ◽  
D. Emrich ◽  
D. Pallot ◽  
R. B. Wayth ◽  
...  

Abstract The Murchison Widefield Array (MWA) is a Square Kilometre Array Precursor. The telescope is located at the Murchison Radio-astronomy Observatory (MRO) in Western Australia. The MWA consists of 4096 dipoles arranged into 128 dual-polarisation aperture arrays forming a connected-element interferometer that cross-correlates signals from all 256 inputs. A hybrid approach to the correlation task is employed, with some processing stages performed by bespoke hardware based on Field Programmable Gate Arrays, and others by Graphics Processing Units housed in general-purpose rack-mounted servers. The correlation capability required is approximately 8 tera floating-point operations per second. The MWA has commenced operations and the correlator is generating 8.3 TB per day of correlation products, which are subsequently transferred 700 km from the MRO to Perth (WA) in real time for storage and offline processing. In this paper, we outline the correlator design, signal path, and processing elements, and present the data format for the internal and external interfaces.
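The correlation task itself follows the standard FX pattern: channelize each input with a Fourier transform, then cross-multiply every input pair. A toy NumPy sketch, where the input, sample, and channel counts are illustrative and both stages run on commodity hardware rather than the MWA's FPGA/GPU split:

```python
import numpy as np

# Toy FX correlator: the "F" stage channelizes each input with an FFT;
# the "X" stage forms time-averaged cross-power products per channel.
rng = np.random.default_rng(0)
n_inputs, n_samples, n_chan = 4, 4096, 64

signals = rng.normal(size=(n_inputs, n_samples))
blocks = signals.reshape(n_inputs, -1, n_chan)   # (input, time block, sample)
spectra = np.fft.rfft(blocks, axis=-1)           # F stage

# X stage: visibility matrix vis[i, j, f] = <S_i(f) S_j(f)*>_t
vis = np.einsum('itf,jtf->ijf', spectra, spectra.conj()) / blocks.shape[1]
```

The X stage is an outer product per frequency channel, which maps naturally onto GPU matrix hardware; the F stage is a streaming FFT, well suited to FPGAs.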


Author(s):  
Jacobo Córdova Aquino ◽  
Hugo I. Medellín-Castillo

Abstract The development of mathematical and numerical models of the human heart has become highly relevant in the scientific community because of the difficulty of measuring the properties and performance of cardiac tissue in vivo and of carrying out experimental tests under different healthy and pathological conditions. Several heart models have been proposed in the literature, but their results still differ from each other. In this paper, a new passive heart model to estimate the elastic behaviour of the left-ventricular (LV) cardiac fibres is presented. The model is based on a hybrid approach that combines a theoretical method to determine the equivalent material properties of each layer of the LV tissue, which represents an advantage with respect to other more elaborate or complex models, and an inverse finite element method (FEM) to determine the volume of the LV internal cavity under loading conditions. The proposed model uses LV pressure and volume measurements along a real cardiac cycle as the loading and target conditions, respectively. The results are analysed in terms of the elastic properties of the cardiac fibres and compared with results obtained from other, more complex models reported in the literature. From this analysis it is observed that the new proposed model is reliable and able to estimate the elastic behaviour of the cardiac tissue.


2011 · Vol 28 (1) · pp. 1-14
Author(s):  
W. van Straten ◽  
M. Bailes

Abstract dspsr is a high-performance, open-source, object-oriented digital signal processing software library and application suite for use in radio pulsar astronomy. Written primarily in C++, the library implements an extensive range of modular algorithms that can optionally exploit both multi-core processors and general-purpose graphics processing units. After over a decade of research and development, dspsr is now stable and in widespread use in the community. This paper presents a detailed description of its functionality, justification of major design decisions, analysis of phase-coherent dispersion removal algorithms, and demonstration of performance on some contemporary microprocessor architectures.


2015 · Vol 1 (1) · pp. 413-417
Author(s):  
Eike M. Wülfers ◽  
Zhasur Zhamoliddinov ◽  
Olaf Dössel ◽  
Gunnar Seemann

Abstract Using OpenCL, we developed cross-platform software to compute electrical excitation conduction in cardiac tissue. OpenCL allows the software to run in parallel and on different computing devices (e.g., CPUs and GPUs). We used the macroscopic monodomain model for excitation conduction and the atrial myocyte model by Courtemanche et al. for ionic currents. On a CPU with 12 HyperThreading-enabled Intel Xeon 2.7 GHz cores, we achieved a simulation speed-up by a factor of 1.6 over existing software that uses OpenMPI. On two high-end AMD FirePro D700 GPUs, the OpenCL software ran 2.4 times faster than the OpenMPI implementation. The more nodes the discretized simulation domain contained, the higher the speed-ups achieved.


2021 · pp. 101375
Author(s):  
Elnaz Pouranbarani ◽  
Lucas Arantes Berg ◽  
Rafael Sachetto Oliveira ◽  
Rodrigo Weber dos Santos ◽  
Anders Nygren

2015 · Vol 17 (5) · pp. 1246-1270
Author(s):  
C. F. Janßen ◽  
N. Koliha ◽  
T. Rung

Abstract This paper presents a fast surface voxelization technique for mapping tessellated triangular surface meshes to the uniform, structured grids that provide a basis for CFD simulations with the lattice Boltzmann method (LBM). The core algorithm is optimized for massively parallel execution on graphics processing units (GPUs) and is based on a unique dissection of the inner body shell. This unique definition necessitates a topology-based neighbor search as a preprocessing step, but also enables parallel implementation. More specifically, normal vectors of adjacent triangular tessellations are used to construct half-angles that clearly separate the per-triangle regions. For each triangle, the grid nodes inside its axis-aligned bounding box (AABB) are tested for their distance to the triangle in question and for certain well-defined relative angles. The performance of the presented grid-generation procedure is superior to that of the GPU-accelerated flow-field computations per time step, which allows efficient fluid-structure interaction simulations without noticeable performance loss due to the dynamic grid update.
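The per-triangle test described above can be sketched as follows. This simplified version flags grid nodes inside a triangle's AABB that lie within half a cell of the triangle's plane; the half-angle and in-plane containment tests that make the paper's assignment unique are omitted, and the triangle and spacing below are illustrative:

```python
import numpy as np

# Simplified per-triangle voxelization: AABB candidate nodes filtered
# by distance to the triangle's plane (angle/containment tests omitted).
def voxelize_triangle(tri, spacing, flags):
    lo = np.floor(tri.min(axis=0) / spacing).astype(int)
    hi = np.ceil(tri.max(axis=0) / spacing).astype(int)
    n = np.cross(tri[1] - tri[0], tri[2] - tri[0])
    n = n / np.linalg.norm(n)                     # unit normal
    for idx in np.ndindex(*(hi - lo + 1)):
        node = np.array(idx) + lo
        p = node * spacing
        if abs(np.dot(p - tri[0], n)) <= 0.5 * spacing:
            flags.add(tuple(node))

flags = set()
tri = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
voxelize_triangle(tri, 0.25, flags)
```

Each triangle's loop is independent of every other triangle's, which is why the full algorithm parallelizes so well on GPUs once the half-angle data from the neighbor search makes the node-to-triangle assignment unambiguous.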

