A particle-based model to investigate nanoparticle diffusion in a 3D biofilm environment

2020 ◽  
Author(s):  
Bart Coppens ◽  
Jiří Pešek ◽  
Bart Smeets ◽  
Herman Ramon

<p>Biofilms exhibit greatly increased antibiotic tolerance compared to planktonic bacteria, leading to chronic complications during infection. This increased tolerance originates from extracellular polymeric substances (EPS): by binding antibiotics, they limit access of active compounds to their target sites. Embedding antibiotics in polymer nanoparticles (NPs) is a promising strategy to counter this inactivation mechanism, as the antibiotic compounds are then protected from unwanted interaction with the biofilm matrix. However, diffusion, and consequently penetration, of NPs into the biofilm becomes the limiting factor. Chemical surface modifications would then make it possible to tune NP interaction with the biofilm and mediate deeper penetration. </p> <p>We present a particle-based model to investigate how structural differences in the biofilm affect NP diffusion; the model can later be used to evaluate the performance of various NP surface properties. We model the structure of the biofilm, the diffusion of NPs at low concentration, and their interaction with the biofilm. Spherocylindrical bacteria are seeded according to empirically derived structural parameters such as cell-cell distance and vertical and radial alignment. Interactions with the EPS matrix are represented as spherical zones of higher effective viscosity around the bacteria. We then use this setup to study how differences in biofilm organization and in matrix viscosity influence NP penetration depth. </p> <p>We show that steric interaction with the bacteria alone is insufficient to explain the slowdown in diffusion found in single particle tracking (SPT) experiments. Higher effective EPS viscosity leads to lower NP penetration, but the spatial spread of the EPS zones was found to reduce NP penetration even more. These results are consistent with the literature. </p> <p>The method we present here is suitable for evaluating the diffusion and entrapment of NPs at small concentrations in a heterogeneous biofilm environment, taking interactions with the EPS and the structure of the biofilm into account. The organization of the bacteria and the nature of the interaction with the EPS can be varied spatially, and NPs can actively change their environment. This setup can be used on large-scale biofilms, in contrast to computational fluid dynamics approaches, where the number of computational cells would outscale the number of particles in the simulation. The particle-based model additionally allows interactions between NPs, such as aggregation, to be modeled. The current coarse-graining method for interactions between EPS and NPs allows the scale to be increased at lower computational cost. This model provides a solid base for studying the fate of nanoparticles in highly heterogeneous biofilms, suggesting suitable NP surface properties, and increasing the success rate of nanomedicine development. </p>
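The core mechanics of such a model, Brownian motion of NPs with a higher effective viscosity inside spherical EPS zones around the bacteria, can be sketched as follows. All parameter values, the zone layout, and the Stokes-Einstein closure are illustrative assumptions, not the paper's calibrated setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (hypothetical values, not from the paper)
kT = 4.11e-21          # thermal energy at ~298 K [J]
radius = 50e-9         # NP radius [m]
eta_bulk = 1.0e-3      # water viscosity [Pa s]
eta_eps = 10.0e-3      # higher effective viscosity inside an EPS zone [Pa s]
dt = 1e-6              # time step [s]

# Spherical EPS zones around bacteria: random centers, fixed radius
eps_centers = rng.uniform(0.0, 5e-6, size=(20, 3))
eps_radius = 0.5e-6

def local_viscosity(pos):
    """Effective viscosity: elevated inside any EPS zone, bulk otherwise."""
    inside = np.linalg.norm(eps_centers - pos, axis=1) < eps_radius
    return eta_eps if inside.any() else eta_bulk

def step(pos):
    """One Brownian step with position-dependent diffusivity (Stokes-Einstein)."""
    D = kT / (6.0 * np.pi * local_viscosity(pos) * radius)
    return pos + np.sqrt(2.0 * D * dt) * rng.standard_normal(3)

pos = np.array([2.5e-6, 2.5e-6, 0.0])   # start at the biofilm surface
for _ in range(1000):
    pos = step(pos)
print("displacement along z:", pos[2])
```

Penetration depth would then be read off as the depth distribution of an ensemble of such walkers; zone placement driven by the seeded bacterial positions is what lets the model test the structural effects described above.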

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Daiji Ichishima ◽  
Yuya Matsumura

Large-scale computation by the molecular dynamics (MD) method is often challenging or even impractical due to its computational cost, in spite of its wide application in a variety of fields. Although recent advances in parallel computing and the introduction of coarse-graining methods have enabled large-scale calculations, macroscopic analyses are still not realizable. Here, we present renormalized molecular dynamics (RMD), a renormalization group of MD in thermal equilibrium derived using the Migdal–Kadanoff approximation. The RMD method improves computational efficiency drastically while retaining the advantages of MD. The computational efficiency is improved by a factor of $$2^{n(D+1)}$$ over conventional MD, where D is the spatial dimension and n is the number of applied renormalization transforms. We verify RMD by conducting two simulations: melting of an aluminum slab and collision of aluminum spheres. Both problems show that the expectation values of physical quantities are in good agreement after renormalization, whereas the computation time is reduced as expected. To observe the behavior of RMD near the critical point, the critical exponent of the Lennard-Jones potential is extracted by calculating the specific heat on the mesoscale; it is obtained as $$\nu = 0.63 \pm 0.01$$. In addition, the renormalization group of dissipative particle dynamics (DPD) is derived. Renormalized DPD is equivalent to RMD in isothermal systems under the condition that the Deborah number satisfies $$De \ll 1$$.
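The quoted efficiency gain is a simple closed form, so the scaling is easy to make concrete:

```python
def rmd_speedup(n, D):
    """Theoretical speedup of renormalized MD over conventional MD:
    a factor of 2**(n*(D+1)), where D is the spatial dimension and
    n is the number of renormalization transforms applied."""
    return 2 ** (n * (D + 1))

# Two renormalization transforms in three dimensions:
print(rmd_speedup(2, 3))  # -> 256
```

Each transform in 3D thus buys a factor of 16, which is why even a small n makes otherwise impractical system sizes reachable.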


Mathematics ◽  
2020 ◽  
Vol 8 (12) ◽  
pp. 2242
Author(s):  
William A. Ramírez ◽  
Alessio Gizzi ◽  
Kevin L. Sack ◽  
Simonetta Filippi ◽  
Julius M. Guccione ◽  
...  

Computational cardiology is rapidly becoming the gold standard for innovative medical treatments and device development. Despite a worldwide effort in mathematical and computational modeling research, the complexity and intrinsic multiscale nature of the heart still limit our predictive power, raising the question of the optimal modeling choice for large-scale whole-heart numerical investigations. We propose an extended numerical analysis of two different electrophysiological modeling approaches: a simplified phenomenological one and a detailed biophysical one. To achieve this, we considered three-dimensional healthy and infarcted swine heart geometries. Heterogeneous electrophysiological properties, fine-tuned DT-MRI-based anisotropy features, and non-conductive ischemic regions were included in a custom-built finite element code. We provide a quantitative comparison of the electrical behaviors during steady pacing and sustained ventricular fibrillation for healthy and diseased cases, analyzing cardiac arrhythmia dynamics. Action potential duration (APD) restitution distributions, vortex filament counting, and pseudo-electrocardiography (ECG) signals were numerically quantified, introducing a novel statistical description of restitution patterns and ventricular fibrillation sustainability. The computational cost and scalability associated with the two modeling choices suggest that ventricular fibrillation signatures are mainly controlled by anatomy and structural parameters, rather than by regional restitution properties. Finally, we discuss limitations and translational perspectives of the different modeling approaches in view of large-scale whole-heart in silico studies.
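As an illustration of the "simplified phenomenological" model class referred to here, the following is a minimal single-cell integration of the Aliev-Panfilov two-variable cardiac model. The parameter values are the commonly cited literature ones and are not claimed to match this study's setup:

```python
def aliev_panfilov(u0=0.2, steps=20000, dt=0.01):
    """Explicit-Euler integration of the Aliev-Panfilov model, a standard
    two-variable phenomenological description of the cardiac action
    potential (dimensionless units). u is the normalized transmembrane
    potential, v a recovery variable; parameters are the usual
    literature values: k=8, a=0.15, eps=0.002, mu1=0.2, mu2=0.3."""
    k, a, eps, mu1, mu2 = 8.0, 0.15, 0.002, 0.2, 0.3
    u, v = u0, 0.0
    trace = []
    for _ in range(steps):
        du = -k * u * (u - a) * (u - 1.0) - u * v
        dv = (eps + mu1 * v / (mu2 + u)) * (-v - k * u * (u - a - 1.0))
        u += dt * du
        v += dt * dv
        trace.append(u)
    return trace

trace = aliev_panfilov()   # suprathreshold stimulus -> one full action potential
print("peak normalized potential:", max(trace))
```

A biophysical alternative would replace these two ODEs with tens of gating variables per cell, which is the cost/fidelity trade-off the comparison above quantifies at whole-heart scale.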


Author(s):  
Cody Minks ◽  
Anke Richter

Objective: Responding to large-scale public health emergencies relies heavily on planning and collaboration between law enforcement and public health officials. This study examines the current level of information sharing and integration between these domains by measuring the inclusion of public health in the law enforcement functions of fusion centers. Methods: Survey of all fusion centers, with a 29.9% response rate. Results: Only one of the 23 responding fusion centers had true public health inclusion, a decrease from research conducted in 2007. Information sharing is primarily limited to information flowing out of the fusion center, with little public health information coming in. Most of the collaboration is done on a personal, informal, ad hoc basis. There remains a large misunderstanding of roles, capabilities, and regulations by all parties (fusion centers and public health). The majority of the parties appear willing to work together, but there is no forward momentum to make these desires a reality. Funding and staffing issues appear to be the limiting factor for integration. Conclusion: These problems need to be addressed urgently to increase public health preparedness and enable a decisive and beneficial response to public health emergencies involving a homeland security response.


Author(s):  
Mahdi Esmaily Moghadam ◽  
Yuri Bazilevs ◽  
Tain-Yen Hsia ◽  
Alison Marsden

A closed-loop lumped parameter network (LPN) coupled to a 3D domain is a powerful tool for modeling the global dynamics of the circulatory system. Coupling a 0D LPN to a 3D CFD domain is a numerically challenging problem, often associated with instabilities, extra computational cost, and loss of modularity. A computationally efficient finite element framework has recently been proposed that achieves numerical stability without sacrificing modularity [1]. This type of coupling introduces new challenges in the linear algebraic equation solver (LS), producing a strong coupling between flow and pressure that leads to an ill-conditioned tangent matrix. In this paper we exploit this strong coupling to obtain a novel and efficient LS algorithm. We illustrate the efficiency of this method on several large-scale cardiovascular blood flow simulation problems.
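A minimal sketch of the 0D side of such a coupling is a three-element Windkessel (RCR) lumped model receiving the interface flow computed by the 3D domain and returning a pressure. The parameter values and the explicit time stepping are illustrative assumptions, not the paper's framework:

```python
def windkessel_step(p, q, dt, Rp=0.05, Rd=1.0, C=1.5):
    """One explicit step of a 3-element Windkessel (RCR) lumped model:
    proximal resistance Rp, distal resistance Rd, compliance C
    (illustrative values in consistent nondimensional units).
    q is the flow delivered by the 3D domain at the coupling interface;
    p is the pressure stored on the capacitor.
    Returns (updated capacitor pressure, pressure seen by the 3D domain)."""
    dp = (q - p / Rd) / C            # capacitor charge balance
    p_new = p + dt * dp
    p_interface = p_new + Rp * q     # interface pressure fed back to the 3D solver
    return p_new, p_interface

# Constant inflow drives the model to its steady state p = Rd*q:
p = 0.0
for _ in range(5000):
    p, p_interface = windkessel_step(p, q=1.0, dt=0.01)
print(p, p_interface)   # converges toward Rd*q and Rd*q + Rp*q
```

In a monolithic 0D-3D scheme, this pressure-flow exchange happens inside each nonlinear iteration, and it is exactly this tight flow-pressure dependence that ill-conditions the tangent matrix the abstract refers to.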


2006 ◽  
Vol 18 (12) ◽  
pp. 2959-2993 ◽  
Author(s):  
Eduardo Ros ◽  
Richard Carrillo ◽  
Eva M. Ortigosa ◽  
Boris Barbour ◽  
Rodrigo Agís

Nearly all neuronal information processing and interneuronal communication in the brain involves action potentials, or spikes, which drive the short-term synaptic dynamics of neurons, but also their long-term dynamics, via synaptic plasticity. In many brain structures, action potential activity is considered to be sparse. This sparseness of activity has been exploited to reduce the computational cost of large-scale network simulations, through the development of event-driven simulation schemes. However, existing event-driven simulation schemes use extremely simplified neuronal models. Here, we implement and critically evaluate an event-driven algorithm (ED-LUT) that uses precalculated look-up tables to characterize synaptic and neuronal dynamics. This approach enables the use of more complex (and realistic) neuronal models or data in representing the neurons, while retaining the advantage of high-speed simulation. We demonstrate the method's application for neurons containing exponential synaptic conductances, thereby implementing shunting inhibition, a phenomenon that is critical to cellular computation. We also introduce an improved two-stage event-queue algorithm, which allows the simulations to scale efficiently to highly connected networks with arbitrary propagation delays. Finally, the scheme readily accommodates implementation of synaptic plasticity mechanisms that depend on spike timing, enabling future simulations to explore issues of long-term learning and adaptation in large-scale networks.
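The essence of the event-driven approach, advancing a neuron's state only when a spike event arrives by applying the closed-form decay between events (the kind of quantity ED-LUT precalculates into look-up tables, here computed directly), can be sketched for a single leaky integrate-and-fire neuron. All parameters are illustrative:

```python
import heapq
import math

# Illustrative parameters: membrane time constant, firing threshold,
# synaptic weight, and axonal propagation delay (arbitrary units).
TAU, THRESHOLD, WEIGHT, DELAY = 20.0, 1.0, 0.3, 1.0

def simulate(input_spikes):
    """Event-driven simulation of one leaky integrate-and-fire neuron.
    The membrane potential is updated only at spike-delivery events,
    using the exact exponential decay since the previous event, rather
    than by fixed time stepping. Returns the output spike times."""
    queue = [(t + DELAY,) for t in input_spikes]   # delivery-time event queue
    heapq.heapify(queue)
    v, t_last, out = 0.0, 0.0, []
    while queue:
        (t,) = heapq.heappop(queue)
        v *= math.exp(-(t - t_last) / TAU)   # closed-form decay between events
        v += WEIGHT                          # instantaneous synaptic jump
        t_last = t
        if v >= THRESHOLD:
            out.append(t)
            v = 0.0                          # reset after firing
    return out

print(simulate([0.0, 1.0, 2.0, 3.0]))  # -> [4.0]
```

ED-LUT generalizes this idea to conductance-based dynamics with no closed form by tabulating the state update, and its two-stage queue handles the many heterogeneous delivery delays of a large network efficiently.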


Author(s):  
David Forbes ◽  
Gary Page ◽  
Martin Passmore ◽  
Adrian Gaylard

This study is an evaluation of computational methods in reproducing experimental data for a generic sports utility vehicle (SUV) geometry and an assessment of the influence of fixed and rotating wheels on this geometry. Initially, comparisons of the wake structure and base pressures are made between several CFD codes and experimental data. It was shown that steady-state RANS methods are unsuitable for this geometry due to large-scale unsteadiness in the wake caused by separation at the sharp trailing edge and rear-wheel wake interactions. Unsteady RANS (URANS) offered no improvement in wake prediction despite a significant increase in computational cost. The detached-eddy simulation (DES) and Lattice-Boltzmann (LBM) methods showed the best agreement with the experimental results in both wake structure and base pressure, with LBM running in approximately a fifth of the time required for DES. The study then continues by analysing the influence of rotating wheels and a moving ground plane relative to a fixed-wheel, fixed-ground arrangement. The introduction of wheel rotation and a moving ground was shown to increase the base pressure and reduce the drag acting on the vehicle compared to the fixed case. However, when compared to the experimental standoff case, variations in drag and lift coefficients were minimal but misleading, as significant variations in the surface pressures were present.


2013 ◽  
Vol 2013 ◽  
pp. 1-10
Author(s):  
Lei Luo ◽  
Chao Zhang ◽  
Yongrui Qin ◽  
Chunyuan Zhang

With the explosive growth of data volume in modern applications such as web search and multimedia retrieval, hashing is becoming increasingly important for efficient nearest neighbor (similar item) search. Recently, a number of data-dependent methods have been developed, reflecting the great potential of learning for hashing. Inspired by the classic nonlinear dimensionality reduction algorithm, maximum variance unfolding, we propose a novel unsupervised hashing method, named maximum variance hashing, in this work. The idea is to maximize the total variance of the hash codes while preserving the local structure of the training data. To solve the derived optimization problem, we propose a column generation algorithm, which directly learns the binary-valued hash functions. We then extend it using anchor graphs to reduce the computational cost. Experiments on large-scale image datasets demonstrate that the proposed method outperforms state-of-the-art hashing methods in many cases.
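The variance-maximization half of the objective can be illustrated with the classic PCA-hashing baseline: project onto the directions of maximum variance and take signs. The paper's method additionally preserves local structure and learns the hash functions by column generation, which this sketch does not reproduce:

```python
import numpy as np

def pca_hash(X, n_bits=8):
    """Variance-maximizing binary codes via PCA: center the data, project
    onto the top n_bits principal directions (directions of maximum
    variance), and binarize by sign. A standard baseline, not the
    column-generation learner proposed in the paper."""
    Xc = X - X.mean(axis=0)
    # Right singular vectors of the centered data = principal directions
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return (Xc @ Vt[:n_bits].T > 0).astype(np.uint8)

def hamming(a, b):
    """Hamming distance between two binary codes: the search metric."""
    return int(np.count_nonzero(a != b))

# Toy usage: hash 100 random 16-d points into 4-bit codes
rng = np.random.default_rng(1)
codes = pca_hash(rng.standard_normal((100, 16)), n_bits=4)
print(codes.shape, hamming(codes[0], codes[1]))
```

High-variance projections spread points across the code space, so each bit carries close to maximal information; what PCA hashing lacks, and the proposed method adds, is an explicit term keeping locally neighboring points on the same side of the bit boundaries.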


2015 ◽  
Vol 102 ◽  
pp. 1484-1490 ◽  
Author(s):  
Daniel Schiochet Nasato ◽  
Christoph Goniva ◽  
Stefan Pirker ◽  
Christoph Kloss

Author(s):  
Vinay Sriram ◽  
David Kearney

High-speed infrared (IR) scene simulation is used extensively in defense and homeland security to test the sensitivity of IR cameras and the accuracy of IR threat detection and tracking algorithms commonly used in IR missile approach warning systems (MAWS). A typical MAWS requires an input scene rate of over 100 scenes/second, yet IR scene simulations typically take 32 minutes to simulate a single IR scene accounting for the effects of atmospheric turbulence, refraction, optical blurring, and charge-coupled device (CCD) camera electronic noise on a Pentium 4 (2.8 GHz) dual-core processor [7]. Thus, in IR scene simulation, the processing power of modern computers is a limiting factor. In this paper we report our research on accelerating IR scene simulation using high-performance reconfigurable computing. We constructed a multi-Field Programmable Gate Array (FPGA) hardware acceleration platform and accelerated a key computationally intensive IR algorithm on it, reducing the computation time of IR scene simulation by over 36%. This research serves as a unique case study in accelerating large-scale defense simulations using a high-performance multi-FPGA reconfigurable computer.

