PRM52 Minimum Run-Time Requirements to Reduce Monte Carlo Error in Stochastic Simulations

2012 ◽  
Vol 15 (7) ◽  
pp. A469
Author(s):  
V. Foos ◽  
P. McEwan ◽  
A. Lloyd ◽  
J.L. Palmer ◽  
M. Lamotte ◽  
...  
2019 ◽  
Vol 622 ◽  
pp. A79 ◽  
Author(s):  
Mika Juvela

Context. Thermal dust emission carries information on physical conditions and dust properties in many astronomical sources. Because observations represent a sum of emission along the line of sight, their interpretation often requires radiative transfer (RT) modelling. Aims. We describe a new RT program, SOC, for computations of dust emission, and examine its performance in simulations of interstellar clouds with external and internal heating. Methods. SOC implements the Monte Carlo RT method as a parallel program for shared-memory computers. It can be used to study dust extinction, scattering, and emission. We tested SOC with realistic cloud models and examined the convergence and noise of the dust-temperature estimates and of the resulting surface-brightness maps. Results. SOC has been demonstrated to produce accurate estimates for dust scattering and for thermal dust emission. It performs well with both CPUs and GPUs, the latter providing a speed-up of up to an order of magnitude. In the test cases, accelerated lambda iterations (ALIs) improved the convergence rates but were also sensitive to Monte Carlo noise. Run-time refinement of the hierarchical-grid models did not reduce the run times required for a given solution accuracy. The use of a reference field, without ALI, works more robustly and also allows the run time to be optimised, by increasing the number of photon packages only as the iterations progress. Conclusions. The use of GPUs in RT computations should be investigated further.
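
The reference-field strategy with a growing photon-package schedule can be illustrated with a toy model. The sketch below is a minimal illustration in plain NumPy: the "field", the noise model, and the packet schedule are invented stand-ins, not SOC's implementation. It shows how accumulating successive Monte Carlo sweeps into a weighted running reference field drives the noise down while most photon packages are spent in the later iterations.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cells = 1000
true_field = np.linspace(1.0, 2.0, n_cells)  # stand-in for the converged radiation field

def mc_estimate(n_packets):
    """One Monte Carlo sweep; the noise scales as 1/sqrt(n_packets)."""
    return true_field + rng.normal(0.0, 1.0, n_cells) / np.sqrt(n_packets)

# Accumulate sweeps into a running reference field, weighting each sweep by
# its packet count, while the packet budget grows with the iteration number.
reference = np.zeros(n_cells)
total_weight = 0.0
for iteration, n_packets in enumerate([1_000, 2_000, 4_000, 8_000], start=1):
    estimate = mc_estimate(n_packets)
    total_weight += n_packets
    reference += (estimate - reference) * (n_packets / total_weight)
    rms = np.sqrt(np.mean((reference - true_field) ** 2))
    print(f"iteration {iteration}: packets={n_packets}, RMS noise={rms:.5f}")
```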


2019 ◽  
Author(s):  
Thaina Miranda da Costa ◽  
Gabriel Trova Cuba ◽  
Priscylla Guimarães Migueres Morgado ◽  
David P. Nicolau ◽  
Simone Aranha Nouér ◽  
...  

Abstract

Background. Staphylococcus aureus is one of the major causes of bloodstream infections (BSI) worldwide, representing a major challenge for public health due to its resistance profile. Higher vancomycin minimum inhibitory concentrations (MICs) in S. aureus are associated with treatment failure, and defining optimal empiric options for BSIs in settings where such isolates are prevalent is challenging. In silico pharmacodynamic models based on stochastic (Monte Carlo) simulations are important tools for estimating the best antimicrobial regimens in different scenarios. We aimed to compare the pharmacodynamic profiles of different antimicrobial regimens for the treatment of S. aureus BSI in an environment with high vancomycin MICs.

Methods. The steady-state ratio of the drug area under the curve to the MIC (AUC/MIC) or the percentage of time above the MIC (fT>MIC) was modelled using a 5000-patient Monte Carlo simulation to estimate pharmacodynamic exposures against 110 consecutive S. aureus isolates associated with BSI.

Results. Cumulative fractions of response (CFRs) against all S. aureus isolates were 98% for ceftaroline; 79% and 92% for daptomycin 6 mg/kg q24h and the high dose of 10 mg/kg q24h, respectively; 77% for linezolid 600 mg every 12 h when the MIC was read according to CLSI M100-S26 instructions, and 64% when the MIC was read at total growth inhibition; 65% and 86% for teicoplanin with three loading doses of 400 mg every 12 h followed by 400 mg every 24 h, and for teicoplanin 400 mg every 12 h, respectively; and 61% and 76% for vancomycin 1000 mg every 12 h and every 8 h, respectively.

Conclusions. Based on this model, ceftaroline and high-dose daptomycin regimens delivered the best pharmacodynamic exposures against S. aureus BSIs. The higher-dose teicoplanin regimen achieved the best CFR (86%) among the glycopeptides, although the optimal threshold was not reached, and vancomycin performance is critically affected by S. aureus vancomycin MICs ≥ 2 mg/L. Linezolid effectiveness (CFR of 73%) is also affected by the high prevalence of isolates with higher MICs. These data show the need to continually evaluate the pharmacodynamic profiles of antimicrobials used for the empiric treatment of these infections.
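
The CFR calculation combines the probability of target attainment (PTA) at each MIC with the MIC distribution of the isolates. A minimal sketch of this standard approach follows; the AUC distribution, MIC fractions, and pharmacodynamic target below are hypothetical placeholders for illustration, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(42)
n_patients = 5000

# Hypothetical MIC distribution (mg/L) for the isolates; the real
# distribution would come from the study's susceptibility data.
mic_values    = np.array([0.5, 1.0, 2.0, 4.0])
mic_fractions = np.array([0.20, 0.45, 0.30, 0.05])

# Hypothetical log-normal steady-state AUC(0-24) exposures (mg*h/L)
# for a vancomycin regimen across the simulated population.
auc = rng.lognormal(mean=np.log(400.0), sigma=0.35, size=n_patients)

target = 400.0  # commonly cited AUC/MIC target for vancomycin vs. S. aureus

# PTA at each MIC, then the CFR as the PTA weighted by the MIC distribution.
pta = np.array([(auc / mic >= target).mean() for mic in mic_values])
cfr = float(np.sum(pta * mic_fractions))
print("PTA per MIC:", dict(zip(mic_values, np.round(pta, 3))))
print(f"CFR = {cfr:.1%}")
```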


2019 ◽  
Vol 11 (24) ◽  
pp. 7098 ◽  
Author(s):  
Jiri Horak ◽  
Jan Tesla ◽  
David Fojtik ◽  
Vit Vozenilek

Activity-based micro-scale simulation models for transport modelling provide better evaluations of public transport accessibility, enabling researchers to overcome the shortage of reliable real-world data. Current simulation systems rely on simplified personal behaviour and zonal patterns, do not optimise public transport trips (choosing only the fastest option), and do not work with real destinations and their characteristics. The new TRAMsim system uses a Monte Carlo approach that evaluates all possible public transport and walking origin–destination (O–D) trips to the k nearest stops within a given time interval and selects appropriate variants according to the expected scenarios and parameters derived from local surveys. For the city of Ostrava, Czechia, two commuting models were compared based on simulated movements to reach (a) randomly selected large employers and (b) proportionally selected employers using an appropriate distance–decay impedance function derived from various combinations of conditions. The validation of these models confirms the relevance of the proportional gravity-based model. A multidimensional evaluation of the potential accessibility of employers reveals issues in several localities, including a high number of transfers, high total commuting time, a low variety of accessible employers, and high pedestrian-mode usage. Transport accessibility evaluation based on synthetic trips offers an improved understanding of local situations and helps to assess the impact of planned changes.
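
Proportional, gravity-based destination choice of this kind is typically implemented by weighting each destination's attraction by a distance-decay impedance. The sketch below is a generic illustration with invented employer sizes, distances, and decay parameter, not TRAMsim's code.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical employers: size (number of jobs) and network distance (km)
# from one origin zone; real inputs would come from the O-D travel matrix.
sizes     = np.array([5000.0, 1200.0, 800.0, 300.0, 150.0])
distances = np.array([2.0, 5.0, 8.0, 12.0, 20.0])

beta = 0.15  # decay parameter, which would be calibrated from local surveys

def impedance(d, beta):
    """Negative-exponential distance-decay function f(d) = exp(-beta * d)."""
    return np.exp(-beta * d)

# Gravity-style choice probabilities: attraction weighted by impedance.
weights = sizes * impedance(distances, beta)
probs = weights / weights.sum()

# Draw synthetic commuting destinations for 10 simulated workers.
trips = rng.choice(len(sizes), size=10, p=probs)
print("choice probabilities:", np.round(probs, 3))
print("simulated destination employers:", trips)
```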


1994 ◽  
Vol 72 (2) ◽  
pp. 463-470 ◽  
Author(s):  
H. Beierbeck ◽  
L. T. J. Delbaere ◽  
M. Vandonselaar ◽  
R. U. Lemieux

Monte Carlo simulations of the hydration of the combining sites of the divalent lectin IV of Griffonia simplicifolia were carried out using the X-ray structure of the native lectin at 2.15 Å resolution. The regions of the combining sites are identical shallow polyamphiphilic cavities with a surface area of approximately 240 Å² and an average depth of only about 2.2 Å. To reduce the CPU-time requirements for Monte Carlo simulations of the hydration of the combining site of the native lectin, a fragment of the protein structure was examined that contained only 62 of the 243 amino acid residues and was present in both subunits of the protein. This portion of the lectin, which encompassed the combining site and its immediate surroundings, was examined employing 250 water molecules to near-symmetrically cover an area of about 370 Å² over and around the combining site, with a density of 1, at 300 K. As was previously found in similar studies of the hydration of the Lewis b tetrasaccharide, the nonpolar regions are much less densely hydrated than the adjacent polar regions. This situation is considered to arise from the hydrogen-bonding requirement for water molecules to bridge over nonpolar regions of varying dimensions. It is expected, therefore, that the association of complementary hydrophilic surfaces in aqueous solution must involve, in addition to the establishment of the usual intermolecular forces of attraction, a collapse of water structure over "flickering cavities" for return to bulk. This collapse can be expected to contribute to the driving force for association both through a decrease in enthalpy (higher density) and through an increase in entropy (greater disorder). This property of hydrated polyamphiphilic surfaces may contribute importantly to the driving force of all associations in aqueous solution, since virtually all organic molecules are polyamphiphilic in character.
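
Hydration simulations of this type rest on the Metropolis Monte Carlo acceptance rule. The sketch below is a generic Metropolis step with a toy harmonic energy function standing in for the real water-protein potential; it is not the authors' force field, and all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
kB_T = 0.596  # kB*T in kcal/mol at 300 K

def metropolis_step(positions, energy_fn, max_disp=0.15):
    """Displace one random water (Å); accept with probability min(1, exp(-dE/kT))."""
    i = rng.integers(len(positions))
    trial = positions.copy()
    trial[i] += rng.uniform(-max_disp, max_disp, size=3)
    dE = energy_fn(trial) - energy_fn(positions)
    if dE <= 0 or rng.random() < np.exp(-dE / kB_T):
        return trial, True
    return positions, False

# Toy energy: harmonic wells pulling waters toward hypothetical polar sites.
sites = rng.uniform(0.0, 10.0, size=(5, 3))
def energy(pos):
    d = np.linalg.norm(pos[:, None, :] - sites[None, :, :], axis=-1)
    return float(np.sum(d.min(axis=1) ** 2))

waters = rng.uniform(0.0, 10.0, size=(250, 3))  # 250 waters, as in the study
accepted = 0
for _ in range(2000):
    waters, ok = metropolis_step(waters, energy)
    accepted += ok
print(f"acceptance ratio: {accepted / 2000:.2f}")
```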


2016 ◽  
Vol 58 (6) ◽  
Author(s):  
Stefan Wildermann ◽  
Michael Bader ◽  
Lars Bauer ◽  
Marvin Damschen ◽  
Dirk Gabriel ◽  
...  

Abstract Multi-Processor Systems-on-a-Chip (MPSoCs) provide sufficient computing power for many scientific as well as embedded applications. Unfortunately, when real-time requirements must be guaranteed, applications suffer from interference with other applications and from uncertainty in the dynamic workload and the state of the hardware. Composable application/architecture design and timing analysis are therefore a must for guaranteeing that real-time applications satisfy their timing requirements independently of the dynamic workload. Here, Invasive Computing is used as the key enabler for compositional timing analysis on MPSoCs, as it provides the required isolation of the resources allocated to each application. On the basis of this paradigm, this work proposes a hybrid application-mapping methodology that combines design-time analysis of application mappings with run-time management. Design-space exploration delivers several resource-reservation configurations with verified real-time guarantees for individual applications. These timing properties can then be guaranteed at run time, as long as dynamic resource allocations comply with the offline-analyzed resource configurations. This article describes our methodology and presents programming, optimization, analysis, and hardware techniques for enforcing timing predictability. A case study illustrates the timing-predictable management of real-time computer vision applications in dynamic robot system scenarios.
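
The run-time side of such a hybrid mapping methodology reduces to an admission check: an application is started only with a resource reservation that was verified offline. The sketch below is a minimal illustration of that idea; the data structure, resource types, and numbers are invented assumptions, not the invasive-computing API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class OperatingPoint:
    cores: int            # processing elements to reserve
    noc_bandwidth: int    # MB/s of network-on-chip bandwidth to reserve
    wcet_ms: float        # worst-case execution time verified at design time

free_cores, free_bw = 6, 800  # resources currently unallocated

# Offline design-space exploration results for one application,
# ordered from highest to lowest resource demand.
points = [
    OperatingPoint(cores=4, noc_bandwidth=600, wcet_ms=8.0),
    OperatingPoint(cores=2, noc_bandwidth=300, wcet_ms=14.5),
]

def admit(deadline_ms: float):
    """Return the first verified configuration that meets the deadline and
    fits in the free resources; reject (None) if no such point exists."""
    for p in points:
        if (p.wcet_ms <= deadline_ms
                and p.cores <= free_cores
                and p.noc_bandwidth <= free_bw):
            return p
    return None

print(admit(deadline_ms=10.0))  # -> 4-core point, WCET 8.0 ms
print(admit(deadline_ms=5.0))   # -> None: no verified point meets the deadline
```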


2013 ◽  
Vol 135 (8) ◽  
Author(s):  
Tao Ren ◽  
Michael F. Modest

Recently, it has become possible to conduct line-by-line (LBL) accurate radiative heat transfer calculations in spectrally highly nongray combustion systems using the Monte Carlo method. LBL accuracy, in principle, adds little to the computational load compared to gray calculations. However, when employing the Monte Carlo method, the original scheme for choosing appropriate emission wavenumbers for statistical photon bundles is numerically expensive. An improved wavelength-selection scheme has been applied in Monte Carlo solvers for hypersonic plasmas; however, directly applying this improved scheme to combustion gases may cause significant errors. In this paper, a hybrid scheme for wavenumber selection is proposed, significantly decreasing CPU requirements compared to previous work. The accuracy of the new method is established, and its time requirements are compared against those of the previous method.
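
The baseline emission-wavenumber selection can be written as an inversion of the spectral emission distribution: photon bundles are emitted at wavenumbers sampled in proportion to the local spectral emissive power. The sketch below illustrates that baseline only; the spectral grid and absorption coefficient are random placeholders rather than an LBL database, and this is not the paper's hybrid scheme.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical spectral grid and absorption coefficient; an LBL database
# would supply kappa_eta on a grid of millions of wavenumbers.
eta = np.linspace(100.0, 10000.0, 50000)   # wavenumber grid (cm^-1)
kappa = 0.1 + 0.5 * rng.random(eta.size)   # stand-in absorption coefficient

def planck(eta_cm, T):
    """Blackbody spectral intensity vs. wavenumber (unnormalised)."""
    c2 = 1.4388  # second radiation constant (cm*K)
    return eta_cm ** 3 / np.expm1(c2 * eta_cm / T)

# Emission CDF: cumulative sum of kappa_eta * I_b,eta over the grid.
weights = kappa * planck(eta, T=1500.0)
cdf = np.cumsum(weights)
cdf /= cdf[-1]

# Sample emission wavenumbers for photon bundles by inverting the CDF.
samples = eta[np.searchsorted(cdf, rng.random(5))]
print("emission wavenumbers (cm^-1):", np.round(samples, 1))
```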


2013 ◽  
Vol 117 (1192) ◽  
pp. 585-603 ◽  
Author(s):  
Z. Wang ◽  
J. Wang

Abstract Air-to-ground strike has become one of the main forms of modern warfare, and guided-bomb release is a key part of the attack. Guided-bomb release planning aims to achieve a precise hit on a target in enemy territory while guaranteeing the pilot's safety. We propose a robust Monte Carlo method that takes error perturbations into consideration and compare it to the traditional sequential quadratic programming method under extreme conditions. At the same time, using a distributed virtual-physics environment, we obtain much more detailed realism than a conventional simulator running on a single machine. The experimental results verify that Monte Carlo methods can improve hit probability and weapon efficiency significantly. Furthermore, because the 3D visualised environment plays a very important role in training pilots, this simulator will decrease the cost and time requirements of physical experiments, which are not always compatible with strict military tasks.
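
The core of such a robust Monte Carlo evaluation is to perturb the release conditions, propagate each perturbed trajectory, and count the fraction of impacts within an effective radius. The sketch below uses a drag-free ballistic model and invented dispersion values purely to illustrate the structure; the paper's simulation is far more detailed.

```python
import numpy as np

rng = np.random.default_rng(11)
g = 9.81
n_trials = 100_000

# Nominal release conditions (illustrative values, not from the paper).
v0, h0 = 250.0, 3000.0  # airspeed (m/s), release altitude (m)

# Error perturbations: hypothetical 1-sigma dispersions on speed,
# altitude, and a lumped cross-range guidance error.
v  = v0 + rng.normal(0.0, 5.0, n_trials)
h  = h0 + rng.normal(0.0, 30.0, n_trials)
yz = rng.normal(0.0, 8.0, n_trials)        # cross-range miss (m)

# Drag-free fall time and down-range travel for each perturbed release.
t_fall = np.sqrt(2.0 * h / g)
x = v * t_fall

# Miss distance relative to the nominal impact point.
x_nominal = v0 * np.sqrt(2.0 * h0 / g)
miss = np.hypot(x - x_nominal, yz)

lethal_radius = 50.0  # m, illustrative
print(f"estimated hit probability: {(miss <= lethal_radius).mean():.3f}")
```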

