ARCHER, a New Monte Carlo Software Tool for Emerging Heterogeneous Computing Environments

Author(s):  
X. George Xu ◽  
Tianyu Liu ◽  
Lin Su ◽  
Xining Du ◽  
Matthew Riblett ◽  
...  
2015 ◽  
Vol 82 ◽  
pp. 2-9 ◽  

2010 ◽  
Vol 03 (02) ◽  
pp. 91-102 ◽  
Author(s):  
TING LI ◽  
HUI GONG ◽  
QINGMING LUO

The Monte Carlo code MCML (Monte Carlo modeling of light transport in multi-layered tissue) has been the gold standard for simulations of light transport in multi-layered tissue, but it is ineffective in the presence of three-dimensional (3D) heterogeneity. New techniques have been attempted to resolve this problem, such as MCLS, which is derived from MCML, and tMCimg, which draws upon image datasets. Nevertheless, these approaches are insufficient because of their low precision or simplistic modeling. We report on the development of a novel model for photon migration in voxelized media (MCVM) with 3D heterogeneity. Voxel-crossing detection and refractive-index-unmatched boundaries were incorporated to improve precision and to eliminate the dependence on refractive-index-matched tissue. For a semi-infinite homogeneous medium, steady-state and time-resolved simulations with MCVM agreed well with MCML, achieving high precision (~100%) for the total diffuse reflectance and total fractional absorption, compared with tMCimg (<70%). For a refractive-index-matched heterogeneous skin model, the results of MCVM were found to coincide with those of MCLS. Finally, MCVM was applied to a two-layered sphere with multiple inclusions, an example of a 3D heterogeneous medium with refractive-index-unmatched boundaries. MCVM provides a reliable model for the simulation of photon migration in voxelized 3D heterogeneous media, and was developed as a flexible and simple software tool that delivers high-precision results.
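The voxel-crossing detection mentioned above can be sketched as follows. This is a minimal illustration of the general technique, not the MCVM implementation: the function names, the uniform cubic grid, and the single attenuation coefficient `mu_t` are all assumptions. The key idea is that a photon's sampled free path is truncated at the nearest voxel face, so each sub-step uses the optical properties of a single voxel.

```python
import math
import random

def distance_to_voxel_boundary(pos, direction, voxel_size):
    """Distance along `direction` from `pos` to the nearest voxel face
    of a uniform cubic grid with spacing `voxel_size`."""
    d = math.inf
    for p, u in zip(pos, direction):
        if u > 0:
            face = (math.floor(p / voxel_size) + 1) * voxel_size
            d = min(d, (face - p) / u)
        elif u < 0:
            face = math.floor(p / voxel_size) * voxel_size
            d = min(d, (face - p) / u)
    return d

def propagate(pos, direction, mu_t, voxel_size):
    """Advance the photon one sub-step: stop at the voxel face if it is
    closer than the sampled free path, so heterogeneity (and a possible
    refractive-index mismatch) can be handled at the crossing.
    Returns the new position and whether a voxel face was hit first."""
    s = -math.log(1.0 - random.random()) / mu_t  # sampled free path
    db = distance_to_voxel_boundary(pos, direction, voxel_size)
    step = min(s, db)
    new_pos = [p + u * step for p, u in zip(pos, direction)]
    return new_pos, step < s
```

At a detected crossing, a full code would look up the next voxel's optical properties and, for unmatched refractive indices, apply Fresnel reflection/refraction before continuing the remaining path.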


Author(s):  
Michael Hynes

A ubiquitous problem in physics is to determine expectation values of observables associated with a system. This problem is typically formulated as an integration of some likelihood over a multidimensional parameter space. In Bayesian analysis, numerical Markov Chain Monte Carlo (MCMC) algorithms are employed to solve such integrals using a fixed number of samples in the Markov chain. In general, MCMC algorithms are computationally expensive for large datasets and have difficulty sampling from multimodal parameter spaces. An MCMC implementation that is robust and inexpensive for researchers is therefore desirable. Distributed computing systems have shown the potential to act as virtual supercomputers, as in the SETI@home project, in which millions of private computers participate. We propose that a clustered peer-to-peer (P2P) computer network serves as an ideal structure on which to run Markovian state-exchange algorithms such as Parallel Tempering (PT). PT overcomes the difficulty of sampling from multimodal distributions by running multiple chains in parallel with different target distributions and exchanging their states in a Markovian manner. To demonstrate the feasibility of peer-to-peer Parallel Tempering (P2P PT), a simple two-dimensional dataset consisting of two Gaussian signals separated by a region of low probability was used in a Bayesian parameter-fitting algorithm. A small connected peer-to-peer network was constructed using separate processes on a Linux machine, and P2P PT was applied to the dataset. These sampling results were compared with those obtained from sampling the parameter space with a single chain. It was found that the single chain was unable to sample both modes effectively, while the P2P PT method explored the target distribution well, visiting both modes approximately equally.
Future work will involve scaling to many dimensions and large networks, and establishing convergence conditions for networks whose members have highly heterogeneous computing capabilities.
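The Markovian state exchange at the heart of PT can be sketched with the standard Metropolis swap criterion for chains at different inverse temperatures. The function names and the dictionary representation of a chain below are illustrative assumptions, not the authors' P2P code; in the P2P setting the two chains would be separate peers exchanging states over the network rather than entries in one list.

```python
import math
import random

def swap_accept_prob(beta_i, beta_j, energy_i, energy_j):
    """Metropolis acceptance probability for exchanging the states of two
    tempering chains with targets proportional to exp(-beta * E)."""
    return min(1.0, math.exp((beta_i - beta_j) * (energy_i - energy_j)))

def maybe_swap(chains, i, j, energy):
    """Attempt a state exchange between chains i and j.
    Returns True if the swap was accepted."""
    e_i = energy(chains[i]["state"])
    e_j = energy(chains[j]["state"])
    if random.random() < swap_accept_prob(
            chains[i]["beta"], chains[j]["beta"], e_i, e_j):
        chains[i]["state"], chains[j]["state"] = (
            chains[j]["state"], chains[i]["state"])
        return True
    return False
```

High-temperature (small-beta) chains move easily between modes, and accepted swaps let those states percolate down to the target chain, which is how PT visits both Gaussian modes of the test dataset.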


2018 ◽  
Vol 19 (1) ◽  
Author(s):  
Shakuntala Baichoo ◽  
Yassine Souilmi ◽  
Sumir Panji ◽  
Gerrit Botha ◽  
Ayton Meintjes ◽  
...  

2003 ◽  
Author(s):  
Sergey V. Babin ◽  
S. Borisov ◽  
E. Cheremukhin ◽  
Eugene Grachev ◽  
V. Korol ◽  
...  

1996 ◽  
Vol 4 (2-3) ◽  
pp. 97-117 ◽  
Author(s):  
R. Aversa ◽  
N. Mazzocca ◽  
U. Villano

2005 ◽  
Vol 15 (04) ◽  
pp. 423-438 ◽  
Author(s):  
RENATO P. ISHII ◽  
RODRIGO F. DE MELLO ◽  
LUCIANO J. SENGER ◽  
MARCOS J. SANTANA ◽  
REGINA H. C. SANTANA ◽  
...  

This paper presents a new model for evaluating the impact of processing operations resulting from communication among processes. The model quantifies the traffic volume imposed on the communication network by means of latency and overhead parameters. These parameters represent the load that each process imposes on the network and the delay it causes on the CPU as a consequence of network operations; this delay is represented in the model by the slowdown metric. Equations are defined that quantify the costs involved in processing operations and message exchange, and equations that determine the maximum network bandwidth are used in scheduling decision-making. The proposed model uses a constant that bounds the maximum allowed usage of the communication network; this constant selects between two possible scheduling techniques: group scheduling or scheduling through the communication network. These techniques are incorporated into the DPWP policy, generating an extension of that policy. Experimental and simulation results confirm the performance enhancement of parallel applications under supervision of the extended DPWP policy, compared to executions supervised by the original DPWP.
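The role of the usage-bounding constant can be illustrated with a toy decision rule. Everything below is an assumption for illustration only: the abstract does not give the DPWP equations, so the function name, the 0.8 threshold, and the two-way decision are hypothetical stand-ins for the paper's actual formulation.

```python
def choose_scheduling(traffic_volume, max_bandwidth, usage_limit=0.8):
    """Toy version of the bandwidth-bounded scheduling decision:
    if the predicted traffic would exceed the allowed fraction of the
    network bandwidth, fall back to group scheduling (co-locating the
    communicating processes); otherwise schedule across the network.
    `usage_limit` stands in for the paper's bounding constant."""
    if traffic_volume > usage_limit * max_bandwidth:
        return "group"
    return "network"
```

A real implementation would derive `traffic_volume` from the per-process latency, overhead, and slowdown parameters rather than take it as a ready-made number.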

