WESTPA 2.0: High-performance upgrades for weighted ensemble simulations and analysis of longer-timescale applications

2021 ◽  
Author(s):  
John D. Russo ◽  
She Zhang ◽  
Jeremy M. G. Leung ◽  
Anthony T. Bogetti ◽  
Jeff P. Thompson ◽  
...  

The weighted ensemble (WE) family of methods is one of several statistical-mechanics based path sampling strategies that can provide estimates of key observables (rate constants, pathways) using a fraction of the time required by direct simulation methods such as molecular dynamics or discrete-state stochastic algorithms. WE methods oversee numerous parallel trajectories using intermittent overhead operations at fixed time intervals, enabling facile interoperability with any dynamics engine. Here, we report on major upgrades to the WESTPA software package, an open-source, high-performance framework that implements both basic and recently developed WE methods. These upgrades offer substantial improvements over traditional WE. Key features of the new WESTPA 2.0 software enhance efficiency and ease of use: an adaptive binning scheme for more efficient surmounting of large free energy barriers, streamlined handling of large simulation datasets, exponentially improved analysis of kinetics, and developer-friendly tools for creating new WE methods, including a Python API and resampler module for implementing both binned and “binless” WE strategies.
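To make the resampler idea concrete, the sketch below shows the basic split/merge bookkeeping that a binned WE resampler performs, with total statistical weight conserved. This is an illustrative, self-contained example only; the `Walker` class and `resample_bin` function are hypothetical stand-ins and do not reflect the actual WESTPA 2.0 API.

```python
# Minimal sketch of weighted-ensemble split/merge resampling within one bin.
# Hypothetical names; NOT the real WESTPA API.
import random
from dataclasses import dataclass

@dataclass
class Walker:
    state: float   # progress-coordinate value of this trajectory
    weight: float  # statistical weight carried by this trajectory

def resample_bin(walkers, target_count):
    """Split or merge walkers within one bin so that the bin holds
    target_count walkers while conserving total weight."""
    walkers = sorted(walkers, key=lambda w: w.weight, reverse=True)
    # Split: replicate the heaviest walker, dividing its weight in two.
    while len(walkers) < target_count:
        heavy = walkers.pop(0)
        half = heavy.weight / 2.0
        walkers += [Walker(heavy.state, half), Walker(heavy.state, half)]
        walkers.sort(key=lambda w: w.weight, reverse=True)
    # Merge: combine the two lightest walkers, keeping one of their states
    # with probability proportional to its weight.
    while len(walkers) > target_count:
        a, b = walkers.pop(), walkers.pop()
        total = a.weight + b.weight
        keep = a if random.random() < a.weight / total else b
        walkers.append(Walker(keep.state, total))
        walkers.sort(key=lambda w: w.weight, reverse=True)
    return walkers

# Example: four walkers of unequal weight resampled to a target of 3 per bin.
bin_walkers = [Walker(0.1, 0.5), Walker(0.2, 0.25), Walker(0.3, 0.2), Walker(0.4, 0.05)]
resampled = resample_bin(bin_walkers, target_count=3)
print(sum(w.weight for w in resampled))  # total weight remains 1.0
```

In a "binless" strategy, the grouping step would instead cluster trajectories on some feature of the sampled configurations rather than assigning them to fixed bins; the split/merge bookkeeping stays the same.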

2020 ◽  
Vol 13 (3) ◽  
pp. 313-318 ◽  
Author(s):  
Dhanapal Angamuthu ◽  
Nithyanandam Pandian

Background: Cloud computing is the modern trend in high-performance computing. It has become popular because of its characteristics of availability anywhere, elasticity, ease of use, cost-effectiveness, and more. Although the cloud grants various benefits, it comes with issues and challenges that prevent organizations from adopting it.

Objective: The objective of this paper is to cover several perspectives of cloud computing. This includes a basic definition of the cloud and a classification of clouds based on the delivery and deployment models. The broad classes of issues and challenges faced by organizations adopting the cloud computing model are explored, for example data-related issues and service-availability issues. The detailed sub-classifications of each issue and challenge are discussed; for instance, data-related issues are further classified into data security, data integrity, data location, and multitenancy issues. The paper also covers the typical problem of vendor lock-in and analyzes the various possible insider attacks unique to the cloud environment.

Results: Guidelines and recommendations for the different issues and challenges are discussed, and, most importantly, potential research areas in the cloud domain are explored.

Conclusion: This paper discusses cloud computing, its classifications, and the several issues and challenges faced in adopting the cloud. Guidelines and recommendations for these issues and challenges are covered, and potential research areas in the cloud domain are captured. This helps researchers, academicians, and industry to focus on and address the current challenges faced by customers.


Author(s):  
Mark Endrei ◽  
Chao Jin ◽  
Minh Ngoc Dinh ◽  
David Abramson ◽  
Heidi Poxon ◽  
...  

Rising power costs and constraints are driving a growing focus on the energy efficiency of high performance computing systems. The unique characteristics of a particular system and workload and their effect on performance and energy efficiency are typically difficult for application users to assess and to control. Settings for optimum performance and energy efficiency can also diverge, so we need to identify trade-off options that guide a suitable balance between energy use and performance. We present statistical and machine learning models that require only a small number of runs to make accurate Pareto-optimal trade-off predictions using parameters that users can control. We study model training and validation using several parallel kernels and more complex workloads, including Algebraic Multigrid (AMG), the Large-scale Atomic/Molecular Massively Parallel Simulator, and Livermore Unstructured Lagrangian Explicit Shock Hydrodynamics. We demonstrate that we can train the models using as few as 12 runs, with prediction error of less than 10%. Our AMG results identify trade-off options that provide up to 45% improvement in energy efficiency for around 10% performance loss. We reduce the sample measurement time required for AMG by 90%, from 13 h to 74 min.
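As a rough illustration of this kind of workflow (not the authors' actual models or data), the sketch below fits simple least-squares models to a handful of synthetic training runs, predicts runtime and energy for unmeasured configurations, and keeps only the Pareto-optimal trade-off points. All numbers and parameter names are invented for demonstration.

```python
# Illustrative sketch: predict energy/performance trade-offs from a few runs
# and extract the Pareto front. Synthetic data; not the paper's models.
import numpy as np

# Tunable parameters users control: (thread count, CPU frequency in GHz).
train_x = np.array([[4, 2.0], [8, 2.0], [16, 2.0],
                    [4, 3.0], [8, 3.0], [16, 3.0]], float)
train_time = np.array([120.0, 70.0, 45.0, 90.0, 55.0, 38.0])   # seconds (synthetic)
train_energy = np.array([3.0, 3.4, 4.1, 3.8, 4.4, 5.3])        # kJ (synthetic)

def fit(x, y):
    """Least-squares linear model y ~ [1, x]; a stand-in for the paper's
    statistical/machine learning models."""
    a = np.hstack([np.ones((len(x), 1)), x])
    coef, *_ = np.linalg.lstsq(a, y, rcond=None)
    return lambda q: np.hstack([np.ones((len(q), 1)), q]) @ coef

predict_time = fit(train_x, train_time)
predict_energy = fit(train_x, train_energy)

# Candidate configurations that were never measured.
candidates = np.array([[c, f] for c in (4, 6, 8, 12, 16)
                       for f in (2.0, 2.5, 3.0)], float)
t, e = predict_time(candidates), predict_energy(candidates)

# Keep points that no other point dominates in both predicted time and energy.
pareto = [i for i in range(len(candidates))
          if not any(j != i and t[j] <= t[i] and e[j] <= e[i]
                     and (t[j] < t[i] or e[j] < e[i])
                     for j in range(len(candidates)))]
for i in pareto:
    print(f"threads={candidates[i, 0]:.0f} freq={candidates[i, 1]:.1f}GHz "
          f"time~{t[i]:.0f}s energy~{e[i]:.1f}kJ")
```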


2021 ◽  
Author(s):  
Megan Grodowitz ◽  
Luis E. Pena ◽  
Curtis Dunham ◽  
Dong Zhong ◽  
Pavel Shamis ◽  
...  

2001 ◽  
Vol 356 (1412) ◽  
pp. 1209-1228 ◽  
Author(s):  
Nigel H. Goddard ◽  
Michael Hucka ◽  
Fred Howell ◽  
Hugo Cornelis ◽  
Kavita Shankar ◽  
...  

Biological nervous systems and the mechanisms underlying their operation exhibit astonishing complexity. Computational models of these systems have been correspondingly complex. As these models become ever more sophisticated, they become increasingly difficult to define, comprehend, manage and communicate. Consequently, for scientific understanding of biological nervous systems to progress, it is crucial for modellers to have software tools that support discussion, development and exchange of computational models. We describe methodologies that focus on these tasks, improving the ability of neuroscientists to engage in the modelling process. We report our findings on the requirements for these tools and discuss the use of declarative forms of model description (equivalent to object-oriented classes and database schema), which we call templates. We introduce NeuroML, a mark-up language for the neurosciences which is defined syntactically using templates, and its specific component intended as a common format for communication between modelling-related tools. Finally, we propose a template hierarchy for this modelling component of NeuroML, sufficient for describing models ranging in structural levels from neuron cell membranes to neural networks. These templates support both a framework for user-level interaction with models, and a high-performance framework for efficient simulation of the models.
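To make the template idea concrete, here is a small, self-contained sketch of a declarative model description and a tool reading it. The element names and attributes are hypothetical and do not follow the actual NeuroML schema; the point is only that the model is exchanged as structured data that any simulator or editor can parse.

```python
# Illustrative sketch of a declarative model description: the model is data,
# not code, so independent tools can read and write it.
# Hypothetical element names; not the real NeuroML schema.
import xml.etree.ElementTree as ET

model_xml = """
<model name="two_cell_network">
  <cell id="pyramidal" capacitance_pF="200">
    <channel type="Na" conductance_nS="120"/>
    <channel type="K"  conductance_nS="36"/>
  </cell>
  <cell id="interneuron" capacitance_pF="100">
    <channel type="K" conductance_nS="20"/>
  </cell>
  <connection from="pyramidal" to="interneuron" weight="0.5"/>
</model>
"""

root = ET.fromstring(model_xml)
print("Model:", root.get("name"))
for cell in root.findall("cell"):
    channels = [ch.get("type") for ch in cell.findall("channel")]
    print(f"  cell {cell.get('id')}: C={cell.get('capacitance_pF')} pF, "
          f"channels={channels}")
for conn in root.findall("connection"):
    print(f"  connection {conn.get('from')} -> {conn.get('to')} "
          f"(weight {conn.get('weight')})")
```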


Author(s):  
Gabor E. Gevay ◽  
Tilmann Rabl ◽  
Sebastian Bres ◽  
Lorand Madai-Tahy ◽  
Jorge-Arnulfo Quiane-Ruiz ◽  
...  

Author(s):  
Herbert Cornelius

For decades, HPC has established itself as an essential tool for discoveries, innovations and new insights in science, research and development, engineering and business across a wide range of application areas in academia and industry. Today, high-performance computing is also well recognized to be of strategic and economic value: HPC matters and is transforming industries. This article discusses emerging technologies being developed for all areas of HPC, including compute/processing, memory and storage, interconnect fabric, I/O and software, to address ongoing challenges in HPC such as balanced architecture, energy-efficient high performance, density, reliability, sustainability and, last but not least, ease of use. Of specific interest are the challenges and opportunities for the next frontier in HPC, envisioned around the 2020 timeframe: ExaFlops computing. We also outline the new and emerging area of High-Performance Data Analytics (big data analytics using HPC) and discuss the emerging delivery mechanism of HPC in the cloud.

