Exascale Computing
Recently Published Documents


TOTAL DOCUMENTS: 194 (five years: 57)
H-INDEX: 16 (five years: 4)

2021, Vol 94 (12)
Author(s): Jürgen Köfinger, Gerhard Hummer

Abstract: The demands on the accuracy of force fields for classical molecular dynamics simulations are steadily growing as larger and more complex systems are studied over longer times. One way to meet these growing demands is to hand over the learning of force fields and their parameters to machines in a systematic, (semi)automatic manner. In doing so, we can take full advantage of exascale computing, the increasing availability of experimental data, and advances in quantum mechanical computations and the calculation of experimental observables from molecular ensembles. Here, we discuss and illustrate the challenges one faces in this endeavor and explore a way forward by adapting the Bayesian inference of ensembles (BioEn) method [Hummer and Köfinger, J. Chem. Phys. (2015)] for force field parameterization. In the Bayesian inference of force fields (BioFF) method developed here, the optimization problem is regularized by a simplified prior on the force field parameters and an entropic prior acting on the ensemble. The latter compensates for the unavoidable oversimplifications in the parameter prior. We determine optimal force field parameters using an iterative predictor–corrector approach, in which we run simulations, determine the reference ensemble using the weighted histogram analysis method (WHAM), and update the force field according to the BioFF posterior. We illustrate this approach for a simple polymer model, using the distance between two labeled sites as the experimental observable. By systematically resolving force field issues, instead of just reweighting a structural ensemble, the BioFF corrections extend to observables not included in the ensemble reweighting. We envision future force field optimization as a formalized, systematic, and (semi)automatic machine-learning effort that incorporates a wide range of data from experiment and high-level quantum chemical calculations, and takes advantage of exascale computing resources.
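As a concrete illustration of the reweighting step underlying BioEn-style methods, the sketch below finds ensemble weights by minimizing a chi-squared misfit to experimental averages plus an entropic regularizer, theta times the relative entropy to reference weights. This is a minimal sketch under our own assumptions (toy data, a softmax parameterization, and a SciPy optimizer); it is not the authors' implementation.

    import numpy as np
    from scipy.optimize import minimize

    def bioen_style_weights(y, Y, sigma, w0, theta):
        """Minimize chi^2/2 + theta * sum_i w_i log(w_i / w0_i) over ensemble weights."""
        def objective(g):
            e = np.exp(g - g.max())
            w = e / e.sum()                       # softmax: positive, normalized weights
            chi2 = 0.5 * np.sum(((w @ y - Y) / sigma) ** 2)
            s_kl = np.sum(w * np.log(w / w0))     # relative entropy to reference ensemble
            return chi2 + theta * s_kl
        res = minimize(objective, np.log(w0), method="L-BFGS-B")
        e = np.exp(res.x - res.x.max())
        return e / e.sum()

    # toy example: 5 conformations, 2 experimental observables
    rng = np.random.default_rng(0)
    y = rng.normal(size=(5, 2))                   # per-frame observables y[i, j]
    Y = np.array([0.1, -0.2])                     # experimental target averages
    sigma = np.array([0.05, 0.05])                # experimental uncertainties
    w0 = np.full(5, 0.2)                          # uniform reference weights
    w = bioen_style_weights(y, Y, sigma, w0, theta=1.0)

Large theta keeps the reweighted ensemble close to the reference; small theta lets the data dominate. This is the trade-off that the entropic prior controls in the BioFF posterior.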


Author(s): Timothy C Germann

We provide an overview of the six co-design centers within the U.S. Department of Energy’s Exascale Computing Project, each of which is described in more detail in a separate paper in this special issue. We also give a perspective on the evolution of computational co-design.


2021, Vol 13 (21), pp. 11782
Author(s): Taha Al-Jody, Hamza Aagela, Violeta Holmes

There is a tradition at our university of teaching and research in High Performance Computing (HPC) systems engineering. With exascale computing on the horizon and a shortage of HPC talent, new specialists are needed to secure the future of research computing. Whilst many institutions provide research computing training for users within their particular domain, few offer HPC engineering and infrastructure-related courses, making it difficult for students to acquire these skills. This paper outlines how and why we are training students in HPC systems engineering, including the technologies used in delivering this goal. We demonstrate the potential of a multi-tenant HPC system for education and research, using a novel container- and cloud-based architecture. This work builds on our previously published work, which uses the latest open-source technologies to create sustainable, fast, and flexible turn-key HPC environments with secure access via an HPC portal. The proposed multi-tenant HPC resources can be deployed on "bare metal" infrastructure or in the cloud. We evaluate our activities over the last five years in terms of recruitment metrics, skills-audit feedback from students, and research outputs enabled by multi-tenant usage of the resource.


Author(s): Francis J Alexander, James Ang, Jenna A Bilbrey, Jan Balewski, Tiernan Casey, ...

Rapid growth in data, computational methods, and computing power is driving a remarkable revolution in what is variously termed machine learning (ML), statistical learning, computational learning, and artificial intelligence. In addition to highly visible successes in machine-based natural language translation, playing the game Go, and self-driving cars, these new technologies also have profound implications for computational and experimental science and engineering, as well as for the exascale computing systems that the Department of Energy (DOE) is developing to support those disciplines. Not only do these learning technologies open up exciting opportunities for scientific discovery on exascale systems, but they also appear poised to have important implications for the design and use of exascale computers themselves, including high-performance computing (HPC) for ML and ML for HPC. The overarching goal of the ExaLearn co-design project is to provide exascale ML software for use by Exascale Computing Project (ECP) applications, other ECP co-design centers, and DOE experimental facilities and leadership-class computing facilities.


Author(s): Susan M Mniszewski, James Belak, Jean-Luc Fattebert, Christian FA Negre, Stuart R Slattery, ...

The Exascale Computing Project (ECP) is invested in co-design to assure that key applications are ready for exascale computing. Within ECP, the Co-design Center for Particle Applications (CoPA) is addressing challenges faced by particle-based applications across four "sub-motifs": short-range particle–particle interactions (e.g., those that often dominate molecular dynamics (MD) and smoothed particle hydrodynamics (SPH) methods), long-range particle–particle interactions (e.g., electrostatic MD and gravitational N-body), particle-in-cell (PIC) methods, and linear-scaling electronic structure and quantum molecular dynamics (QMD) algorithms. Our crosscutting co-designed technologies fall into two categories: proxy applications (or "apps") and libraries. Proxy apps are vehicles used to evaluate the viability of incorporating various types of algorithms, data structures, and architecture-specific optimizations, along with the associated trade-offs; examples include ExaMiniMD, CabanaMD, CabanaPIC, and ExaSP2. Libraries are modular components that multiple applications can use or build upon; CoPA has developed the Cabana particle library, the PROGRESS/BML libraries for QMD, and the SWFFT and fftMPI parallel FFT libraries. Success is measured by identifiable "lessons learned" that are translated either directly into parent production application codes or into libraries, with demonstrated performance and/or productivity improvements. The libraries and their use in CoPA's ECP application partner codes are also addressed.
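As a generic illustration of the short-range sub-motif (and not code from Cabana or ExaMiniMD), the sketch below bins particles into cells at least one cutoff wide, so each particle only interacts with particles in the 27 surrounding cells. The Lennard-Jones parameters and toy setup are our own assumptions.

    import itertools
    import numpy as np

    def lj_forces_cell_list(x, box, rc):
        """Serial cell-list Lennard-Jones forces (epsilon = sigma = 1), periodic cubic box."""
        ncell = max(1, int(box // rc))                   # cells at least rc wide
        cell_of = (x // (box / ncell)).astype(int) % ncell
        cells = {}
        for i, c in enumerate(map(tuple, cell_of)):      # bin particle indices by cell
            cells.setdefault(c, []).append(i)
        f = np.zeros_like(x)
        for c, members in cells.items():
            # periodic neighbor cells, deduplicated for small cell counts
            nbs = {tuple((c[k] + o[k]) % ncell for k in range(3))
                   for o in itertools.product((-1, 0, 1), repeat=3)}
            for nb in nbs:
                for i in members:
                    for j in cells.get(nb, ()):
                        if j <= i:
                            continue                     # count each pair exactly once
                        r = x[i] - x[j]
                        r -= box * np.round(r / box)     # minimum-image convention
                        r2 = r @ r
                        if r2 < rc * rc:
                            inv6 = (1.0 / r2) ** 3
                            fmag = 24.0 * inv6 * (2.0 * inv6 - 1.0) / r2
                            f[i] += fmag * r
                            f[j] -= fmag * r
        return f

    rng = np.random.default_rng(1)
    x = rng.uniform(0.0, 10.0, size=(64, 3))             # 64 particles in a 10^3 box
    forces = lj_forces_cell_list(x, box=10.0, rc=2.5)

Production codes parallelize exactly this loop structure across threads, GPUs, and MPI ranks; the Cabana library named in the abstract targets portable data layouts for this kind of particle workload.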


2021, Vol 4 (1), pp. 126-131
Author(s): Ulphat Bakhishov

Distributed exascale computing systems are HPC systems capable of performing one exaflop (10^18 floating-point operations per second) in a dynamic and interactive environment without central managers. In such an environment, each node must manage its own load, and basic rules of load distribution must be established for all nodes so that the load distribution can be optimized without central managers. In this paper, we propose an oscillation model for load distribution in fully distributed exascale systems, define several parameters for this model, and outline future work.
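The abstract does not specify the oscillation model itself. As a generic point of reference for fully decentralized load distribution, the sketch below implements a classic diffusion scheme in which every node repeatedly shifts a fraction of its load imbalance to its ring neighbors using only local information. The ring topology, the parameter alpha, and the function name are our own illustration, not the authors' model.

    import numpy as np

    def diffuse_load(load, alpha=0.25, steps=50):
        """Each node trades a fraction alpha of its imbalance with its two ring neighbors."""
        load = np.asarray(load, dtype=float).copy()
        for _ in range(steps):
            left, right = np.roll(load, 1), np.roll(load, -1)
            # every node applies the same local rule; no central manager involved
            load += alpha * (left - load) + alpha * (right - load)
        return load

    print(diffuse_load([100, 0, 0, 40, 0, 0, 0, 20]))  # converges toward the mean, 20

For 0 < alpha <= 0.5, each update is a convex combination of neighboring loads, so the total load is conserved while the imbalance decays.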


Author(s): Thomas M Evans, Andrew Siegel, Erik W Draeger, Jack Deslippe, Marianne M Francois, ...

The US Department of Energy Office of Science and the National Nuclear Security Administration initiated the Exascale Computing Project (ECP) in 2016 to prepare mission-relevant applications and scientific software for the delivery of exascale computers starting in 2023. The ECP currently supports 24 efforts directed at specific applications and six supporting co-design projects. These 24 application projects contain 62 application codes that are implemented in three high-level languages (C, C++, and Fortran) and use 22 combinations of graphics processing unit (GPU) programming models. The most common implementation language is C++, which is used in 53 different application codes. The most common programming models across ECP applications are CUDA and Kokkos, which are employed in 15 and 14 applications, respectively. This article provides a survey of the programming languages and models used in the ECP application codebase that will be used to achieve performance on future exascale hardware platforms.


Author(s): Thomas M Evans, Julia C White

Multiphysics coupling presents a significant challenge in terms of both computational accuracy and performance, and achieving high performance on coupled simulations can be particularly difficult in a high-performance computing context. The US Department of Energy Exascale Computing Project has the mission to prepare mission-relevant applications for the delivery of exascale computers starting in 2023. Many of these applications require multiphysics coupling, and the implementations must be performant on exascale hardware. In this special issue we feature six articles on advanced multiphysics coupling that span the computational science domains of the Exascale Computing Project.
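To make the coupling challenge concrete, here is a minimal fixed-point (Picard) iteration between two toy single-unknown "physics" solvers that exchange an interface value until it stops changing. This is a generic illustration of partitioned multiphysics coupling under our own assumptions (the toy solvers, the relaxation factor, and all names are invented); it is not code from any of the featured articles.

    def solve_fluid(t_wall):
        """Toy 'fluid' solver: wall heat flux as a function of wall temperature."""
        return 2.0 - 0.5 * t_wall

    def solve_solid(q_wall):
        """Toy 'solid' solver: wall temperature as a function of heat flux."""
        return 0.8 * q_wall

    def picard_coupling(t=0.0, tol=1e-10, max_iter=100, relax=0.7):
        """Alternate the two solvers, under-relaxing the exchanged interface value."""
        for k in range(max_iter):
            q = solve_fluid(t)                # physics A: flux from temperature
            t_new = solve_solid(q)            # physics B: temperature from flux
            if abs(t_new - t) < tol:
                return t_new, k               # interface value has converged
            t += relax * (t_new - t)          # under-relaxation damps oscillations
        return t, max_iter

    t_wall, iters = picard_coupling()         # converges to t = 8/7 ≈ 1.1429

Partitioned schemes like this keep each physics code intact but put the accuracy and performance burden on the exchange loop, which is the tension the abstract describes.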

