Exascale applications: skin in the game

Author(s):  
Francis Alexander ◽  
Ann Almgren ◽  
John Bell ◽  
Amitava Bhattacharjee ◽  
Jacqueline Chen ◽  
...  

As noted in Wikipedia, skin in the game refers to having ‘incurred risk by being involved in achieving a goal’, where ‘skin is a synecdoche for the person involved, and game is the metaphor for actions on the field of play under discussion’. For exascale applications under development in the US Department of Energy Exascale Computing Project, nothing could be more apt, with the skin being exascale applications and the game being the delivery of comprehensive, science-based computational applications that effectively exploit exascale high-performance computing technologies to provide breakthrough modelling, simulation and data science solutions. These solutions will yield high-confidence insights and answers to the most critical problems and challenges for the USA in scientific discovery, national security, energy assurance, economic competitiveness and advanced healthcare. This article is part of a discussion meeting issue ‘Numerical algorithms for high-performance computational science’.

Author(s):  
Hartwig Anzt ◽  
Erik Boman ◽  
Rob Falgout ◽  
Pieter Ghysels ◽  
Michael Heroux ◽  
...  

Sparse solvers provide essential functionality for a wide variety of scientific applications. Highly parallel sparse solvers are essential for continuing advances in high-fidelity, multi-physics and multi-scale simulations, especially as we target exascale platforms. This paper describes the challenges, strategies and progress of the US Department of Energy Exascale Computing Project towards providing sparse solvers for exascale computing platforms. We address the demands of systems with thousands of high-performance node devices, where exposing concurrency, hiding latency and creating alternative algorithms become essential. The efforts described here are works in progress, highlighting current successes and upcoming challenges. This article is part of a discussion meeting issue ‘Numerical algorithms for high-performance computational science’.
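To ground the discussion, the sketch below shows the kernel at the heart of most iterative sparse solvers: a sparse matrix–vector product in compressed sparse row (CSR) format. This is a generic illustration, not code from any ECP solver library, and the array names are mine. The row-wise independence is what the concern about exposing concurrency refers to: each row (or block of rows) can be handled by a separate thread or GPU warp.

```python
import numpy as np

def csr_spmv(indptr, indices, data, x):
    """y = A @ x for a CSR matrix A.

    indptr[i]:indptr[i+1] delimits the nonzeros of row i;
    indices holds their column positions, data their values.
    Rows are independent, which makes the kernel straightforward
    to parallelize across threads or GPU warps.
    """
    n = len(indptr) - 1
    y = np.zeros(n)
    for i in range(n):  # rows could be processed concurrently
        start, end = indptr[i], indptr[i + 1]
        y[i] = np.dot(data[start:end], x[indices[start:end]])
    return y

# Tiny example: a 3x3 tridiagonal matrix stored in CSR form
indptr  = np.array([0, 2, 5, 7])
indices = np.array([0, 1, 0, 1, 2, 1, 2])
data    = np.array([4.0, -1.0, -1.0, 4.0, -1.0, -1.0, 4.0])
x       = np.array([1.0, 2.0, 3.0])
print(csr_spmv(indptr, indices, data, x))  # -> [ 2.  4. 10.]
```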


Author(s):  
Thomas M Evans ◽  
Julia C White

Multiphysics coupling presents a significant challenge in terms of both computational accuracy and performance. Achieving high performance on coupled simulations can be particularly challenging in a high-performance computing context. The US Department of Energy Exascale Computing Project has the mission of preparing mission-relevant applications for the exascale computers being delivered starting in 2023. Many of these applications require multiphysics coupling, and the implementations must be performant on exascale hardware. In this special issue we feature six articles on advanced multiphysics coupling that span the computational science domains of the Exascale Computing Project.
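The featured articles describe their own coupling strategies in detail; purely as a generic illustration of the pattern, the sketch below shows a Picard (fixed-point) iteration between two hypothetical single-physics solvers, the simplest way to resolve the mutual dependence between coupled components. The solver callables and tolerance are placeholders, not an interface from any ECP code.

```python
def couple_picard(solve_a, solve_b, u0, v0, tol=1e-8, max_iter=50):
    """Fixed-point (Picard) coupling of two physics components.

    solve_a(v) returns field u given the other component's field v;
    solve_b(u) returns field v given u. Iterate until both fields
    stop changing, i.e. the coupled nonlinear system is satisfied.
    """
    u, v = u0, v0
    for k in range(max_iter):
        u_new = solve_a(v)
        v_new = solve_b(u_new)
        if abs(u_new - u) < tol and abs(v_new - v) < tol:
            return u_new, v_new, k + 1
        u, v = u_new, v_new
    raise RuntimeError("coupling iteration did not converge")

# Toy example: two scalar 'physics' that each depend on the other
u, v, iters = couple_picard(
    solve_a=lambda v: 0.5 * v + 1.0,   # stand-in for physics A
    solve_b=lambda u: 0.25 * u,        # stand-in for physics B
    u0=0.0, v0=0.0)
print(u, v, iters)  # converges to u = 8/7, v = 2/7
```

Production codes often replace plain Picard iteration with relaxation or Jacobian-free Newton–Krylov schemes when the fixed point converges slowly, but the control flow above is the basic shape of partitioned coupling.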


Author(s):  
Douglass E Post ◽  
Owen J Eslinger ◽  
Scott M Sundt ◽  
Megan Holland

The USA faces a multitude of threats to its national security and international interests in an era of exponential technology growth and unprecedented access by anyone with a smartphone. Traditionally, the acquisition of US defense systems has relied on sequential methods of conceptual design and development. While successful in the past, these methods are time-consuming and in danger of creating vulnerability gaps that could limit or constrain US response options. The challenge is clear: either the US Department of Defense (DoD) evolves the way it plans, develops, buys, and manufactures new weapons systems, or it cedes the high ground to a rapidly changing global environment. Adapting and expanding advances in high-performance computing (HPC), developing and employing complex physics-based software tools for high-fidelity modeling and simulation, and implementing a vision that combines these elements with other processes are critical enablers the DoD is pursuing. This paper describes the synergy of three major DoD efforts designed to address needs in the areas of acquisition program development and execution: the DoD High Performance Computing Modernization Program (HPCMP) Computational Research and Engineering Acquisition Tools and Environments program (the DoD HPCMP began as an Office of the Secretary of Defense program in 1992; in October 2011, leadership transferred to the Assistant Secretary of the Army for Acquisition, Logistics and Technology; the US Army Corps of Engineers Engineer Research and Development Center manages the program, https://www.hpc.mil/index.php); the Engineered Resilient Systems program; and the DoD Digital Engineering vision.


2021 ◽  
Vol 54 (1) ◽  
Author(s):  
Paul D. Bates

Every year flood events lead to thousands of casualties and significant economic damage. Mapping the areas at risk of flooding is critical to reducing these losses, yet until the last few years such information was available for only a handful of well-studied locations. This review surveys recent progress to address this fundamental issue through a novel combination of appropriate physics, efficient numerical algorithms, high-performance computing, new sources of big data, and model automation frameworks. The review describes the fluid mechanics of inundation and the models used to predict it, before going on to consider the developments that have led in the last five years to the creation of the first true fluid mechanics models of flooding over the entire terrestrial land surface.
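The review itself sets out the governing equations; as a minimal illustration of the kind of model it discusses, the sketch below implements a local inertial update for the discharge between two adjacent grid cells, an approximation to the shallow-water momentum equation that is widely used in large-scale inundation modelling. Variable names and the toy example are mine, not the review's.

```python
G = 9.81  # gravitational acceleration, m/s^2

def inertial_flux(q, z_l, h_l, z_r, h_r, n, dx, dt):
    """Unit-width discharge update between two adjacent cells.

    Local inertial approximation of the 1D shallow-water momentum
    equation: retain gravity and friction, drop the convective term.
    q    : discharge per unit width from the previous step (m^2/s)
    z, h : bed elevation and water depth in left/right cells (m)
    n    : Manning roughness coefficient
    """
    eta_l, eta_r = z_l + h_l, z_r + h_r          # water surface elevations
    h_flow = max(eta_l, eta_r) - max(z_l, z_r)   # depth available for flow
    if h_flow <= 0.0:
        return 0.0
    slope = (eta_r - eta_l) / dx                 # water surface slope
    # Semi-implicit friction term keeps the update stable at small depths
    return (q - G * h_flow * dt * slope) / (
        1.0 + G * dt * n**2 * abs(q) / h_flow ** (7.0 / 3.0))

# Example: water 0.5 m higher on the left drives flow to the right
print(inertial_flux(q=0.0, z_l=0.0, h_l=1.0, z_r=0.0, h_r=0.5,
                    n=0.03, dx=10.0, dt=1.0))  # positive discharge
```

Updating every cell face this way, then adjusting depths by the net flux into each cell, gives the explicit time-stepping loop that makes continental-scale inundation runs computationally tractable.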


Author(s):  
Francis J Alexander ◽  
James Ang ◽  
Jenna A Bilbrey ◽  
Jan Balewski ◽  
Tiernan Casey ◽  
...  

Rapid growth in data, computational methods, and computing power is driving a remarkable revolution in what is variously termed machine learning (ML), statistical learning, computational learning, and artificial intelligence. In addition to highly visible successes in machine-based natural language translation, playing the game Go, and self-driving cars, these new technologies also have profound implications for computational and experimental science and engineering, as well as for the exascale computing systems that the Department of Energy (DOE) is developing to support those disciplines. Not only do these learning technologies open up exciting opportunities for scientific discovery on exascale systems, but they also appear poised to have important implications for the design and use of exascale computers themselves, including high-performance computing (HPC) for ML and ML for HPC. The overarching goal of the ExaLearn co-design project is to provide exascale ML software for use by Exascale Computing Project (ECP) applications, other ECP co-design centers, and DOE experimental facilities and leadership-class computing facilities.
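As a toy illustration of one workflow such projects target (and emphatically not ExaLearn's actual API), the sketch below trains a cheap surrogate on the outputs of an expensive simulation stand-in: the training data come from simulation runs ('HPC for ML'), and the fitted model can then steer or replace future runs ('ML for HPC'). The function names and polynomial choice are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_simulation(x):
    """Stand-in for a costly physics code evaluated at parameter x."""
    return np.sin(3.0 * x) + 0.5 * x**2

# 1. Run the real code at sampled parameter points: the training
#    data itself comes from large simulation campaigns.
x_train = rng.uniform(-2.0, 2.0, size=50)
y_train = expensive_simulation(x_train)

# 2. Fit a cheap surrogate; here a least-squares polynomial, though
#    at scale one would use neural networks or Gaussian processes.
surrogate = np.poly1d(np.polyfit(x_train, y_train, deg=6))

# 3. Query the surrogate where the real code was never run.
for x in np.linspace(-2.0, 2.0, 5):
    print(f"x={x:+.2f}  true={expensive_simulation(x):+.4f}  "
          f"surrogate={surrogate(x):+.4f}")
```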


2014 ◽  
Vol 2014 ◽  
pp. 1-13 ◽  
Author(s):  
Florin Pop

Modern physics is based on both theoretical analysis and experimental validation. Complex regimes such as subatomic dimensions, high energies, and very low absolute temperatures are frontiers for many theoretical models. Simulation with stable numerical methods is an excellent instrument for high-accuracy analysis, experimental validation, and visualization. High-performance computing makes it possible to run such simulations at large scale, in parallel, but the volume of data generated by these experiments creates a new challenge for big data science. This paper presents existing computational methods for high-energy physics (HEP), analyzed from two perspectives: numerical methods and high-performance computing. The computational methods presented are Monte Carlo methods and simulations of HEP processes, Markovian Monte Carlo, unfolding methods in particle physics, kernel estimation in HEP, and random matrix theory used in the analysis of particle spectra. All of these methods produce data-intensive applications, which introduce new challenges and requirements for ICT system architectures, programming paradigms, and storage capabilities.
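Of the methods the paper surveys, Markovian Monte Carlo is the easiest to show in miniature. The sketch below is a generic random-walk Metropolis sampler, written for this summary rather than taken from the paper; the target density and step size are illustrative.

```python
import numpy as np

def metropolis(log_density, x0, n_samples, step=1.0, seed=0):
    """Generic random-walk Metropolis sampler.

    Draws a Markov chain whose stationary distribution has the given
    (unnormalized) log density: each Gaussian proposal is accepted
    with probability min(1, p(x') / p(x)).
    """
    rng = np.random.default_rng(seed)
    x, logp = x0, log_density(x0)
    samples = np.empty(n_samples)
    for i in range(n_samples):
        x_prop = x + step * rng.normal()
        logp_prop = log_density(x_prop)
        if np.log(rng.uniform()) < logp_prop - logp:  # accept/reject
            x, logp = x_prop, logp_prop
        samples[i] = x                                # else keep old x
    return samples

# Example: sample a standard normal and check its moments
chain = metropolis(lambda x: -0.5 * x**2, x0=0.0, n_samples=20000)
print(chain.mean(), chain.std())  # approximately 0 and 1
```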


Author(s):  
Gordon Bell ◽  
David H Bailey ◽  
Jack Dongarra ◽  
Alan H Karp ◽  
Kevin Walsh

The Gordon Bell Prize is awarded each year by the Association for Computing Machinery to recognize outstanding achievement in high-performance computing (HPC). The purpose of the award is to track the progress of parallel computing with particular emphasis on rewarding innovation in applying HPC to applications in science, engineering, and large-scale data analytics. Prizes may be awarded for peak performance or special achievements in scalability and time-to-solution on important science and engineering problems. Financial support for the US$10,000 award is provided through an endowment by Gordon Bell, a pioneer in high-performance and parallel computing. This article examines the evolution of the Gordon Bell Prize and the impact it has had on the field.

