The changing face of high performance computing in the United States

1996 ◽  
Vol 1 (3-4) ◽  
pp. 309-311
Author(s):  
Ann Hayes
1997 ◽  
Vol 3 (1) ◽  
pp. 224-232
Author(s):  
Roslyn Leibensperger ◽  
Susan Mehringer ◽  
Anne Trefethen ◽  
Malvin Kalos

We have designed and implemented a virtual workshop: a workshop whose participants are distributed across the United States (and occasionally further afield), learning by interacting with their own computers. In this paper we describe our virtual workshop design and what we have learned from the first three hundred participants. The topics covered in our virtual workshop, by the nature of our mission, relate to technical, high-performance computing. We do not go into detail about the contents of the workshop but describe instead the structure and implementation, which we believe carry over to many other areas of education. The virtual workshop is a self-paced course. In addition to the on-line materials, we provide consulting support and access to our high-performance computer, the IBM SP2.


SIMULATION ◽  
2019 ◽  
Vol 96 (2) ◽  
pp. 221-232
Author(s):  
Mike Mikailov ◽  
Junshan Qiu ◽  
Fu-Jyh Luo ◽  
Stephen Whitney ◽  
Nicholas Petrick

Large-scale modeling and simulation (M&S) applications that do not require run-time inter-process communication can exhibit scaling problems when migrated to high-performance computing (HPC) clusters if traditional software parallelization techniques, such as POSIX multi-threading and the Message Passing Interface (MPI), are used. A comprehensive approach for scaling M&S applications on HPC clusters, called "computation segmentation," has been developed. Computation segmentation is based on the built-in array-job facility of job schedulers. Used correctly for appropriate applications, the array-job approach provides significant benefits that are not obtainable with other methods. The parallelization illustrated in this paper becomes quite complex in its own right when applied to extremely large M&S tasks, particularly because of the need for nested loops. At the United States Food and Drug Administration, the approach has provided unsurpassed efficiency, flexibility, and scalability for work that can be performed using embarrassingly parallel algorithms.
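The array-job pattern the abstract describes can be sketched as a scheduler script. The sketch below assumes a SLURM-style scheduler; the case counts and the `run_simulation` executable are illustrative placeholders, not details from the paper.

```shell
#!/bin/bash
#SBATCH --array=0-99           # 100 independent array tasks; no inter-process communication
# Computation-segmentation sketch: the scheduler's array-job facility
# splits a large set of independent M&S cases across tasks.

TOTAL_CASES=100000             # illustrative total number of independent cases
NUM_TASKS=100                  # must match the --array range above
CHUNK=$(( TOTAL_CASES / NUM_TASKS ))

TASK_ID=${SLURM_ARRAY_TASK_ID:-0}   # set by the scheduler; defaults to 0 off-cluster
START=$(( TASK_ID * CHUNK ))
END=$(( START + CHUNK - 1 ))

echo "task ${TASK_ID}: cases ${START}-${END}"
# for CASE in $(seq "$START" "$END"); do
#     ./run_simulation --case "$CASE"   # hypothetical per-case executable
# done
```

Because the tasks share no state, the scheduler is free to place them on any available cores, which is what makes the approach embarrassingly parallel.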


2018 ◽  
Author(s):  
Jiali Wang ◽  
Cheng Wang ◽  
Andrew Orr ◽  
Rao Kotamarthi

Abstract. Surface hydrological models must be calibrated for each application region. The Weather Research and Forecasting Hydrological system (WRF-Hydro) is a state-of-the-art numerical model that simulates the entire hydrological cycle based on physical principles. However, as with other hydrological models, WRF-Hydro parameterizes many physical processes. As a result, WRF-Hydro needs to be calibrated to optimize its output with respect to observations. When applied to a relatively large domain, both WRF-Hydro simulations and calibrations require intensive computing resources and are best performed in parallel. Typically, each physics parameterization requires a calibration process that works specifically with that model and is not transferrable to a different process or model. The Parameter Estimation Tool (PEST) is a flexible and generic calibration tool that can, in principle, calibrate any numerical code. In its current configuration, however, PEST is not designed to work on the current generation of massively parallel high-performance computing (HPC) clusters. This study ported the parallel PEST to HPC clusters and adapted it to work with WRF-Hydro. The porting involved writing scripts to modify the workflow for different workload managers and job schedulers, as well as developing code to connect the parallel PEST to WRF-Hydro. We developed a case study using a flood in the midwestern United States in 2013 to test the operational feasibility of the HPC-enabled parallel PEST. We then evaluated the WRF-Hydro performance in terms of the water volume and timing of the flood event, and assessed the spatial transferability of the calibrated parameters for the study area. Finally, we discuss the scale-up capability of the HPC-enabled parallel PEST to provide insight into PEST's application to other hydrological and earth system models on current and emerging HPC platforms.
We find that, for this particular study, the HPC-enabled PEST calibration tool can speed up WRF-Hydro calibration by a factor of 30 compared with commonly used sequential calibration approaches.
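The porting step described above can be pictured as a batch script that sizes the job from the calibration problem and launches PEST's manager and agent processes under the workload manager. Everything below is an illustrative sketch assuming a SLURM-style scheduler; `pest_manager`, `pest_agent`, and `calib.pst` are hypothetical names standing in for the actual parallel PEST executables and control file, not the paper's scripts.

```shell
#!/bin/bash
#SBATCH --nodes=4
# Sketch of adapting a parallel PEST run to a SLURM-managed cluster.
# Executable names and arguments are hypothetical placeholders.

N_PARAMS=22                    # parameters to calibrate (as in the case study)
N_AGENTS=$N_PARAMS             # one agent per parameter-perturbation model run
N_TASKS=$(( N_AGENTS + 1 ))    # agents plus one manager process

MANAGER_HOST=$(hostname)
PORT=4004                      # hypothetical port for manager/agent traffic

echo "requesting ${N_TASKS} tasks: 1 manager + ${N_AGENTS} agents on ${MANAGER_HOST}:${PORT}"
# srun -N1 -n1 pest_manager calib.pst ":${PORT}" &                    # manager
# srun -n "$N_AGENTS" pest_agent calib.pst "${MANAGER_HOST}:${PORT}"  # agents drive WRF-Hydro
```

Sizing the agent count to the number of calibrated parameters lets the finite-difference perturbation runs of one iteration proceed concurrently instead of one after another.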


2019 ◽  
Vol 12 (8) ◽  
pp. 3523-3539 ◽  
Author(s):  
Jiali Wang ◽  
Cheng Wang ◽  
Vishwas Rao ◽  
Andrew Orr ◽  
Eugene Yan ◽  
...  

Abstract. The Weather Research and Forecasting Hydrological (WRF-Hydro) system is a state-of-the-art numerical model that simulates the entire hydrological cycle based on physical principles. As with other hydrological models, WRF-Hydro parameterizes many physical processes. Hence, WRF-Hydro needs to be calibrated to optimize its output with respect to observations for the application region. When applied to a relatively large domain, both WRF-Hydro simulations and calibrations require intensive computing resources and are best performed on multinode, multicore high-performance computing (HPC) systems. Typically, each physics-based model requires a calibration process that works specifically with that model and is not transferrable to a different process or model. The parameter estimation tool (PEST) is a flexible and generic calibration tool that can be used in principle to calibrate any of these models. In its existing configuration, however, PEST is not designed to work on the current generation of massively parallel HPC clusters. To address this issue, we ported the parallel PEST to HPCs and adapted it to work with WRF-Hydro. The porting involved writing scripts to modify the workflow for different workload managers and job schedulers, as well as to connect the parallel PEST to WRF-Hydro. To test the operational feasibility and the computational benefits of this first-of-its-kind HPC-enabled parallel PEST, we developed a case study using a flood in the midwestern United States in 2013. Results on a problem involving the calibration of 22 parameters show that on the same computing resources used for parallel WRF-Hydro, the HPC-enabled parallel PEST can speed up the calibration process by a factor of up to 15 compared with the commonly used PEST in sequential mode. The speedup factor is expected to be greater with a larger calibration problem (e.g., more parameters to be calibrated or a larger study area).
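Connecting parallel PEST to WRF-Hydro implies that each agent runs its own model instance in a private working directory, so that concurrent runs do not clobber each other's files. A minimal staging sketch follows; the directory name `wrf_hydro_template` is an illustrative placeholder, though `namelist.hrldas` is a real WRF-Hydro input file used here only as an example of shared model input.

```shell
#!/bin/bash
# Stage one private run directory per PEST agent so concurrent
# WRF-Hydro runs read and write isolated input/output files.
set -e

N_AGENTS=22                    # one agent per calibrated parameter
TEMPLATE=wrf_hydro_template    # hypothetical directory holding shared model inputs

mkdir -p "$TEMPLATE"
touch "$TEMPLATE/namelist.hrldas"    # placeholder input file for the sketch

for i in $(seq 1 "$N_AGENTS"); do
    cp -r "$TEMPLATE" "agent_${i}"   # each agent gets an isolated copy
done

echo "staged $(ls -d agent_* | wc -l | tr -d ' ') agent directories"
```

Isolated per-agent directories are the usual price of file-based model couplings: the calibration tool communicates with the model only through input and output files, so parallelism has to be expressed in the file system layout.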

