Using Lightweight Virtual Machines to Run High Performance Computing Applications: The Case of the Weather Research and Forecasting Model

Author(s):  
H. A. Duran-Limon ◽  
L. A. Silva-Banuelos ◽  
V. H. Tellez-Valdez ◽  
N. Parlavantzas ◽  
Ming Zhao


2020 ◽  
Vol 3 ◽  
pp. 30 ◽  
Author(s):  
Charles Molongwane ◽  
Mary-Jane M. Bopape ◽  
Ann Fridlind ◽  
Tshiamo Motshegwa ◽  
Toshihisa Matsui ◽  
...  

Background: Numerical weather and climate models rely on microphysics schemes to simulate clouds and produce precipitation at convective scales. It is important to understand how different microphysics schemes perform when simulating high-impact weather, to inform operational forecasting. Methods: Simulations of a heavy rainfall event over Botswana from 17 to 20 February 2017 were made with the Weather Research and Forecasting (WRF) model using four different microphysics schemes: the WRF Single-Moment 6-class scheme (WSM6), the WRF Single-Moment 5-class scheme (WSM5), the Stony Brook University scheme (SBU-YLIN), and the Thompson scheme. WSM5 is considered the least sophisticated of the four schemes, while Thompson is the most sophisticated. Simulations were initialized and forced by the Global Forecast System (GFS), and configured with a grid spacing of 9 km over an outer domain and 3 km for a nested inner domain, the latter without convection parameterization. The simulations were produced on the University of Botswana and Centre for High Performance Computing (CHPC) High Performance Computing (HPC) systems. Results: WSM5 and WSM6 simulations are mostly similar; the presence of graupel in WSM6 did not result in large differences in the rainfall simulations. SBU-YLIN simulated the least rainfall, followed by Thompson. All the schemes captured the north-south rainfall gradient observed on 17 February, but in all simulations rainfall falls slightly south of where it was observed. All the schemes overestimated rainfall on 18 February over the central parts of Botswana and underestimated rainfall on 19 February over most of the country. Conclusions: Simulations with different microphysics schemes looked more similar to each other than to observations. 
Future studies will test WRF configurations including a single nest over Botswana to determine the best configuration for operational forecasting by the Botswana Department of Meteorological Services.
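The scheme comparison described above reduces to measuring how each simulated rainfall field deviates from observations. A minimal sketch of such a comparison is below; the 2×2 arrays and all values are purely illustrative stand-ins for WRF output and gridded observations, not the paper's data.

```python
import numpy as np

# Hypothetical 2D rainfall accumulations (mm) on the inner domain; real fields
# would come from WRF output and station/satellite observations.
observed = np.array([[10.0, 5.0], [2.0, 0.0]])
simulated = {
    "WSM5":     np.array([[12.0, 6.0], [1.0, 0.5]]),
    "WSM6":     np.array([[12.5, 6.0], [1.0, 0.5]]),
    "SBU-YLIN": np.array([[ 7.0, 3.0], [0.5, 0.0]]),
    "Thompson": np.array([[ 9.0, 4.0], [1.0, 0.0]]),
}

def mean_bias(sim, obs):
    """Domain-mean bias (mm): positive means overestimation."""
    return float(np.mean(sim - obs))

biases = {scheme: mean_bias(field, observed) for scheme, field in simulated.items()}
for scheme, bias in sorted(biases.items(), key=lambda kv: kv[1]):
    print(f"{scheme}: {bias:+.2f} mm")
```

Ranking schemes by such a bias (or by RMSE, spatial correlation, etc.) is one common way to condense multi-scheme experiments into a comparable summary.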


2016 ◽  
Vol 31 (6) ◽  
pp. 1985-1996 ◽  
Author(s):  
David Siuta ◽  
Gregory West ◽  
Henryk Modzelewski ◽  
Roland Schigas ◽  
Roland Stull

Abstract As cloud-service providers like Google, Amazon, and Microsoft decrease costs and increase performance, numerical weather prediction (NWP) in the cloud will become a reality not only for research use but for real-time use as well. The performance of the Weather Research and Forecasting (WRF) Model on the Google Cloud Platform is tested and configurations and optimizations of virtual machines that meet two main requirements of real-time NWP are found: 1) fast forecast completion (timeliness) and 2) economic cost effectiveness when compared with traditional on-premise high-performance computing hardware. Optimum performance was found by using the Intel compiler collection with no more than eight virtual CPUs per virtual machine. Using these configurations, real-time NWP on the Google Cloud Platform is found to be economically competitive when compared with the purchase of local high-performance computing hardware for NWP needs. Cloud-computing services are becoming viable alternatives to on-premise compute clusters for some applications.
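The cost-effectiveness comparison in this abstract hinges on two different accounting models: pay-per-use cloud billing versus amortized on-premise hardware. The sketch below illustrates that arithmetic with invented numbers; the rates, cluster size, and capital costs are assumptions for illustration, not figures from the paper.

```python
# Hypothetical numbers illustrating the cloud-vs-on-premise cost trade-off.
def cloud_cost_per_forecast(vm_hourly_rate, n_vms, wallclock_hours):
    """Pay-per-use: cost accrues only while the forecast runs."""
    return vm_hourly_rate * n_vms * wallclock_hours

def onprem_cost_per_forecast(capex, lifetime_years, forecasts_per_day, opex_per_year=0.0):
    """Amortize purchase and operating costs over every forecast in the cluster's lifetime."""
    total = capex + opex_per_year * lifetime_years
    return total / (lifetime_years * 365 * forecasts_per_day)

cloud = cloud_cost_per_forecast(vm_hourly_rate=0.20, n_vms=8, wallclock_hours=2.0)
onprem = onprem_cost_per_forecast(capex=50_000, lifetime_years=4,
                                  forecasts_per_day=4, opex_per_year=5_000)
print(f"cloud:   ${cloud:.2f} per forecast")
print(f"on-prem: ${onprem:.2f} per forecast")
```

Under pay-per-use billing, idle time costs nothing, which is why cloud NWP can be competitive even when per-hour compute is pricier than amortized hardware.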


2020 ◽  
Vol 245 ◽  
pp. 07006
Author(s):  
Cécile Cavet ◽  
Martin Souchal ◽  
Sébastien Gadrat ◽  
Gilles Grasseau ◽  
Andrea Satirana ◽  
...  

The High Performance Computing (HPC) domain aims to optimize code to exploit the latest multicore and parallel technologies, including specific processor instructions. In this computing framework, portability and reproducibility are key concepts. One way to meet these requirements is to use Linux containers. These “light virtual machines” encapsulate an application together with its environment in ordinary Linux processes. Containers have recently been rediscovered because they provide a multi-infrastructure environment for both developers and system administrators, as well as reproducibility through image build files. Two container solutions are emerging: Docker for microservices and Singularity for computing applications. We present here the status of the ComputeOps project, whose goal is to study the benefits of containers for HPC applications.
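The reproducibility argument above rests on the image build file: the whole software environment is captured as data, so builds can be compared and pinned exactly. A minimal sketch of that idea follows; the recipe contents and field names are illustrative, not a real Docker or Singularity recipe.

```python
import hashlib
import json

# A container recipe expressed as plain data (illustrative, hypothetical names).
recipe = {
    "base": "centos:7",
    "packages": ["gcc", "openmpi", "wrf-4.1"],
    "env": {"OMP_NUM_THREADS": "8"},
}

def recipe_digest(recipe):
    """Canonical JSON -> SHA-256, mimicking how container images are content-addressed."""
    canonical = json.dumps(recipe, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

# Any change to the environment changes the digest, so an environment can be
# pinned and verified by its hash rather than by "it worked on my machine".
print(recipe_digest(recipe)[:12])
```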


Author(s):  
Ouidad Achahbar ◽  
Mohamed Riduan Abid

The ongoing pervasiveness of Internet access is greatly increasing Big Data production. This, in turn, increases the demand for compute power to process this massive data, rendering High Performance Computing (HPC) a highly solicited service. Based on the paradigm of providing computing as a utility, the Cloud offers user-friendly infrastructures for processing Big Data, e.g., High Performance Computing as a Service (HPCaaS). Still, HPCaaS performance is tightly coupled with the underlying virtualization technique, since the latter is responsible for creating the virtual machines that carry out data-processing jobs. In this paper, the authors evaluate the impact of virtualization on HPCaaS. They track HPC performance under different Cloud virtualization platforms, namely KVM and VMware-ESXi, and compare it against physical clusters. Each tested cluster exhibited different performance trends, yet the overall analysis of the findings showed that the selection of virtualization technology can lead to significant improvements when handling HPCaaS.
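Comparisons like the one described here are usually reported as relative overhead of each virtual platform against the physical baseline. The sketch below shows that calculation; the benchmark runtimes are invented for illustration and are not the paper's measurements.

```python
# Hypothetical wall-clock times (seconds) for the same HPC benchmark.
runtimes_s = {
    "physical":    100.0,
    "KVM":         112.0,
    "VMware-ESXi": 108.0,
}

def overhead_pct(virtual_s, physical_s):
    """Relative virtualization overhead in percent: 0 means native speed."""
    return 100.0 * (virtual_s - physical_s) / physical_s

baseline = runtimes_s["physical"]
for platform, t in runtimes_s.items():
    if platform != "physical":
        print(f"{platform}: {overhead_pct(t, baseline):+.1f}% overhead")
```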


Author(s):  
A.V. Baranov ◽  
E.A. Kiselev

The organization of cloud services for high-performance computing is complicated, first, by the high overhead of virtualization and, second, by the specifics of job and resource management systems in scientific supercomputer centers. This paper considers an approach to building PaaS- and SaaS-type cloud services based on the joint operation of the Proxmox VE cloud platform and the parallel job management system used as the resource manager at the Joint Supercomputer Center of the Russian Academy of Sciences. Purpose. The purpose of this paper is to develop methods and technologies for building high-performance computing cloud services in scientific supercomputer centers. Methodology. To build a cloud environment for high-performance scientific computing (HPC), a corresponding three-level model and a method of combining flows of supercomputer jobs of various types were applied. Results. A technology for high-level HPC cloud services based on the free Proxmox VE software platform has been developed. The Proxmox VE platform has been integrated with the domestic supercomputer job management system SUPPZ. Experimental estimates of the overhead introduced into the high-performance computing process by the Proxmox components are obtained. Findings. An approach to integrating a supercomputer job management system with a virtualization platform is proposed. The presented approach is based on representing supercomputer jobs as virtual machines or containers. Using the Proxmox VE platform as an example, the influence of a virtual environment on the execution time of parallel programs is investigated experimentally. The applicability of the proposed approach to building PaaS- and SaaS-type cloud services in scientific supercomputing centers of collective use is substantiated for the class of applications for which the overhead introduced by the Proxmox components is acceptable.
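The core idea of this approach, representing each supercomputer job as either a virtual machine or a container, can be sketched as a simple scheduling decision. All class and field names below are hypothetical illustrations; the actual SUPPZ/Proxmox integration is far more involved.

```python
from dataclasses import dataclass

# Illustrative model: a job submitted to the resource manager is materialized
# on the virtualization platform as a full VM or a lighter container.
@dataclass
class Job:
    job_id: int
    cores: int
    needs_full_os: bool  # e.g. requires a custom kernel or kernel modules

def materialize(job):
    """Prefer the lower-overhead container when a shared kernel suffices."""
    kind = "VM" if job.needs_full_os else "container"
    return {"job_id": job.job_id, "kind": kind, "cores": job.cores}

queue = [Job(1, 64, False), Job(2, 128, True)]
plan = [materialize(j) for j in queue]
print(plan)
```

The VM-versus-container choice is exactly the overhead trade-off the paper measures: containers share the host kernel and add little cost, while VMs pay for full virtualization in exchange for isolation.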


2019 ◽  
Vol 214 ◽  
pp. 07004
Author(s):  
Cécile Cavet ◽  
Aurélien Bailly-Reyre ◽  
David Chamont ◽  
Olivier Dadoun ◽  
Alexandre Dehne Garcia ◽  
...  

The High Performance Computing (HPC) domain aims to optimize code to use the latest multicore and parallel technologies, including specific processor instructions. In this computing framework, portability and reproducibility are key concepts. One way to meet these requirements is to use Linux containers. These "light virtual machines" allow users to encapsulate an application together with its environment in ordinary processes. Containers have recently been highlighted because they provide a multi-infrastructure environment for both developers and system administrators. Furthermore, they offer reproducibility through image build files. Two container solutions are emerging: Docker for micro-services and Singularity for computing applications. We present here the ComputeOps project, which investigates the benefits of containers for HPC applications.

