ComputeOps: Container for High Performance Computing

2020, Vol. 245, pp. 07006
Author(s): Cécile Cavet, Martin Souchal, Sébastien Gadrat, Gilles Grasseau, Andrea Sartirana, ...

The High Performance Computing (HPC) domain aims to optimize code in order to use the latest multicore and parallel technologies, including specific processor instructions. In this computing framework, portability and reproducibility are key concepts. One way to meet these requirements is to use Linux containers. These "light virtual machines" allow an application to be encapsulated, together with its environment, in Linux processes. Containers have recently been rediscovered for their ability to provide both a multi-infrastructure environment for developers and system administrators and reproducibility through image-building files. Two container solutions are emerging: Docker for microservices and Singularity for computing applications. We present here the status of the ComputeOps project, whose goal is to study the benefits of containers for HPC applications.
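
As a concrete illustration of the Singularity workflow described above, here is a minimal sketch (not taken from the paper): it assumes a Singularity 3.x CLI on the host, and the image name and bind path are hypothetical. The image is built once from an image-building recipe and can then be executed unchanged on any cluster node.

```python
import subprocess

# Hypothetical image name; assumes the Singularity CLI is installed.
IMAGE = "hpc-app.sif"

# Build an immutable SIF image from a Docker Hub base image.
# The resulting image file is the reproducible artifact described above.
subprocess.run(
    ["singularity", "build", IMAGE, "docker://ubuntu:20.04"],
    check=True,
)

# Run a command inside the container; host directories (e.g. /scratch)
# can be bind-mounted so the containerized code sees cluster storage.
subprocess.run(
    ["singularity", "exec", "--bind", "/scratch:/scratch", IMAGE,
     "cat", "/etc/os-release"],
    check=True,
)
```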

2019, Vol. 214, pp. 07004
Author(s): Cécile Cavet, Aurélien Bailly-Reyre, David Chamont, Olivier Dadoun, Alexandre Dehne Garcia, ...

The High Performance Computing (HPC) domain aims to optimize code to use the latest multicore and parallel technologies, including specific processor instructions. In this computing framework, portability and reproducibility are key concepts. One way to meet these requirements is to use Linux containers. These "light virtual machines" allow users to encapsulate an application, together with its environment, in processes. Containers have recently been highlighted because they provide a multi-infrastructure environment for both developers and system administrators. Furthermore, they offer reproducibility through image-building files. Two container solutions are emerging: Docker for microservices and Singularity for computing applications. We present here the ComputeOps project, which investigates the benefits of containers for HPC applications.
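
On the Docker side mentioned above, a similar sketch using the Docker SDK for Python (`pip install docker`; assumes a local Docker daemon; the pinned image tag and command are illustrative, not from the paper):

```python
import docker  # Docker SDK for Python: pip install docker

# Hypothetical microservice-style example; assumes a running Docker daemon.
client = docker.from_env()

# Running a container from a pinned image tag gives the reproducibility
# guarantee discussed above: the tag identifies a fixed environment.
output = client.containers.run(
    image="python:3.10-slim",  # pinned base image
    command=["python", "-c", "print('hello from a container')"],
    remove=True,               # clean up the container after it exits
)
print(output.decode())
```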


2021
Author(s): Mohsen Hadianpour, Ehsan Rezayat, Mohammad-Reza Dehaqani

Abstract: Due to drastic progress in neurophysiological recording technologies, neuroscientists face various complexities in dealing with unstructured large-scale neural data. In the neuroscience community, these complexities can create serious bottlenecks in storing, sharing, and processing neural datasets. In this article, we developed a distributed high-performance computing (HPC) framework called the Big Neuronal Data Framework (BNDF) to overcome these complexities. BNDF is based on the open-source big-data frameworks Hadoop and Spark, providing a flexible and scalable structure. We examined BNDF on three different large-scale electrophysiological recording datasets from nonhuman primate brains. Our results exhibited faster runtimes and scalability due to the distributed nature of BNDF. We compared BNDF to a widely used platform, MATLAB, on equivalent computational resources. Compared with similar methods, BNDF delivers more than five times faster performance on spike sorting, a common neuroscience application.
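
To make the Spark side concrete, here is a toy sketch of the kind of distributed job such a framework enables. This is not BNDF's actual API; the channel layout, sizes, and threshold rule are illustrative assumptions (the median-based noise estimate is a common first step of spike detection, not a claim about BNDF's internals).

```python
import numpy as np
from pyspark.sql import SparkSession

# Toy example: distributed threshold-crossing detection across channels.
spark = SparkSession.builder.appName("spike-detection-sketch").getOrCreate()
sc = spark.sparkContext

# Hypothetical input: one (channel_id, samples) pair per recording channel.
channels = [(ch, np.random.randn(30_000).tolist()) for ch in range(64)]
rdd = sc.parallelize(channels, numSlices=16)

def count_threshold_crossings(record, k=4.0):
    """Count samples whose amplitude exceeds k times the estimated noise."""
    ch, samples = record
    x = np.asarray(samples)
    noise = np.median(np.abs(x)) / 0.6745  # robust (MAD-based) noise estimate
    return ch, int(np.sum(np.abs(x) > k * noise))

# Each partition is processed on a different executor, which is where the
# speedup over a single-node MATLAB workflow comes from.
counts = dict(rdd.map(count_threshold_crossings).collect())
print(counts[0])
spark.stop()
```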


2000, Vol. 8 (2), pp. 95-108
Author(s): Joan M. Francioni, Cherri M. Pancake

Throughout 1998, the High Performance Debugging Forum worked on defining a base-level standard for high-performance debuggers. The standard had to meet the sometimes conflicting constraints of being useful to users, realistically implementable by developers, and architecturally independent across multiple platforms. To meet criteria for timeliness, the standard had to be defined in one year and in such a way that it could be implemented within an additional year. The Forum was successful, and in November 1998 released Version 1 of the HPD Standard. Implementations of the standard are currently underway. This paper presents an overview of Version 1 of the standard and an analysis of the process by which the standard was developed. The status of implementation efforts and plans for follow-on efforts are discussed as well.


2016, Vol. 31 (6), pp. 1985-1996
Author(s): David Siuta, Gregory West, Henryk Modzelewski, Roland Schigas, Roland Stull

Abstract: As cloud-service providers like Google, Amazon, and Microsoft decrease costs and increase performance, numerical weather prediction (NWP) in the cloud will become a reality not only for research use but for real-time use as well. The performance of the Weather Research and Forecasting (WRF) Model on the Google Cloud Platform is tested, and virtual-machine configurations and optimizations are found that meet two main requirements of real-time NWP: 1) fast forecast completion (timeliness) and 2) economic cost effectiveness compared with traditional on-premise high-performance computing hardware. Optimum performance was found by using the Intel compiler collection with no more than eight virtual CPUs per virtual machine. With these configurations, real-time NWP on the Google Cloud Platform is economically competitive with the purchase of local high-performance computing hardware for NWP needs. Cloud-computing services are becoming viable alternatives to on-premise compute clusters for some applications.
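
The cost-effectiveness question above can be framed as a simple break-even calculation. The sketch below is illustrative only: every price, runtime, and lifetime figure in it is a hypothetical assumption, not a number from the study.

```python
# Hypothetical break-even comparison between cloud and on-premise HPC for a
# real-time NWP workload. All figures below are illustrative assumptions.

FORECASTS_PER_DAY = 2           # e.g. two WRF cycles per day
VM_HOURS_PER_FORECAST = 24      # assumed 8-vCPU VM-hours consumed per forecast
CLOUD_PRICE_PER_VM_HOUR = 0.30  # assumed USD price of one 8-vCPU VM-hour

CLUSTER_PURCHASE_COST = 60_000  # assumed on-premise hardware cost, USD
CLUSTER_LIFETIME_YEARS = 4      # assumed amortization window
CLUSTER_ANNUAL_OPEX = 5_000     # assumed power, cooling, admin, USD/year

cloud_annual = (FORECASTS_PER_DAY * VM_HOURS_PER_FORECAST
                * CLOUD_PRICE_PER_VM_HOUR * 365)
onprem_annual = (CLUSTER_PURCHASE_COST / CLUSTER_LIFETIME_YEARS
                 + CLUSTER_ANNUAL_OPEX)

print(f"cloud:   ${cloud_annual:,.0f}/year")
print(f"on-prem: ${onprem_annual:,.0f}/year")
print("cloud is cheaper" if cloud_annual < onprem_annual else "on-prem is cheaper")
```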

