High Performance Computing - Power Application Programming Interface Specification

2015
Author(s): Suzanne Kelly, Kevin Pedretti, Ryan Grant, Stephen Olivier, ...

2011
Vol 61 (1), pp. 170-173
Author(s): Daniel L. Ayres, Aaron Darling, Derrick J. Zwickl, Peter Beerli, Mark T. Holder, ...

Symmetry
2020
Vol 12 (6), pp. 1029
Author(s): Anabi Hilary Kelechi, Mohammed H. Alsharif, Okpe Jonah Bameyi, Paul Joan Ezra, Iorshase Kator Joseph, ...

Power-consuming entities such as high performance computing (HPC) sites and large data centers are growing with the advance of information technology. In business, HPC is used to shorten product delivery times, reduce production costs, and decrease the time it takes to develop new products. Today's high level of computing power from supercomputers comes at the expense of consuming large amounts of electric power. To minimize the energy utilized by HPC entities, it is necessary to reduce the energy required by the computing systems themselves and by the resources needed to operate them. A database can support this goal: the power consumption of every component is sampled at regular intervals and stored in the database, and the stored information then serves as input data for energy-efficiency optimization. Device workload information and other usage metrics are stored in the database as well. There has been strong momentum in the area of artificial intelligence (AI) as a tool for optimization and process automation by leveraging already existing information. This paper discusses ideas for improving energy efficiency for HPC using AI.
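As a rough illustration of the sampling scheme the abstract describes, the following Python sketch polls a set of components at a fixed interval and appends the readings to a SQLite table. The function `read_power_watts` is a hypothetical placeholder: a real site would query RAPL counters, IPMI, or a vendor interface such as the Power API instead.

```python
import sqlite3
import time

def read_power_watts(component: str) -> float:
    """Hypothetical sensor read; substitute RAPL, IPMI,
    or a vendor power API at a real installation."""
    raise NotImplementedError

def sample_loop(components, interval_s=1.0, db_path="power.db"):
    # One row per (timestamp, component) reading; this table is the
    # input data an energy-efficiency optimizer would consume.
    con = sqlite3.connect(db_path)
    con.execute(
        "CREATE TABLE IF NOT EXISTS samples "
        "(ts REAL, component TEXT, watts REAL)"
    )
    while True:
        ts = time.time()
        for comp in components:
            con.execute(
                "INSERT INTO samples VALUES (?, ?, ?)",
                (ts, comp, read_power_watts(comp)),
            )
        con.commit()
        time.sleep(interval_s)
```

Workload information and other usage metrics would be logged alongside the power samples in the same way, giving the optimizer a joint view of consumption and activity.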


2018
Author(s): John M Macdonald, Christopher M Lalansingh, Christopher I Cooper, Anqi Yang, Felix Lam, ...

Abstract

Background: Most biocomputing pipelines are run on clusters of computers. Each type of cluster has its own API (application programming interface), which defines how a program must request the submission, content, and monitoring of jobs to be run on the cluster. Sometimes it is desirable to run the same pipeline on different types of cluster. This can happen in situations such as the following:

- different labs are collaborating, but they do not use the same type of cluster
- a pipeline is released to other labs as open source or commercial software
- a lab has access to multiple types of cluster and wants to choose between them for scaling, cost, or other purposes
- a lab is migrating its infrastructure from one cluster type to another
- during testing or travelling, it is often desired to run on a single computer

However, since each type of cluster has its own API, code that runs jobs on one type of cluster must be rewritten to run on a different type. To resolve this problem, we created a software module that generalizes the submission of pipelines across computing environments, including local compute, clouds, and clusters.

Results: HPCI (High Performance Computing Interface) is a Perl module that provides the interface to a standardized generic cluster. The HPCI module accepts a parameter specifying the cluster type, which it uses to load a driver HPCD::<cluster> that translates the abstract HPCI interface into the specific software interface. Simply by changing the cluster parameter, the same pipeline can be run on a different type of cluster with no other changes.

Conclusion: The HPCI module assists in writing Perl programs that can be run in different lab environments, with different site configuration requirements and different types of hardware clusters. Rather than rewriting portions of the program, it is only necessary to change a configuration file. Using HPCI, an application can manage collections of jobs to be run, specify ordering dependencies, detect the success or failure of jobs, and allow automatic retry of failed jobs (allowing for the possibility of a changed configuration, such as when the original attempt specified an inadequate memory allotment).
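HPCI itself is a Perl module and its real driver namespace is HPCD::<cluster>; the Python sketch below only mirrors the driver-dispatch idea the abstract describes, and every class and function name in it is hypothetical rather than HPCI's actual API.

```python
# Sketch of the driver-dispatch pattern: one abstract interface,
# one concrete driver per cluster type, selected by a parameter.
from abc import ABC, abstractmethod
import subprocess

class ClusterDriver(ABC):
    """Abstract interface every cluster backend must implement."""
    @abstractmethod
    def submit(self, command: str) -> str:
        """Submit a job; return a backend-specific job id."""

class LocalDriver(ClusterDriver):
    """Run on a single computer, e.g. during testing or travel."""
    def submit(self, command: str) -> str:
        return str(subprocess.Popen(command, shell=True).pid)

class SlurmDriver(ClusterDriver):
    """Submit to a Slurm cluster via sbatch."""
    def submit(self, command: str) -> str:
        out = subprocess.run(
            ["sbatch", "--wrap", command],
            capture_output=True, text=True, check=True,
        )
        # sbatch prints "Submitted batch job <id>"
        return out.stdout.split()[-1]

DRIVERS = {"local": LocalDriver, "slurm": SlurmDriver}

def get_driver(cluster: str) -> ClusterDriver:
    # The only place the cluster type matters: change the
    # parameter, keep the pipeline code unchanged.
    return DRIVERS[cluster]()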


Electronics
2021
Vol 10 (18), pp. 2251
Author(s): Giuseppe Di Modica, Luca Evangelisti, Luca Foschini, Assimo Maris, Sonia Melandri

In recent years, the development of broadband chirped-pulse Fourier transform microwave spectrometers has revolutionized the field of rotational spectroscopy. It is now possible to obtain experimentally a quantity of spectra that would be difficult to analyze manually, for two main reasons: first, recent instruments acquire a considerable amount of data in a very short time, and second, it is possible to analyze complex mixtures of molecules that all contribute to the density of the spectra. AUTOFIT is a spectral assignment software application that was developed in 2013 to support and facilitate this analysis. Notwithstanding the benefits AUTOFIT brings in terms of automating the analysis of the accumulated data, it still does not guarantee good execution-time performance because it relies on the computing power of a single machine. To address this limitation, we developed a parallel version of AUTOFIT, called HS-AUTOFIT, capable of running on high-performance computing (HPC) clusters to shorten the time needed to explore and analyze spectral big data. In this paper, we report tests conducted on a real HPC cluster aimed at providing a quantitative assessment of HS-AUTOFIT's scaling capabilities in a multi-node computing context. The collected results demonstrate the benefits of the proposed approach in terms of a significant reduction in computing time.
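The abstract does not spell out HS-AUTOFIT's decomposition beyond its multi-node character, so the following Python sketch only illustrates why this kind of spectral search parallelizes well: candidate assignments can be scored independently and the work divided among workers. `score_assignment` is a hypothetical stand-in for the actual fit of rotational constants to a candidate set of transitions, and the peak list is invented for the example.

```python
# Illustrative single-node sketch of the embarrassingly parallel
# structure of automated spectral assignment; a multi-node run
# would partition the candidate list across cluster nodes instead.
from itertools import combinations
from multiprocessing import Pool

def score_assignment(triple):
    """Hypothetical scorer: stands in for fitting rotational
    constants to a candidate triple of transition frequencies."""
    a, b, c = triple
    return abs(a - b) + abs(b - c)  # placeholder metric

def best_assignments(peaks_mhz, top_n=10, workers=4):
    triples = list(combinations(peaks_mhz, 3))
    with Pool(workers) as pool:          # score candidates in parallel
        scores = pool.map(score_assignment, triples)
    return sorted(zip(scores, triples))[:top_n]

if __name__ == "__main__":
    peaks = [8712.3, 9034.1, 9550.7, 10211.9, 11873.4]  # made-up peaks
    for score, triple in best_assignments(peaks, top_n=3):
        print(score, triple)
```

Because each candidate is scored independently, the same structure scales from a process pool on one machine to a job-per-partition layout across cluster nodes, which is the scaling behavior the paper measures.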

