Malleable parallelism with minimal effort for maximal throughput and maximal hardware load

Author(s):  
Florian Spenke ◽  
Karsten Balzer ◽  
Sascha Frick ◽  
Bernd Hartke ◽  
Johannes M. Dieterich

We present a new parallel high-performance computing setup that can use every bit of computing resource left over by traditional scheduling, regardless of how small or large it may be. This enables HPC providers to achieve 100 percent load on their machines at all times, and it enables HPC users to obtain substantial computing time on HPC systems that are "full" with traditional jobs.
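As a rough illustration of the general idea only (not the setup presented here), the sketch below polls a Slurm-managed cluster for idle nodes and backfills each one with a short, requeueable worker job. The partition name "backfill", the 30-minute time limit, and the worker.sh script are assumptions made for the example.

```python
# Hedged sketch: soak up leftover resources on a Slurm cluster by backfilling
# idle nodes with short, requeueable worker jobs.  This only illustrates the
# idea, not the setup presented in the paper; the "backfill" partition, the
# 30-minute limit, and worker.sh are assumptions.
import subprocess

def idle_nodes():
    """Return the nodes Slurm currently reports as idle."""
    out = subprocess.run(
        ["sinfo", "-h", "-N", "-t", "idle", "-o", "%N"],
        capture_output=True, text=True, check=True,
    ).stdout
    return sorted({line.strip() for line in out.splitlines() if line.strip()})

def submit_worker(node):
    """Submit a single-node, requeueable worker job pinned to the given node."""
    subprocess.run(
        ["sbatch", "--partition=backfill", "--nodelist=" + node,
         "--requeue", "--time=00:30:00", "worker.sh"],
        check=True,
    )

if __name__ == "__main__":
    for node in idle_nodes():
        submit_worker(node)
```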


Electronics ◽  
2021 ◽  
Vol 10 (18) ◽  
pp. 2251
Author(s):  
Giuseppe Di Modica ◽  
Luca Evangelisti ◽  
Luca Foschini ◽  
Assimo Maris ◽  
Sonia Melandri

In recent years, the development of broadband chirped-pulse Fourier transform microwave spectrometers has revolutionized the field of rotational spectroscopy. It is now possible to experimentally obtain large quantities of spectra that are difficult to analyze manually, for two main reasons: recent instruments produce a considerable amount of data in very short times, and they make it possible to analyze complex mixtures of molecules whose components all contribute to the density of the spectra. AUTOFIT is a spectral assignment software application that was developed in 2013 to support and facilitate this analysis. Although AUTOFIT automates the analysis of the accumulated data, it does not guarantee good performance in terms of execution time, because it leverages the computing power of a single machine. To address this limitation, we developed a parallel version of AUTOFIT, called HS-AUTOFIT, capable of running on high-performance computing (HPC) clusters to shorten the time needed to explore and analyze large spectral datasets. In this paper, we report tests conducted on a real HPC cluster aimed at providing a quantitative assessment of HS-AUTOFIT's scaling capabilities in a multi-node computing context. The collected results demonstrate the benefits of the proposed approach in terms of a significant reduction in computing time.
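As a hedged sketch of the multi-node pattern such a parallelization relies on (not HS-AUTOFIT's actual code or API), the snippet below scatters independent candidate fits across MPI ranks with mpi4py and gathers the scored results on rank 0; fit_candidate() and the candidate list are placeholders.

```python
# Hedged sketch of a multi-node scatter/gather pattern: candidate assignments
# are independent, so they can be distributed across MPI ranks and the scored
# results gathered back.  fit_candidate() and the candidate list are
# hypothetical placeholders, not the HS-AUTOFIT API.
from mpi4py import MPI

def fit_candidate(candidate):
    # Placeholder: score one trial assignment against the measured spectrum.
    return {"candidate": candidate, "rms_error": 0.0}

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

if rank == 0:
    candidates = list(range(100_000))          # hypothetical search space
    chunks = [candidates[i::size] for i in range(size)]
else:
    chunks = None

local = comm.scatter(chunks, root=0)           # each rank gets its share
local_results = [fit_candidate(c) for c in local]
results = comm.gather(local_results, root=0)   # rank 0 collects all scores

if rank == 0:
    flat = [r for part in results for r in part]
    print(f"evaluated {len(flat)} candidates on {size} ranks")
```

Because the candidate evaluations are independent, this pattern involves essentially no communication during the fit itself, which is what makes multi-node scaling attractive for this kind of search.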


Author(s):  
Nichamon Naksinehaboon ◽  
Mihaela Păun ◽  
Raja Nassar ◽  
Chokchai Box Leangsuksun ◽  
Stephen Scott

Finding the failure rate of a system is a crucial step in the analysis of high-performance computing systems. To deal with failures, a fault-tolerance mechanism known as the checkpoint/restart technique was introduced; however, performing it incurs additional costs. We therefore propose two models for different schemes (full and incremental checkpointing). The models, which are based on the reliability of the system, are used to determine checkpoint placements. Both proposed models consider the balance between checkpoint overhead and re-computing time. Because each incremental checkpoint adds extra cost during the recovery period, a method for finding the number of incremental checkpoints between two consecutive full checkpoints is given. Our simulations suggest that in most cases the incremental checkpoint model reduces the waste time more than the full checkpoint model does. The waste times produced by both models range from 2% to 28% of the application completion time, depending on the checkpoint overhead.
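To make the overhead-versus-recomputation balance concrete, here is a small numerical illustration using the classic Young/Daly first-order approximation rather than the reliability-based models proposed here; the checkpoint cost and MTBF values are invented for the example.

```python
# Hedged illustration of the checkpoint-overhead vs. re-computation trade-off,
# using the classic Young/Daly first-order approximation, NOT the
# reliability-based models proposed in the paper.  All numbers are made up.
import math

def young_daly_interval(checkpoint_overhead_s, mtbf_s):
    """Near-optimal interval between full checkpoints (Young/Daly)."""
    return math.sqrt(2.0 * checkpoint_overhead_s * mtbf_s)

def expected_waste_fraction(interval_s, checkpoint_overhead_s, mtbf_s):
    """Rough waste estimate: checkpoint cost per interval plus expected rework."""
    checkpoint_cost = checkpoint_overhead_s / interval_s
    rework_cost = (interval_s / 2.0 + checkpoint_overhead_s) / mtbf_s
    return checkpoint_cost + rework_cost

if __name__ == "__main__":
    C = 60.0            # seconds to write one full checkpoint (assumed)
    MTBF = 24 * 3600.0  # one failure per day on average (assumed)
    tau = young_daly_interval(C, MTBF)
    print(f"checkpoint every {tau / 60:.1f} min, "
          f"waste ≈ {expected_waste_fraction(tau, C, MTBF):.1%} of runtime")
```

With these assumed numbers the estimated waste comes out at a few percent of the runtime, near the low end of the 2% to 28% range quoted above.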


MRS Bulletin ◽  
1997 ◽  
Vol 22 (10) ◽  
pp. 5-6
Author(s):  
Horst D. Simon

Recent events in the high-performance computing industry have concerned scientists and the general public regarding a crisis or a lack of leadership in the field. That concern is understandable considering the industry's history from 1993 to 1996. Cray Research, the historic leader in supercomputing technology, was unable to survive financially as an independent company and was acquired by Silicon Graphics. Two ambitious new companies that introduced new technologies in the late 1980s and early 1990s, Thinking Machines and Kendall Square Research, were commercial failures and went out of business. And Intel, which introduced its Paragon supercomputer in 1994, discontinued production only two years later. During the same time frame, scientists who had finished the laborious task of writing scientific codes to run on vector parallel supercomputers learned that those codes would have to be rewritten if they were to run on the next-generation, highly parallel architecture. Scientists who are not yet involved in high-performance computing are understandably hesitant about committing their time and energy to such an apparently unstable enterprise. However, beneath the commercial chaos of the last several years, a technological revolution has been occurring. The good news is that the revolution is over, leading to five to ten years of predictable stability, steady improvements in system performance, and increased productivity for scientific applications. It is time for scientists who were sitting on the fence to jump in and reap the benefits of the new technology.

