State of the Art and Future Trends in Data Reduction for High-Performance Computing

2020 ◽  
Vol 7 (1) ◽  
Author(s):  
Reiner Anderl ◽  
Orkun Yaman

High-Performance Computing (HPC) has become ubiquitous for simulations in the industrial context. To identify the requirements for the integration of HPC-relevant data and processes, a survey was conducted among German car manufacturers as well as service and component suppliers. This contribution presents the results of the evaluation and suggests an architecture concept for integrating data and workflows related to CAE and HPC facilities into PLM. It describes the state of the art of HPC applications within the simulation domain. Intensive efforts are currently invested in CAE data management; however, a systematic approach to HPC data management does not yet exist. This study establishes the importance of an integrative approach to data management for HPC applications and develops an architectural framework for implementing HPC data management in the existing PLM landscape. Requirements for key functionalities and interfaces are defined, and a framework for a reference information model is conceptualized.


Acta Numerica ◽  
2012 ◽  
Vol 21 ◽  
pp. 379-474 ◽  
Author(s):  
J. J. Dongarra ◽  
A. J. van der Steen

This article describes the current state of the art of high-performance computing systems, and attempts to shed light on near-future developments that might prolong the steady growth in speed of such systems, which has been one of their most remarkable characteristics. We review the different ways devised to speed them up, both with regard to components and their architecture. In addition, we discuss the requirements for software that can take advantage of existing and future architectures.


Author(s):  
Marc Casas ◽  
Wilfried N. Gansterer ◽  
Elias Wimmer

We investigate the usefulness of gossip-based reduction algorithms in a high-performance computing (HPC) context. We compare them to state-of-the-art deterministic parallel reduction algorithms in terms of fault tolerance and resilience against silent data corruption (SDC) as well as in terms of performance and scalability. New gossip-based reduction algorithms are proposed, which significantly improve the state-of-the-art in terms of resilience against SDC. Moreover, a new gossip-inspired reduction algorithm is proposed, which promises a much more competitive runtime performance in an HPC context than classical gossip-based algorithms, in particular for low accuracy requirements.
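The abstract does not spell out the authors' new algorithms, but the classical push-sum protocol (Kempe et al.) illustrates how a gossip-based reduction computes a global average without a fixed reduction tree. The sketch below is illustrative only, not the paper's proposed method; all names and the round-synchronous simulation are our own.

```python
import random

def push_sum_average(values, rounds=300, seed=0):
    """Estimate the global average of per-node values with push-sum gossip.

    Each node holds a (sum, weight) pair. In every round it keeps half of
    its pair and pushes the other half to a uniformly random peer. Because
    total sum and total weight are conserved, the ratio sum/weight at every
    node converges to the true global average.
    """
    rng = random.Random(seed)
    n = len(values)
    s = [float(v) for v in values]  # running sums
    w = [1.0] * n                   # running weights
    for _ in range(rounds):
        incoming_s = [0.0] * n
        incoming_w = [0.0] * n
        for i in range(n):
            half_s, half_w = s[i] / 2.0, w[i] / 2.0
            j = rng.randrange(n)    # random peer (may be self)
            incoming_s[i] += half_s  # keep one half locally
            incoming_w[i] += half_w
            incoming_s[j] += half_s  # push the other half to the peer
            incoming_w[j] += half_w
        s, w = incoming_s, incoming_w
    return [s[i] / w[i] for i in range(n)]
```

Because every node ends up with its own local estimate, the result does not depend on a single reduction root surviving, which is the resilience property that gossip approaches trade runtime performance for.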


2019 ◽  
Vol 2019 ◽  
pp. 1-19 ◽  
Author(s):  
Pawel Czarnul ◽  
Jerzy Proficz ◽  
Adam Krzywaniak

The paper presents the state of the art of energy-aware high-performance computing (HPC), in particular the identification and classification of approaches by system and device types, optimization metrics, and energy/power control methods. System types include single devices, clusters, grids, and clouds, while the considered device types include CPUs, GPUs, multiprocessor, and hybrid systems. Optimization goals include various combinations of metrics such as execution time, energy consumption, and temperature, with consideration of imposed power limits. Control methods include scheduling; DVFS/DFS/DCT; power capping via programmatic APIs such as Intel RAPL and NVIDIA NVML; application-level optimizations; and hybrid methods. We discuss tools and APIs for energy/power management as well as tools and environments for the prediction and/or simulation of energy/power consumption in modern HPC systems. Finally, programming examples, i.e., the applications and benchmarks used in particular works, are discussed. Based on our review, we identify a set of open areas and important up-to-date problems concerning methods and tools that enable energy-aware processing on modern HPC systems.
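As a concrete illustration of energy measurement with Intel RAPL on Linux, the sketch below derives average package power from two samples of the standard powercap sysfs energy counter. The helper and its default range value are our own assumptions for illustration; the actual counter path and `max_energy_range_uj` are machine-specific and must be read from the system.

```python
def average_power_watts(e0_uj, e1_uj, dt_s, max_range_uj=262_143_328_850):
    """Average power between two RAPL energy samples.

    e0_uj, e1_uj: successive readings (microjoules) of a counter such as
    /sys/class/powercap/intel-rapl:0/energy_uj (Linux powercap interface).
    The counter wraps at the value reported in max_energy_range_uj; the
    default here is only a placeholder, not a universal constant.
    """
    delta_uj = e1_uj - e0_uj
    if delta_uj < 0:                      # counter wrapped between samples
        delta_uj += max_range_uj
    return (delta_uj / 1_000_000) / dt_s  # microjoules -> joules -> watts
```

Sampling the counter at a fixed interval and feeding consecutive readings to this helper yields a power trace of the kind that DVFS and power-capping policies react to.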


Author(s):  
Al Geist ◽  
Daniel A. Reed

Commodity clusters revolutionized high-performance computing when they first appeared two decades ago. As scale and complexity have grown, new challenges have emerged in reliability and systemic resilience, energy efficiency and optimization, and software complexity, suggesting the need for a re-evaluation of current approaches. This paper reviews the state of the art and reflects on some of the challenges likely to be faced when building trans-petascale computing systems, using insights and perspectives drawn from operational experience and community debates.


Author(s):  
Ana Leiria ◽  
M. M. M. Moura

A broad view of the analysis of Doppler embolic signals is presented, uniting physics, engineering and computing, and clinical aspects. The overview of the field discusses the physiological significance of emboli and Doppler ultrasound, with particular attention given to Transcranial Doppler; an outline of high-performance computing is presented, disambiguating the terminology and concepts used thereafter. The presentation of the major diagnostic approaches to Doppler embolic signals focuses on the most significant methods and techniques used to detect and classify embolic events, including their clinical relevance. Coverage of time-frequency, time-scale, and displacement-frequency estimators is included. The discussion of current approaches targets areas with an identified need for improvement. A brief historical perspective on high-performance computing of Doppler blood flow signals, and particularly Doppler embolic signals, is accompanied by the reasoning behind the technological trends and approaches. The final remarks include, as a conclusion, a summary of the contribution and, as future trends, some pathways hinting at where new developments might be expected.
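The time-frequency estimators mentioned above can be illustrated with a minimal short-time Fourier transform. The NumPy sketch below (function name and parameters are our own, not taken from the article) produces the magnitude spectrogram in which an embolic transient would appear as a localized high-energy region.

```python
import numpy as np

def stft_magnitude(x, fs, win_len=256, hop=128):
    """Magnitude short-time Fourier transform: a basic time-frequency
    estimator of the kind used to localize transient Doppler events.

    Returns (frames, freqs): one magnitude spectrum per Hann-windowed
    segment, and the frequency (Hz) of each spectral bin.
    """
    window = np.hanning(win_len)
    frames = []
    for start in range(0, len(x) - win_len + 1, hop):
        seg = x[start:start + win_len] * window  # windowed segment
        frames.append(np.abs(np.fft.rfft(seg)))  # one-sided spectrum
    freqs = np.fft.rfftfreq(win_len, d=1.0 / fs)
    return np.array(frames), freqs
```

For a pure 1 kHz tone sampled at 8 kHz, every frame's spectrum peaks at the 1 kHz bin; a short embolic click would instead raise energy across frequencies in only a few consecutive frames.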

