Current state and future trends in high performance computing and communications (HPCC) research in India

Author(s):  
P.K. Sinha ◽  
S.P. Dixit ◽  
N. Mohanram ◽  
S.C. Purohit ◽  
R.K. Arora ◽  
...  
Acta Numerica ◽  
2012 ◽  
Vol 21 ◽  
pp. 379-474 ◽  
Author(s):  
J. J. Dongarra ◽  
A. J. van der Steen

This article describes the current state of the art of high-performance computing systems, and attempts to shed light on near-future developments that might prolong the steady growth in speed of such systems, which has been one of their most remarkable characteristics. We review the different ways devised to speed them up, both with regard to components and their architecture. In addition, we discuss the requirements for software that can take advantage of existing and future architectures.


Author(s):  
Ana Leiria ◽  
M. M. M. Moura

A broad view of the analysis of Doppler embolic signals is presented, uniting physics, engineering and computing, and clinical aspects. The overview of the field discusses the physiological significance of emboli and Doppler ultrasound, with particular attention given to Transcranial Doppler; an outline of high-performance computing is presented, disambiguating the terminology and concepts used thereafter. The presentation of the major diagnostic approaches to Doppler embolic signals focuses on the most significant methods and techniques used to detect and classify embolic events, including their clinical relevance. Coverage of estimators such as time-frequency, time-scale, and displacement-frequency is included. The discussion of current approaches targets areas of identified need for improvement. A brief historical perspective of high-performance computing of Doppler blood flow signals, and particularly Doppler embolic signals, is accompanied by the reasoning behind the technological trends and approaches. The final remarks summarize the contribution and, as future trends, point to pathways where new developments might be expected.
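As context for the time-frequency estimators this abstract mentions, the following is a minimal short-time Fourier transform (STFT) magnitude sketch in NumPy. The window length, hop size, and the synthetic 1 kHz test tone are illustrative assumptions, not parameters from the article.

```python
import numpy as np

def stft_magnitude(x, fs, win_len=256, hop=128):
    """Short-time Fourier transform magnitude: a basic time-frequency
    estimator of the kind used in Doppler embolic-signal analysis."""
    window = np.hanning(win_len)
    n_frames = 1 + (len(x) - win_len) // hop
    frames = np.stack([x[i * hop : i * hop + win_len] * window
                       for i in range(n_frames)])
    spec = np.abs(np.fft.rfft(frames, axis=1))    # |STFT| per frame
    freqs = np.fft.rfftfreq(win_len, d=1.0 / fs)  # frequency axis in Hz
    return freqs, spec

# Synthetic example: a 1 kHz "Doppler shift" tone sampled at 8 kHz
fs = 8000
t = np.arange(fs) / fs                 # one second of samples
x = np.sin(2 * np.pi * 1000 * t)
freqs, spec = stft_magnitude(x, fs)
peak_hz = freqs[np.argmax(spec.mean(axis=0))]
print(peak_hz)                         # dominant frequency near 1000 Hz
```

Real embolic-event detectors build on estimates like this, flagging frames whose energy exceeds the background blood-flow signal.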


Author(s):  
A.S. Antonov ◽  
I.V. Afanasyev ◽  
Vl.V. Voevodin

This paper provides an overview of the current state of supercomputer technology. The review is conducted from several points of view, from the design features of modern computing devices to the architecture of large supercomputer complexes. It includes descriptions of the most powerful supercomputers in the world and in Russia as of early 2021, as well as some less powerful systems that are interesting in other respects. The paper also focuses on development trends in the supercomputer industry and describes the best-known projects to build future exascale supercomputers.


Author(s):  
Yaser Jararweh ◽  
Moath Jarrah ◽  
Abdelkader Bousselham

Current state-of-the-art GPU-based systems offer unprecedented performance advantages by accelerating the most compute-intensive portions of applications by an order of magnitude. GPU computing presents a viable solution for the ever-increasing complexity of applications and the growing demand for immense computational resources. In this paper the authors investigate different platforms of GPU-based systems, from Personal Supercomputing (PSC) to cloud-based GPU systems. They explore and evaluate these platforms and present a comparison against conventional high-performance cluster-based computing systems. Their evaluation shows potential advantages of using GPU-based systems for high-performance computing applications while meeting different scaling granularities.


2016 ◽  
pp. 2373-2384
Author(s):  
Yaser Jararweh ◽  
Moath Jarrah ◽  
Abdelkader Bousselham



Author(s):  
Micha vor dem Berge ◽  
Wolfgang Christmann

This chapter describes the current state of, and ongoing efforts toward, integrating a monitoring and controlling architecture into modern computing environments. Benefits of this architecture include better knowledge of the available computing resources and the ability to establish a management platform that optimizes both energy efficiency and the availability of the computing environment, at a time of rapidly growing demand for computer systems. In two use cases, namely server virtualisation and High Performance Computing (HPC), implementations of an energy-efficiency management system were evaluated, showing that energy efficiency can be increased by more than 10%, depending on the use case and the workload.


MRS Bulletin ◽  
1997 ◽  
Vol 22 (10) ◽  
pp. 5-6
Author(s):  
Horst D. Simon

Recent events in the high-performance computing industry have concerned scientists and the general public regarding a crisis or a lack of leadership in the field. That concern is understandable considering the industry's history from 1993 to 1996. Cray Research, the historic leader in supercomputing technology, was unable to survive financially as an independent company and was acquired by Silicon Graphics. Two ambitious new companies that introduced new technologies in the late 1980s and early 1990s—Thinking Machines and Kendall Square Research—were commercial failures and went out of business. And Intel, which introduced its Paragon supercomputer in 1994, discontinued production only two years later. During the same time frame, scientists who had finished the laborious task of writing scientific codes to run on vector parallel supercomputers learned that those codes would have to be rewritten if they were to run on the next-generation, highly parallel architecture. Scientists who are not yet involved in high-performance computing are understandably hesitant about committing their time and energy to such an apparently unstable enterprise. However, beneath the commercial chaos of the last several years, a technological revolution has been occurring. The good news is that the revolution is over, leading to five to ten years of predictable stability, steady improvements in system performance, and increased productivity for scientific applications. It is time for scientists who were sitting on the fence to jump in and reap the benefits of the new technology.

