A look back on 30 years of the Gordon Bell Prize

Author(s):  
Gordon Bell ◽  
David H Bailey ◽  
Jack Dongarra ◽  
Alan H Karp ◽  
Kevin Walsh

The Gordon Bell Prize is awarded each year by the Association for Computing Machinery to recognize outstanding achievement in high-performance computing (HPC). The purpose of the award is to track the progress of parallel computing with particular emphasis on rewarding innovation in applying HPC to applications in science, engineering, and large-scale data analytics. Prizes may be awarded for peak performance or special achievements in scalability and time-to-solution on important science and engineering problems. Financial support for the US$10,000 award is provided through an endowment by Gordon Bell, a pioneer in high-performance and parallel computing. This article examines the evolution of the Gordon Bell Prize and the impact it has had on the field.

2014 ◽  
Vol 556-562 ◽  
pp. 4746-4749
Author(s):  
Bin Chu ◽  
Da Lin Jiang ◽  
Bo Cheng

This paper concerns large-scale mosaicking of remotely sensed images. Based on a high-performance computing system, we propose a method that decomposes the problem and integrates the sub-tasks according to their logical and physical relationships. The mosaicking of large-scale remotely sensed images is improved in both performance and effectiveness.
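The decomposition idea can be illustrated with a small sketch. The tile size, image extent, and function names below are illustrative assumptions; the paper does not specify its actual partitioning scheme:

```python
# Sketch: decompose a large mosaic extent into independent tiles that
# separate HPC workers could process; tile size is a hypothetical choice.

def decompose(width: int, height: int, tile: int):
    """Yield (x0, y0, x1, y1) tile bounds covering the full extent."""
    for y0 in range(0, height, tile):
        for x0 in range(0, width, tile):
            yield (x0, y0, min(x0 + tile, width), min(y0 + tile, height))

tiles = list(decompose(10_000, 8_000, 4_096))
# Each tile can be mosaicked independently, then the results integrated
# according to their spatial (physical) relationship.
```

The integration step would then stitch adjacent tiles back together along their shared borders, which is where the "logical and physical relationship" bookkeeping matters.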


2013 ◽  
Vol 2013 ◽  
pp. 1-6 ◽  
Author(s):  
Ying-Chih Lin ◽  
Chin-Sheng Yu ◽  
Yen-Jen Lin

Recent progress in high-throughput instrumentation has led to astonishing growth in both the volume and the complexity of biomedical data collected from various sources. Such planet-scale data pose serious challenges for storage and computing technologies. Cloud computing is a promising alternative because it jointly addresses large-scale storage and high-performance computing. This work briefly introduces data-intensive computing systems and summarizes existing cloud-based resources in bioinformatics. These developments and applications should help biomedical researchers make this vast amount of diverse data meaningful and usable.


2019 ◽  
Vol 27 (3) ◽  
pp. 263-267
Author(s):  
Alexander S. Ayriyan

In this note we discuss the impact of developments in parallel computing architecture and technology on the typical life cycle of a computational experiment. In particular, we argue that developing and installing high-performance computing systems is important in itself, regardless of specific scientific tasks, since the presence of cutting-edge HPC systems within an academic infrastructure opens wide possibilities and stimulates new research.


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Bingzheng Li ◽  
Jinchen Xu ◽  
Zijing Liu

With the development of high-performance computing and big-data applications, the scale of data transmitted, stored, and processed by high-performance computing clusters is growing explosively. Efficient compression of large-scale data, which reduces the space required for storage and transmission, is one of the keys to improving the performance of such cluster systems. In this paper, we present SW-LZMA, a parallel design and optimization of LZMA for the Sunway SW26010 heterogeneous many-core processor. Guided by the characteristics of the SW26010, we analyse the storage requirements, memory-access patterns, and hotspot functions of the LZMA algorithm, and we implement thread-level parallelism based on the Athread interface. Furthermore, we lay out the LDM address space at fine granularity to realize a DMA double-buffered cyclic sliding-window scheme, which further optimizes SW-LZMA. Experimental results show that, compared with the serial baseline LZMA, the parallel algorithm achieves a maximum speedup of 4.1x on the Silesia corpus benchmark and of 5.3x on a large-scale data set.
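The thread-level idea can be sketched in ordinary Python as a rough analogy: independent chunks of the input are compressed concurrently. This is not the SW-LZMA implementation itself (which targets Sunway compute elements via Athread and manages LDM buffers by hand); the chunk size and worker count here are illustrative assumptions:

```python
import lzma
from concurrent.futures import ThreadPoolExecutor

CHUNK = 1 << 20  # 1 MiB blocks; illustrative, not the SW-LZMA block size

def compress_parallel(data: bytes, workers: int = 4) -> list[bytes]:
    """Compress independent chunks concurrently (thread-level parallelism)."""
    chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lzma.compress, chunks))

def decompress_all(blocks: list[bytes]) -> bytes:
    """Restore the original stream by decompressing blocks in order."""
    return b"".join(lzma.decompress(b) for b in blocks)
```

The trade-off this exposes is the same one the paper's sliding-window scheme addresses: compressing chunks independently sacrifices match history across chunk boundaries, so chunk size must balance parallelism against compression ratio.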


Author(s):  
Renan Souza ◽  
Marta Mattoso ◽  
Patrick Valduriez

Large-scale workflows that execute on high-performance computing machines need to be dynamically steered by users: users analyze big data files, assess key performance indicators, fine-tune parameters, and evaluate the impact of that tuning while the workflows generate multiple files, which is challenging. If such interactions (called user steering actions) are not tracked, it may be impossible to understand the consequences of steering actions or to reproduce the results. This thesis proposes a generic approach to tracking user steering actions by characterizing, capturing, relating, and analyzing them, leveraging provenance data-management concepts. Experiments with real users show that the approach enables understanding of the impact of steering actions while incurring negligible overhead.
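The capture-and-relate step can be sketched minimally. All names and fields below are hypothetical; the thesis builds on full provenance data management rather than this toy log:

```python
import json
import time
from dataclasses import dataclass, asdict, field

@dataclass
class SteeringAction:
    """One user steering action, linked to the workflow task it touched."""
    parameter: str
    old_value: float
    new_value: float
    task_id: str  # provenance link to the affected workflow task
    timestamp: float = field(default_factory=time.time)

class SteeringLog:
    """Capture steering actions so their consequences stay traceable."""

    def __init__(self) -> None:
        self.actions: list[SteeringAction] = []

    def record(self, action: SteeringAction) -> None:
        self.actions.append(action)

    def history(self, parameter: str) -> list[SteeringAction]:
        """Relate/analyze: all tunings of one parameter, in capture order."""
        return [a for a in self.actions if a.parameter == parameter]

    def dump(self) -> str:
        """Serialize the captured actions, e.g. for a provenance database."""
        return json.dumps([asdict(a) for a in self.actions])
```

A real system would also relate each action to the data files and performance indicators observed at that moment, which is what makes reproducing a steered run possible.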


Author(s):  
Zhi Shang

Simulations of environmental flood problems usually face the scalability limits of large-scale parallel computing. A plain parallel approach based on pure MPI scales poorly because of the large number of domain partitions. Hybrid programming with MPI and OpenMP is therefore introduced to address the scalability issue; this technique plays to the strengths of both models. During parallel computing, OpenMP is employed for efficient fine-grain parallelism, while MPI performs the coarse-grain parallel domain partitioning and the associated data communication. In our tests, hybrid MPI/OpenMP programming was used to renovate the finite-element solvers in the BIEF library of Telemac. We found that the hybrid approach helps Telemac overcome its scalability issues.
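The coarse/fine split can be sketched as a toy analogy in Python: the outer partition stands in for MPI domain decomposition across ranks, and the inner per-subdomain work stands in for OpenMP fine-grain loops. The real Telemac/BIEF solvers use Fortran with MPI calls and OpenMP directives, not this code:

```python
from concurrent.futures import ThreadPoolExecutor

def partition(n_nodes: int, n_ranks: int) -> list[range]:
    """Coarse grain: split mesh nodes into contiguous subdomains,
    one per 'rank' (MPI's role in the hybrid scheme)."""
    step = -(-n_nodes // n_ranks)  # ceiling division
    return [range(i, min(i + step, n_nodes)) for i in range(0, n_nodes, step)]

def solve_subdomain(nodes: range) -> float:
    """Fine grain: loop over the subdomain's nodes (OpenMP's role);
    a trivial placeholder computation stands in for the FE solver."""
    return sum(float(i) for i in nodes)

def hybrid_solve(n_nodes: int, n_ranks: int) -> float:
    subdomains = partition(n_nodes, n_ranks)
    with ThreadPoolExecutor(max_workers=n_ranks) as pool:
        partials = pool.map(solve_subdomain, subdomains)
    return sum(partials)  # stands in for MPI reduction across ranks
```

The design point this illustrates is the one the abstract makes: fewer, larger MPI partitions (with fine-grain parallelism inside each) reduce the partitioning and communication overhead that limits pure-MPI scalability.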

