JURECA: Data Centric and Booster Modules implementing the Modular Supercomputing Architecture at Jülich Supercomputing Centre

Author(s):  
Philipp Thörnig

JURECA is a pre-exascale modular supercomputer operated by Jülich Supercomputing Centre at Forschungszentrum Jülich. The system combines a flexible Data Centric (DC) module, based on the Atos BullSequana XH2000 with a selection of best-of-its-kind components, and a scalability-focused Booster module, delivered by Intel and Dell Technologies based on the Xeon Phi many-core processor. With its novel architecture, it supports a wide variety of high-performance computing and data analytics workloads.
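The modular approach implies that a single application can span both modules at once, with each part running on the hardware that suits it best. As a minimal sketch of how such a job might be organized (illustrative C/MPI code, not the system's actual software stack, which is built around ParaStation MPI; the MODULE_ID variable and the 0/1 module convention are hypothetical), each rank can be assigned to a per-module communicator:

```c
/* Minimal sketch (not from the paper): splitting a job that spans two
 * modules of a modular supercomputer into per-module communicators.
 * MODULE_ID is a hypothetical stand-in for however the site
 * distinguishes Data Centric and Booster nodes. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    /* 0 = Data Centric module, 1 = Booster module (assumed convention) */
    const char *id = getenv("MODULE_ID");
    int module = id ? atoi(id) : 0;

    /* Ranks on the same module share a communicator; code paths tuned
     * for each module's hardware can then run side by side. */
    MPI_Comm module_comm;
    MPI_Comm_split(MPI_COMM_WORLD, module, world_rank, &module_comm);

    int module_rank;
    MPI_Comm_rank(module_comm, &module_rank);
    printf("world rank %d is rank %d on module %d\n",
           world_rank, module_rank, module);

    MPI_Comm_free(&module_comm);
    MPI_Finalize();
    return 0;
}
```

The DC-tuned and Booster-tuned parts of an application then communicate within their module through module_comm and across modules through MPI_COMM_WORLD.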

Author(s):  
Dorian Krause ◽  
Philipp Thörnig

JURECA is a petaflop-scale modular supercomputer operated by Jülich Supercomputing Centre at Forschungszentrum Jülich. The system combines a flexible Cluster module, based on T-Platforms V-Class blades with a balanced selection of best-of-its-kind components, with a scalability-focused Booster module, delivered by Intel and Dell EMC based on the Xeon Phi many-core processor. With its novel architecture, it supports a wide variety of high-performance computing and data analytics workloads.


Author(s):  
Dorian Krause ◽  
Philipp Thörnig

JURECA is a petaflop-scale, general-purpose supercomputer operated by Jülich Supercomputing Centre at Forschungszentrum Jülich. Utilizing a flexible cluster architecture based on T-Platforms V-Class blades and a balanced selection of best-of-its-kind components, the system supports a wide variety of high-performance computing and data analytics workloads and offers a low entrance barrier for new users.


Author(s):  
Herbert Cornelius

For decades, HPC has established itself as an essential tool for discoveries, innovations, and new insights in science, research and development, engineering, and business across a wide range of application areas in academia and industry. Today, high-performance computing is also widely recognized as being of strategic and economic value: HPC matters and is transforming industries. This article discusses new and emerging technologies being developed for all areas of HPC: compute/processing, memory and storage, interconnect fabric, I/O, and software, addressing ongoing challenges such as balanced architecture, energy-efficient high performance, density, reliability, sustainability, and, not least, ease of use. Of specific interest are the challenges and opportunities for the next frontier in HPC, envisioned around the 2020 timeframe: ExaFlops computing. We also outline the new and emerging area of High Performance Data Analytics (Big Data analytics using HPC) and discuss an emerging delivery mechanism for HPC: HPC in the cloud.


Author(s):  
Timothy Dykes ◽  
Claudio Gheller ◽  
Marzia Rivi ◽  
Mel Krokos

With the increasing size and complexity of data produced by large-scale numerical simulations, it is of primary importance for scientists to be able to exploit all available hardware in heterogeneous high-performance computing environments for increased throughput and efficiency. We focus on porting and optimizing Splotch, a scalable visualization algorithm, for the Xeon Phi, Intel's coprocessor based on the Many Integrated Core (MIC) architecture. We discuss the steps taken to offload data to the coprocessor, together with algorithmic modifications that aid faster processing on the many-core architecture and exploit the uniquely wide vector capabilities of the device, and present accompanying performance results using multiple Xeon Phi devices. Finally, we compare performance against results achieved with the Graphics Processing Unit (GPU) based implementation of Splotch.
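The offload step the abstract describes can be pictured with a short sketch. The code below is not the Splotch implementation; it uses OpenMP 4.x target and simd directives as one way to express coprocessor offload and wide vectorization (the original port may have used Intel's offload pragmas instead), and the kernel and array sizes are placeholders:

```c
/* Minimal sketch, not the actual Splotch code: offloading a
 * data-parallel kernel to a Xeon Phi coprocessor with OpenMP 4.x
 * target directives, exposing the wide 512-bit vector units via
 * `simd`. The kernel and sizes are illustrative only. */
#include <stdio.h>
#include <stdlib.h>

#define N (1 << 20)

int main(void)
{
    float *x = malloc(N * sizeof *x);
    float *y = malloc(N * sizeof *y);
    for (int i = 0; i < N; ++i) { x[i] = (float)i; y[i] = 0.0f; }

    /* Copy x to the coprocessor, run the loop there, copy y back. */
    #pragma omp target map(to: x[0:N]) map(from: y[0:N])
    #pragma omp parallel for simd
    for (int i = 0; i < N; ++i)
        y[i] = 2.0f * x[i] + 1.0f;   /* placeholder for the real kernel */

    printf("y[42] = %f\n", y[42]);
    free(x);
    free(y);
    return 0;
}
```

On a Xeon Phi, the simd clause is what lets the compiler fill the wide vector units; on a host without an offload device, the target region simply falls back to the CPU.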

