EUROfusion-theory and advanced simulation coordination (E-TASC): programme and the role of high performance computing

Author(s): Xavier L. Litaudon, Frank Jenko, D. Borba, Dmitriy V. Borodin, Bastiaan Braams, ...

Abstract The paper is a written summary of an overview oral presentation given at the 1st Spanish Fusion HPC Workshop, held online on 27 November 2020. Given that over the next few years ITER will move to its operation phase and the European DEMO design will be significantly advanced, the EUROfusion consortium has initiated a coordination effort in theory and advanced simulation to address some of the challenges of fusion research in Horizon Europe (2021-2027), the next EU Framework Programme for Research and Technological Development. This initiative is called E-TASC, which stands for EUROfusion-Theory and Advanced Simulation Coordination. The general and guiding principles of E-TASC are summarized in the paper. In addition, an overview of the scientific results obtained in the pilot phase (2019-2020) of E-TASC is provided, highlighting the importance of the required progress in computational methods and HPC techniques. In the initial phase, five pilot theory and simulation tasks were initiated: 1. Towards a validated predictive capability of the L-H transition and pedestal physics; 2. Electron runaway in tokamak disruptions in the presence of massive material injection; 3. Fast code for the calculation of neoclassical toroidal viscosity in stellarators and tokamaks; 4. Development of a neutral gas kinetics modular code; 5. European edge and boundary code for reactor-relevant devices. In this paper we report on recent progress made by each of these projects.

2019, Vol. 214, pp. 07012
Author(s): Nikita Balashov, Maxim Bashashin, Pavel Goncharov, Ruslan Kuchumov, Nikolay Kutovskiy, ...

Cloud computing has become a routine tool for scientists in many fields. The JINR cloud infrastructure provides JINR users with computational resources for various scientific calculations. To speed up the achievement of scientific results, the JINR cloud service for parallel applications has been developed. It consists of several components and implements a flexible, modular architecture that allows both additional applications and various types of resources to be used as computational back ends. An example of using the Cloud&HybriLIT resources in scientific computing is the study of superconducting processes in stacked long Josephson junctions (LJJ). LJJ systems have undergone intensive research because of the prospect of practical applications in nanoelectronics and quantum computing. In this contribution we summarize the experience of applying the Cloud&HybriLIT resources to high performance computing of physical characteristics of the LJJ system.
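To make the kind of LJJ calculation concrete, the following is a minimal sketch, not the actual Cloud&HybriLIT code: it integrates the perturbed sine-Gordon equation for the phase of a single long Josephson junction with explicit finite differences. All parameter values, the single-junction simplification, and the boundary treatment are assumptions for illustration; the studies described above treat coupled stacks of junctions and use their own numerical schemes.

```python
import numpy as np

# Perturbed sine-Gordon equation (assumed single-junction model):
#   phi_tt + alpha*phi_t - phi_xx + sin(phi) = gamma
# alpha: damping, gamma: normalized bias current.

L, nx = 10.0, 200          # junction length and grid points (assumed)
dx = L / (nx - 1)
dt = 0.5 * dx              # CFL-safe time step
alpha, gamma = 0.1, 0.2    # damping and bias current (assumed values)

phi = np.zeros(nx)         # phase at the current step
phi_old = np.zeros(nx)     # phase at the previous step

for step in range(20000):
    lap = np.zeros(nx)
    lap[1:-1] = (phi[2:] - 2.0 * phi[1:-1] + phi[:-2]) / dx**2
    lap[0], lap[-1] = lap[1], lap[-2]          # crude open-boundary approximation
    # leapfrog update including the damping term
    phi_new = (2.0 * phi - (1.0 - 0.5 * alpha * dt) * phi_old
               + dt**2 * (lap - np.sin(phi) + gamma)) / (1.0 + 0.5 * alpha * dt)
    phi_old, phi = phi, phi_new

# mean voltage is proportional to the time-averaged phase derivative
print("mean d(phi)/dt:", np.mean((phi - phi_old) / dt))
```

In practice such a kernel is repeated over many bias-current values and junction parameters to map out current-voltage characteristics, which is where cloud and HPC back ends of the kind described above become useful.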


2010, Vol. 85 (3-4), pp. 383-387
Author(s): P.I. Strand, R. Coelho, D. Coster, L.-G. Eriksson, F. Imbeaux, ...

2016, Vol. 2016, pp. 1-11
Author(s): Anton Umek, Anton Kos

This paper studies the main technological challenges of real-time biofeedback in sport. We identify communication and processing as the two main obstacles for high performance real-time biofeedback systems. We give special attention to the role of high performance computing, with some details on the possible use of the DataFlow computing paradigm. Motion tracking systems, in connection with biomechanical biofeedback, help to accelerate motor learning. Requirements for various parameters important in real-time biofeedback applications are discussed. The accuracy of an inertial sensor tracking system is tested against a high performance optical tracking system. Special focus is given to feedback loop delays. Real-time sensor signal acquisition and real-time processing challenges, in connection with biomechanical biofeedback, are presented. Although local processing consumes less energy than remote processing, many other limitations, most often insufficient local processing power, can make a distributed system the only viable option. Multiuser signal processing in a football match is recognised as an example of a high performance application that needs high-speed communication and high performance remote computing. DataFlow computing is found to be a good choice for real-time biofeedback systems with large data streams.
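The trade-off between local and remote processing comes down to a delay budget for the feedback loop. The sketch below illustrates that arithmetic with purely assumed figures (the stage delays, the 100 ms deadline, and the function name are illustrative, not measurements from the paper): remote processing adds transmission delay but can cut the processing stage enough to meet the real-time deadline when local processing power is insufficient.

```python
# Minimal sketch (assumed numbers): feedback-loop delay budget for a
# real-time biofeedback system, comparing local and remote processing.

def loop_delay_ms(acquisition, transmission, processing, feedback):
    """Total feedback-loop delay as the sum of its stages (milliseconds)."""
    return acquisition + transmission + processing + feedback

deadline_ms = 100.0  # assumed tolerable delay for "real-time" feedback

local = loop_delay_ms(acquisition=5.0, transmission=0.0,
                      processing=120.0, feedback=10.0)  # weak local processor
remote = loop_delay_ms(acquisition=5.0, transmission=20.0,
                       processing=5.0, feedback=10.0)   # HPC back end over wireless

for name, delay in (("local", local), ("remote", remote)):
    verdict = "meets" if delay <= deadline_ms else "misses"
    print(f"{name}: {delay:.0f} ms -> {verdict} the {deadline_ms:.0f} ms deadline")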


2021
Author(s): Mohammad Reza Heidari, Zhaoyang Song, Enrico Degregori, Jörg Behrens, Hendryk Bockelmann

Abstract. The scalability of the atmospheric model ECHAM6 at low resolution, as used in palaeoclimate simulations, suffers from the limited number of grid points. As a consequence, the potential of current high performance computing architectures cannot be exploited at full scale for such experiments, particularly within the available domain-decomposition approach. Radiation calculations are a relatively expensive part of the atmospheric simulations, taking up to approximately 50 % of the total runtime, and even this level of cost is only achieved by calculating the radiative transfer once every two simulated hours. In response, we propose to extend the available concurrency within the model by running the radiation component in parallel with the other atmospheric processes to improve scalability and performance. This paper introduces the concurrent radiation scheme in ECHAM6, presents a thorough analysis of its impact on the performance of the model, and evaluates the scientific results of such simulations. Our experiments show that ECHAM6 can achieve a speedup of over 1.9x using the concurrent radiation scheme. This empirical study serves as a successful example that can stimulate research on other concurrent components in atmospheric modelling whenever scalability becomes a challenge.
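The concurrency pattern behind this result can be illustrated with a toy sketch, shown below under stated assumptions: the real scheme runs radiation on a separate set of MPI processes inside ECHAM6, whereas here a thread pool and placeholder functions (the names and timings are invented for illustration) stand in for that machinery. The key idea is the same: radiation for the current coupling interval is computed in the background while the rest of the time step reuses the fluxes from the previous interval.

```python
# Illustrative sketch only: overlapping a slow "radiation" component with the
# rest of the time step, in the spirit of a concurrent radiation scheme.

import time
from concurrent.futures import ThreadPoolExecutor

def radiation(state):
    time.sleep(0.5)            # stands in for the expensive radiative transfer
    return state * 0.99        # toy "radiative fluxes"

def dynamics_and_physics(state, fluxes):
    time.sleep(0.5)            # stands in for the remaining atmospheric processes
    return state + 0.01 * fluxes

state, fluxes = 1.0, 1.0
with ThreadPoolExecutor(max_workers=1) as pool:
    t0 = time.perf_counter()
    for step in range(4):
        # start radiation for this interval in the background ...
        future = pool.submit(radiation, state)
        # ... while the rest of the step uses the previous interval's fluxes
        state = dynamics_and_physics(state, fluxes)
        fluxes = future.result()
    print(f"4 steps in {time.perf_counter() - t0:.1f} s "
          "(~2 s concurrent vs ~4 s sequential)")
```

With radiation accounting for roughly half of the runtime, fully hiding it behind the other processes bounds the achievable speedup near 2x, which is consistent with the reported speedup of over 1.9x.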

