Integration and Performance of New Technologies in the CMS Simulation

2020 · Vol. 245 · pp. 02020
Author(s):  
Kevin Pedro

The HL-LHC and the corresponding detector upgrades for the CMS experiment will present extreme challenges for the full simulation. In particular, increased precision in models of physics processes may be required for accurate reproduction of particle shower measurements from the upcoming High Granularity Calorimeter. The CPU performance impacts of several proposed physics models will be discussed. There are several ongoing research and development efforts to make efficient use of new computing architectures and high performance computing systems for simulation. The integration of these new R&D products in the CMS software framework and corresponding CPU performance improvements will be presented.

2017 · Vol. 108 · pp. 495-504
Author(s):  
Jack Dongarra ◽  
Sven Hammarling ◽  
Nicholas J. Higham ◽  
Samuel D. Relton ◽  
Pedro Valero-Lara ◽  
...  

Author(s):  
Masahiro Nakao ◽  
Hitoshi Murai ◽  
Hidetoshi Iwashita ◽  
Taisuke Boku ◽  
Mitsuhisa Sato

To improve productivity when developing parallel applications on high-performance computing systems, the XcalableMP PGAS language has been proposed. XcalableMP supports both typical parallelization under the “global-view memory model”, which uses directives, and flexible parallelization under the “local-view memory model”, which uses coarray features. The goal of the present paper is to clarify XcalableMP’s productivity and performance. To do so, we implement and evaluate the High Performance Computing Challenge benchmarks, namely EP STREAM Triad, High Performance Linpack, Global Fast Fourier Transform, and RandomAccess, on the K computer using up to 16,384 compute nodes and on a generic cluster system using up to 128 compute nodes. We found that we could implement the benchmarks more easily using XcalableMP than using MPI. Moreover, most of the performance results using XcalableMP were almost the same as those using MPI.
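
To give a feel for the global-view memory model the abstract describes, here is a minimal sketch, assuming XcalableMP C, of a block-distributed STREAM-Triad-style loop: directives declare a node set, distribute a template over it, and align the arrays with the template, so the serial loop body is left untouched. The array size, node set, and variable names are illustrative assumptions, not code from the paper.

    /* Hedged sketch of XcalableMP's global-view model (illustrative,
     * not the paper's benchmark code). N and all names are assumed. */
    #include <stdio.h>
    #define N 1024

    #pragma xmp nodes p(*)                  /* run on however many nodes are given */
    #pragma xmp template t(0:N-1)           /* index space to distribute */
    #pragma xmp distribute t(block) onto p  /* block distribution over the nodes */

    double a[N], b[N], c[N];
    #pragma xmp align a[i] with t(i)        /* each node owns its block of a, b, c */
    #pragma xmp align b[i] with t(i)
    #pragma xmp align c[i] with t(i)

    int main(void)
    {
        double scalar = 3.0;

        /* Each node initializes only the elements it owns. */
    #pragma xmp loop on t(i)
        for (int i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

        /* Triad: the directive partitions the iteration space, so the
         * loop body is identical to the serial version. */
    #pragma xmp loop on t(i)
        for (int i = 0; i < N; i++)
            a[i] = b[i] + scalar * c[i];

        return 0;
    }

Under the local-view model, by contrast, the data exchange would be written explicitly with coarray assignments rather than hidden behind loop directives.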


MRS Bulletin · 1997 · Vol. 22 (10) · pp. 5-6
Author(s):  
Horst D. Simon

Recent events in the high-performance computing industry have raised concern among scientists and the general public about a crisis, or a lack of leadership, in the field. That concern is understandable considering the industry's history from 1993 to 1996. Cray Research, the historic leader in supercomputing technology, was unable to survive financially as an independent company and was acquired by Silicon Graphics. Two ambitious new companies that introduced new technologies in the late 1980s and early 1990s, Thinking Machines and Kendall Square Research, were commercial failures and went out of business. And Intel, which introduced its Paragon supercomputer in 1994, discontinued production only two years later.

During the same time frame, scientists who had finished the laborious task of writing scientific codes to run on vector parallel supercomputers learned that those codes would have to be rewritten if they were to run on the next-generation, highly parallel architecture. Scientists who are not yet involved in high-performance computing are understandably hesitant about committing their time and energy to such an apparently unstable enterprise.

However, beneath the commercial chaos of the last several years, a technological revolution has been occurring. The good news is that the revolution is over, leading to five to ten years of predictable stability, steady improvements in system performance, and increased productivity for scientific applications. It is time for scientists who were sitting on the fence to jump in and reap the benefits of the new technology.


2021 · Vol. 2 (1) · pp. 46-62
Author(s):  
Santiago Iglesias-Baniela ◽  
Juan Vinagre-Ríos ◽  
José M. Pérez-Canosa

It is a well-known fact that the 1989 Exxon Valdez disaster caused the escort towing of laden tankers to become compulsory in many coastal areas of the world. In order to implement a new type of escort towing, specially designed for very adverse weather conditions, considerable changes in the hull form of escort tugs had to be made to improve their stability and performance. Since traditional winch and rope technologies were only effective in calm waters, tugs also had to be fitted with new devices. These improvements allowed the remodeled tugs to counterbalance the strong forces generated by maneuvers in open waters. The aim of this paper is to perform a comprehensive literature review of the new high-performance automatic dynamic winches. Furthermore, a thorough analysis is carried out of the best available towline technologies, which are essential to properly exploit the new winches. This review shows how the escort towing industry has faced this technological challenge.


Author(s):  
Nikolay Kondratyuk ◽  
Vsevolod Nikolskiy ◽  
Daniil Pavlov ◽  
Vladimir Stegailov

Classical molecular dynamics (MD) calculations account for a significant share of the utilization time of high-performance computing systems. The efficiency of such calculations rests on an interplay of software and hardware, both of which are now moving toward hybrid GPU-based technologies. Several well-developed open-source MD codes focused on GPUs differ in both their data management capabilities and their performance. In this work, we analyze the performance of the LAMMPS, GROMACS and OpenMM MD packages with different GPU backends on Nvidia Volta and AMD Vega20 GPUs. We consider the efficiency of solving two identical MD models (generic for materials science and biomolecular studies, respectively) using different software and hardware combinations. We also describe our experience porting the CUDA backend of LAMMPS to ROCm HIP, which shows considerable benefits for AMD GPUs compared with the OpenCL backend.
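
To give a feel for what such a port involves, here is a minimal sketch, not the authors' actual LAMMPS backend, of moving a toy CUDA kernel to HIP: device code and the kernel-launch syntax carry over unchanged, while host API calls are renamed from cuda* to hip*. The kernel, sizes, and names are illustrative assumptions.

    /* Hedged sketch of a CUDA-to-HIP port (illustrative only, not the
     * authors' code). Compile with hipcc; built-ins and the <<<...>>>
     * launch syntax are the same as in CUDA. */
    #include <hip/hip_runtime.h>
    #include <stdio.h>
    #include <stdlib.h>

    __global__ void scale(double *x, double s, int n)
    {
        /* Same thread built-ins as CUDA. */
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) x[i] *= s;
    }

    int main(void)
    {
        const int n = 1 << 20;
        size_t bytes = n * sizeof(double);
        double *h = (double *)malloc(bytes);
        for (int i = 0; i < n; i++) h[i] = 1.0;

        double *d;
        hipMalloc((void **)&d, bytes);                 /* was cudaMalloc */
        hipMemcpy(d, h, bytes, hipMemcpyHostToDevice); /* was cudaMemcpy */

        scale<<<(n + 255) / 256, 256>>>(d, 2.0, n);    /* launch unchanged */

        hipMemcpy(h, d, bytes, hipMemcpyDeviceToHost);
        hipFree(d);                                    /* was cudaFree */
        printf("x[0] = %.1f\n", h[0]);                 /* prints 2.0 */
        free(h);
        return 0;
    }

AMD's HIPIFY tools automate most of these renames, which is why this kind of port can be largely mechanical for host code, leaving kernel-level tuning as the main remaining effort.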

