Computational methods of continuum mechanics for exaflop computer systems

Author(s):  
Алексей Владимирович Снытников ◽  
Галина Геннадьевна Лазарева

The article deals with the issues that arise when exascale computing is used to solve applied problems. Based on a review of work in this area, the most pressing issues related to exascale calculations are highlighted. Particular attention is paid to software features, algorithms, and numerical methods for exaflop supercomputers. The requirements for such programs and algorithms are formulated. Based on a review of existing approaches to achieving high performance, the main fundamentally different and non-overlapping directions for improving the performance of calculations are identified. The question of the need for criteria of applicability of computational algorithms to exaflop supercomputers is raised. Currently, the only criterion in use demands that there be no significant drop in efficiency in the transition from a petaflop calculation to a ten-petaflop calculation. When such calculations are not feasible, simulation modelling can be carried out instead. Examples are given of developing new algorithms and numerical methods, and of adapting existing ones, for solving problems of continuum mechanics. The fundamental difference between algorithms specially designed for exascale machines and algorithms adapted for exaflops is shown. The analysis of publications shows that, in the field of continuum mechanics, the prevailing approach is not the development of new numerical methods and algorithms but the adaptation of existing ones to the architecture of exaflop supercomputers. The most popular applications are analysed. The most relevant application of exaflop supercomputers in this area is computational fluid dynamics, because hydrodynamic applications form a rich and diverse field. The number of publications indicates that high-performance computing is now both accessible and in demand.
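The petaflop-to-ten-petaflop efficiency criterion mentioned in the abstract can be made concrete. Below is a minimal sketch in Python of such a weak-scaling check; the runtimes and the 0.8 acceptance threshold are illustrative assumptions, not values from the article.

```python
# Hedged sketch of the scaling criterion described above.
# The timings below are hypothetical placeholders, not
# measurements from any of the cited works.

def weak_scaling_efficiency(t_base: float, t_scaled: float) -> float:
    """Weak-scaling efficiency: ideal is 1.0 (runtime stays flat
    as problem size and core count grow proportionally)."""
    return t_base / t_scaled

# Hypothetical runtimes (seconds) for the same per-core workload
# on a baseline "petaflop" partition and a 10x larger partition.
t_peta = 120.0
t_ten_peta = 138.0

eff = weak_scaling_efficiency(t_peta, t_ten_peta)
print(f"weak-scaling efficiency: {eff:.2f}")

# The criterion: no *significant* drop in efficiency across the
# 10x step; the 0.8 threshold here is an illustrative choice.
if eff >= 0.8:
    print("criterion satisfied: candidate algorithm for exascale")
else:
    print("criterion violated: efficiency drops too sharply")
```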

2014 ◽  
Vol 2014 ◽  
pp. 1-13 ◽  
Author(s):  
Florin Pop

Modern physics is based on both theoretical analysis and experimental validation. Complex scenarios such as subatomic dimensions, high energies, and low absolute temperatures are frontiers for many theoretical models. Simulation with stable numerical methods represents an excellent instrument for high-accuracy analysis, experimental validation, and visualization. High-performance computing support offers the possibility of running simulations at large scale, in parallel, but the volume of data generated by these experiments creates a new challenge for Big Data Science. This paper presents existing computational methods for high energy physics (HEP), analyzed from two perspectives: numerical methods and high-performance computing. The computational methods presented are Monte Carlo methods and simulations of HEP processes, Markovian Monte Carlo, unfolding methods in particle physics, kernel estimation in HEP, and Random Matrix Theory used in the analysis of particle spectra. All of these methods produce data-intensive applications, which introduce new challenges and requirements for ICT systems architecture, programming paradigms, and storage capabilities.
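As an illustration of the first family of methods the abstract lists, here is a minimal plain Monte Carlo integration sketch in Python; the Breit-Wigner-like integrand is an arbitrary stand-in, not a quantity taken from the paper.

```python
# Minimal Monte Carlo sketch in the spirit of the methods surveyed
# above; the integrand is an illustrative stand-in.
import math
import random

def mc_integrate(f, a, b, n):
    """Plain Monte Carlo estimate of the integral of f on [a, b],
    returning (estimate, standard error)."""
    samples = [f(random.uniform(a, b)) for _ in range(n)]
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / (n - 1)
    width = b - a
    return width * mean, width * math.sqrt(var / n)

# Example: a smooth, peaked integrand (Breit-Wigner-like shape).
f = lambda x: 1.0 / ((x - 1.0) ** 2 + 0.01)

estimate, err = mc_integrate(f, 0.0, 2.0, 100_000)
print(f"integral ~ {estimate:.3f} +/- {err:.3f}")
```

The standard error shrinks only as 1/sqrt(n), which is why HEP analyses generate the enormous sample volumes that drive the Big Data challenges described above.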


Author(s):  
G. Balarac ◽  
G. -H. Cottet ◽  
J. -M. Etancelin ◽  
J. -B. Lagaert ◽  
F. Perignon ◽  
...  

Geosciences ◽  
2021 ◽  
Vol 11 (2) ◽  
pp. 72
Author(s):  
Muhammad Rizwan Riaz ◽  
Hiroki Motoyama ◽  
Muneo Hori

Recent achievements of research on soil-structure interaction (SSI) are reviewed, with a main focus on numerical analysis. The review is based on continuum mechanics theory and the use of high-performance computing (HPC), and it clarifies the characteristics of a wide range of SSI treatments, from simplified models to high-fidelity models. It is emphasized that all of these treatments can be regarded as mathematical approximations to a physical continuum mechanics problem of a soil-structure system. The use of HPC is unavoidable when a solution of higher accuracy and finer resolution is needed. An example of using HPC for the analysis of SSI is presented.
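To make the "simplified model" end of the spectrum concrete, here is a minimal sketch in Python of a lumped-parameter SSI model: a single mass on a soil spring-dashpot driven by ground motion. All parameter values are illustrative assumptions, not taken from the reviewed work.

```python
# Hedged sketch of a lumped-parameter soil-structure model:
# m*u'' + c*u' + k*u = -m*a_g(t), with the soil idealized as a
# single spring-dashpot pair. Parameters are illustrative only.
import math

m = 1.0e5        # structural mass [kg]
k_soil = 4.0e7   # lumped soil spring stiffness [N/m]
c_soil = 2.0e5   # lumped soil dashpot coefficient [N*s/m]

def ground_accel(t):
    """Toy harmonic ground motion [m/s^2]."""
    return 0.5 * math.sin(2.0 * math.pi * 2.0 * t)

# Semi-implicit Euler time stepping of the equation of motion.
dt, t_end = 1.0e-3, 5.0
u, v = 0.0, 0.0
for step in range(int(t_end / dt)):
    t = step * dt
    a = (-m * ground_accel(t) - c_soil * v - k_soil * u) / m
    v += a * dt
    u += v * dt

print(f"relative displacement at t = {t_end:.0f} s: {u:.6f} m")
```

High-fidelity treatments replace the two lumped soil constants with a discretized soil continuum, which is precisely where HPC becomes unavoidable.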


Author(s):  
Steven A. Niederer ◽  
Eric Kerfoot ◽  
Alan P. Benson ◽  
Miguel O. Bernabeu ◽  
Olivier Bernus ◽  
...  

Ongoing developments in cardiac modelling have resulted, in particular, in the development of advanced and increasingly complex computational frameworks for simulating cardiac tissue electrophysiology. The goal of these simulations is often to represent the detailed physiology and pathologies of the heart using codes that exploit the computational potential of high-performance computing architectures. These developments have rapidly progressed the simulation capacity of cardiac virtual physiological human style models; however, they have also made it increasingly challenging to verify that a given code provides a faithful representation of the purported governing equations and corresponding solution techniques. This study provides the first cardiac tissue electrophysiology simulation benchmark to allow these codes to be verified. The benchmark was successfully evaluated on 11 simulation platforms to generate a consensus gold-standard converged solution. The benchmark definition in combination with the gold-standard solution can now be used to verify new simulation codes and numerical methods in the future.
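A benchmark of this kind enables a simple verification workflow: compare a candidate code's output at shared sample points against the consensus gold-standard solution. The sketch below, in Python, uses illustrative placeholder activation times and an assumed tolerance, not values from the study.

```python
# Hedged sketch of the verification step such a benchmark enables.
# The activation times and the tolerance are placeholders, not
# numbers from the study.

gold_standard = [1.00, 12.5, 24.8, 37.2, 49.9]   # reference times [ms]
candidate     = [1.02, 12.6, 25.1, 36.8, 50.3]   # candidate code [ms]

def relative_l2_error(ref, test):
    """Relative L2 error of test against the reference solution."""
    num = sum((r - t) ** 2 for r, t in zip(ref, test)) ** 0.5
    den = sum(r ** 2 for r in ref) ** 0.5
    return num / den

err = relative_l2_error(gold_standard, candidate)
tolerance = 0.05  # illustrative acceptance threshold
print(f"relative L2 error: {err:.4f}")
print("verified" if err < tolerance else "outside tolerance")
```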


Author(s):  
Mark H. Ellisman

The increased availability of High Performance Computing and Communications (HPCC) offers scientists and students the potential for effective remote interactive use of centralized, specialized, and expensive instrumentation and computers. Examples of instruments that may be usefully controlled from a distance are increasing. Some in current use include telescopes, networks of remote geophysical sensing devices, and, more recently, the intermediate high voltage electron microscope developed at the San Diego Microscopy and Imaging Resource (SDMIR) in La Jolla. In this presentation the imaging capabilities of a specially designed JEOL 4000EX IVEM will be described. This instrument was developed mainly to facilitate the extraction of 3-dimensional information from thick sections. In addition, progress will be described on a project now underway to develop a more advanced version of the Telemicroscopy software we previously demonstrated as a tool for providing remote access to this IVEM (Mercurio et al., 1992; Fan et al., 1992).


MRS Bulletin ◽  
1997 ◽  
Vol 22 (10) ◽  
pp. 5-6
Author(s):  
Horst D. Simon

Recent events in the high-performance computing industry have concerned scientists and the general public regarding a crisis or a lack of leadership in the field. That concern is understandable considering the industry's history from 1993 to 1996. Cray Research, the historic leader in supercomputing technology, was unable to survive financially as an independent company and was acquired by Silicon Graphics. Two ambitious new companies that introduced new technologies in the late 1980s and early 1990s—Thinking Machines and Kendall Square Research—were commercial failures and went out of business. And Intel, which introduced its Paragon supercomputer in 1994, discontinued production only two years later.

During the same time frame, scientists who had finished the laborious task of writing scientific codes to run on vector parallel supercomputers learned that those codes would have to be rewritten if they were to run on the next-generation, highly parallel architecture. Scientists who are not yet involved in high-performance computing are understandably hesitant about committing their time and energy to such an apparently unstable enterprise.

However, beneath the commercial chaos of the last several years, a technological revolution has been occurring. The good news is that the revolution is over, leading to five to ten years of predictable stability, steady improvements in system performance, and increased productivity for scientific applications. It is time for scientists who were sitting on the fence to jump in and reap the benefits of the new technology.

