High-performance computing simulations of self-gravity in astronomical agglomerates

SIMULATION ◽  
2021 ◽  
Article 003754972199876
Author(s):  
Néstor Rocchetti ◽  
Sergio Nesmachnow ◽  
Gonzalo Tancredi

This article describes advances in the design, implementation, and evaluation of efficient algorithms for self-gravity simulations in astronomical agglomerates. Three algorithms are presented and evaluated: the occupied cells method and two variations of the Barnes–Hut method, one using an octal tree and one using a binary tree. Two scenarios are considered in the evaluation: two agglomerates orbiting each other and a collapsing cube. The results show that the proposed octal tree Barnes–Hut method speeds up the self-gravity calculation by up to 100 times with respect to the occupied cells method while maintaining good numerical accuracy. The proposed algorithms are thus efficient and accurate methods for self-gravity simulations in astronomical agglomerates.
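For readers unfamiliar with the Barnes–Hut approach, the sketch below shows a minimal octree-based force evaluation in Python. It is not the implementation described in the article; the constants (opening angle THETA, softening length EPS), units, and class structure are assumptions made purely for illustration. The key idea is that a distant cell is replaced by its total mass placed at its center of mass whenever the cell subtends a small enough angle, which reduces the cost of the force calculation from O(N²) toward O(N log N).

```python
# Minimal Barnes-Hut sketch in 3D (illustrative only, not the paper's code).
# Each tree node stores its total mass and center of mass; a distant node is
# approximated as a point mass when it subtends a small enough angle (THETA).

import numpy as np

G = 6.674e-11      # gravitational constant in SI units (assumed for the sketch)
THETA = 0.5        # opening-angle criterion, a commonly used default
EPS = 1e-3         # softening length to avoid singularities (assumed value)

class Node:
    def __init__(self, center, half_size):
        self.center = np.asarray(center, dtype=float)  # geometric center of the cell
        self.half_size = half_size                     # half the cell edge length
        self.mass = 0.0
        self.com = np.zeros(3)                         # center of mass of the cell
        self.children = None                           # 8 sub-cells once split
        self.body = None                               # single particle (pos, mass)

    def insert(self, pos, m):
        pos = np.asarray(pos, dtype=float)
        if self.children is None and self.body is None:
            self.body = (pos, m)                       # empty leaf: store the particle
        else:
            if self.children is None:
                self._split()                          # leaf already occupied: subdivide
            self._child_for(pos).insert(pos, m)
        # update aggregate mass and center of mass incrementally
        self.com = (self.com * self.mass + pos * m) / (self.mass + m)
        self.mass += m

    def _split(self):
        h = self.half_size / 2.0
        self.children = [Node(self.center + h * np.array([sx, sy, sz]), h)
                         for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)]
        old_pos, old_m = self.body
        self.body = None
        self._child_for(old_pos).insert(old_pos, old_m)

    def _child_for(self, pos):
        idx = (int(pos[0] > self.center[0]) * 4 +
               int(pos[1] > self.center[1]) * 2 +
               int(pos[2] > self.center[2]))
        return self.children[idx]

    def accel_on(self, pos):
        """Gravitational acceleration at `pos` due to all mass in this cell."""
        if self.mass == 0.0:
            return np.zeros(3)
        d = self.com - pos
        r = np.linalg.norm(d)
        if r == 0.0:
            return np.zeros(3)
        # Treat the cell as a point mass if it is far enough; otherwise open it.
        if self.children is None or (2 * self.half_size) / r < THETA:
            return G * self.mass * d / (r**2 + EPS**2)**1.5
        return sum((c.accel_on(pos) for c in self.children), np.zeros(3))

# Usage sketch: build a root cell enclosing all particles, insert them, query.
# root = Node(center=[0.0, 0.0, 0.0], half_size=10.0)
# for p, m in particles:
#     root.insert(p, m)
# a = root.accel_on(np.array([1.0, 2.0, 3.0]))
```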

Computer ◽  
2014 ◽  
Vol 47 (9) ◽  
pp. 34-39 ◽  
Author(s):  
Daniel Frascarelli ◽  
Sergio Nesmachnow ◽  
Gonzalo Tancredi

10.29007/tfls ◽  
2018 ◽  
Author(s):  
Farah Benmouhoub ◽  
Nasrine Damouche ◽  
Matthieu Martel

In high performance computing, nearly all implementations and published experiments use floating-point arithmetic. However, since floating-point numbers are finite approximations of real numbers, accumulated round-off errors can lead to hazards whose severity depends on how critical the application is. To deal with this issue, we have developed a tool that improves the numerical accuracy of computations by automatically transforming programs in a source-to-source manner. Our transformation relies on static analysis by abstract interpretation and operates on pieces of code with assignments, conditionals, loops, functions, and arrays. In this article, we apply our techniques to optimize a parallel program representative of the high performance computing domain. Parallelism introduces new numerical accuracy problems because it changes the order in which operations are evaluated. We are also interested in studying the trade-off between execution time and numerical accuracy.
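As a concrete illustration of how the order of operations changes a floating-point result (this is not the authors' source-to-source tool, only a self-contained Python example with assumed test data), consider summing the same values in different orders and with compensated summation:

```python
# Order-dependent round-off: summing identical values in different orders gives
# slightly different floating-point results, exactly the kind of discrepancy
# that appears when a reduction is parallelized and operands are regrouped.

import random

random.seed(0)
# Values spanning many orders of magnitude make the effect easy to observe.
values = [random.uniform(-1.0, 1.0) * 10.0 ** random.randint(-8, 8)
          for _ in range(100_000)]

def naive_sum(xs):
    s = 0.0
    for x in xs:
        s += x
    return s

def kahan_sum(xs):
    # Compensated summation: carry the low-order bits lost at each step.
    s, c = 0.0, 0.0
    for x in xs:
        y = x - c
        t = s + y
        c = (t - s) - y
        s = t
    return s

sequential  = naive_sum(values)          # one possible evaluation order
reordered   = naive_sum(sorted(values))  # another order, as a parallel reduction might produce
compensated = kahan_sum(values)

print(sequential, reordered, compensated)
# The first two sums typically differ in their last digits even though the exact
# real-valued sum is the same; compensated summation narrows that gap at the
# cost of extra operations, illustrating the accuracy/run-time trade-off.
```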


MRS Bulletin ◽  
1997 ◽  
Vol 22 (10) ◽  
pp. 5-6
Author(s):  
Horst D. Simon

Recent events in the high-performance computing industry have led scientists and the general public to worry about a crisis, or a lack of leadership, in the field. That concern is understandable considering the industry's history from 1993 to 1996. Cray Research, the historic leader in supercomputing technology, was unable to survive financially as an independent company and was acquired by Silicon Graphics. Two ambitious new companies that introduced new technologies in the late 1980s and early 1990s, Thinking Machines and Kendall Square Research, were commercial failures and went out of business. And Intel, which introduced its Paragon supercomputer in 1994, discontinued production only two years later.

During the same time frame, scientists who had finished the laborious task of writing scientific codes to run on vector parallel supercomputers learned that those codes would have to be rewritten if they were to run on the next-generation, highly parallel architecture. Scientists who are not yet involved in high-performance computing are understandably hesitant about committing their time and energy to such an apparently unstable enterprise.

However, beneath the commercial chaos of the last several years, a technological revolution has been occurring. The good news is that the revolution is over, leading to five to ten years of predictable stability, steady improvements in system performance, and increased productivity for scientific applications. It is time for scientists who were sitting on the fence to jump in and reap the benefits of the new technology.

