Scientific Simulations
Recently Published Documents

TOTAL DOCUMENTS: 93 (FIVE YEARS: 21)
H-INDEX: 11 (FIVE YEARS: 1)

2022 ◽  
Vol 7 (69) ◽  
pp. 3882
Author(s):  
Sebastian Stammler ◽  
Tilman Birnstiel

Author(s):  
Muhammad Firmansyah Kasim ◽  
D. Watson-Parris ◽  
L. Deaconu ◽  
S. Oliver ◽  
P. Hatfield ◽  
...  

Abstract: Computer simulations are invaluable tools for scientific discovery. However, accurate simulations are often slow to execute, which limits their applicability to extensive parameter exploration, large-scale data analysis, and uncertainty quantification. A promising route to accelerating simulations is to build fast emulators with machine learning, but this requires large training datasets, which can be prohibitively expensive to obtain with slow simulations. Here we present a method based on neural architecture search to build accurate emulators even with a limited number of training samples. The method successfully emulates simulations in 10 scientific cases including astrophysics, climate science, biogeochemistry, high energy density physics, fusion energy, and seismology, using the same super-architecture, algorithm, and hyperparameters. Our approach also inherently provides emulator uncertainty estimation, adding further confidence in their use. We anticipate this work will accelerate research involving expensive simulations, allow more extensive parameter exploration, and enable new, previously infeasible computational discovery.
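
The paper's neural-architecture-search method is not reproduced here, but the core idea of emulation with built-in uncertainty can be illustrated with a much simpler stand-in: a deep ensemble of small fixed networks whose disagreement serves as an uncertainty estimate. The sketch below assumes scikit-learn is available, and the "expensive" simulator is a toy placeholder.

```python
# Minimal sketch of a simulation emulator with ensemble-based uncertainty.
# NOTE: this is NOT the paper's neural-architecture-search method; it
# substitutes a simple deep ensemble of fixed MLPs to illustrate the idea
# of emulating a slow simulator and attaching an uncertainty estimate.
import numpy as np
from sklearn.neural_network import MLPRegressor

def expensive_simulation(x):
    # Stand-in for a slow simulator: a smooth nonlinear response.
    return np.sin(3 * x[:, 0]) * np.exp(-x[:, 1] ** 2)

rng = np.random.default_rng(0)
X_train = rng.uniform(-1, 1, size=(200, 2))   # deliberately small training set
y_train = expensive_simulation(X_train)

# Train an ensemble of emulators; disagreement between members serves
# as a (crude) epistemic uncertainty estimate.
ensemble = []
for seed in range(5):
    net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                       random_state=seed)
    net.fit(X_train, y_train)
    ensemble.append(net)

X_new = rng.uniform(-1, 1, size=(10, 2))
preds = np.stack([net.predict(X_new) for net in ensemble])
mean, std = preds.mean(axis=0), preds.std(axis=0)   # prediction + uncertainty
print(mean[:3], std[:3])
```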


Scientific simulations demand ever-increasing computing and storage power, so parallel and distributed computing are strongly recommended to handle them. Despite its power for solving complex simulations, the parallel approach is more difficult and error-prone: several factors create significant programming challenges and reduce performance, and these must be carefully managed to achieve a high degree of parallelism. This paper aims, on the one hand, to evaluate the performance of parallelism based on a graph-partitioning approach and, on the other hand, to propose a performance analysis methodology that helps scientists investigate the impact of inter-processor communication on the performance of an application. First, a multilevel k-way algorithm is used to compute a distribution of the simulation's calculations and associated data. Second, a performance evaluation strategy is used to investigate the efficiency of the parallelism. Finally, we examine the experimental results and discuss the perspectives of this study.
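
As a concrete illustration of the first step, the sketch below partitions a toy mesh-connectivity graph with a multilevel k-way algorithm. It assumes the pymetis bindings to METIS are installed; the six-cell adjacency list is a made-up example, not data from the paper.

```python
# Minimal sketch of distributing simulation work via multilevel k-way
# graph partitioning, assuming the pymetis bindings to METIS are installed.
import pymetis

# Toy mesh connectivity: adjacency_list[i] lists the neighbours of cell i.
adjacency_list = [
    [1, 3],        # cell 0
    [0, 2, 4],     # cell 1
    [1, 5],        # cell 2
    [0, 4],        # cell 3
    [1, 3, 5],     # cell 4
    [2, 4],        # cell 5
]

# Partition the cells among 2 processors, minimising the edge cut,
# i.e. the inter-processor communication volume.
n_cuts, membership = pymetis.part_graph(2, adjacency=adjacency_list)
print("edges cut:", n_cuts)
print("cell -> processor:", membership)
```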


Algorithms ◽  
2021 ◽  
Vol 14 (5) ◽  
pp. 154
Author(s):  
Marcus Walldén ◽  
Masao Okita ◽  
Fumihiko Ino ◽  
Dimitris Drikakis ◽  
Ioannis Kokkinakis

Increasing processing capabilities and input/output constraints of supercomputers have increased the use of co-processing approaches, i.e., visualizing and analyzing data sets of simulations on the fly. We present a method that evaluates the importance of different regions of simulation data and a data-driven approach that uses the proposed method to accelerate in-transit co-processing of large-scale simulations. We use the importance metrics to simultaneously employ multiple compression methods on different data regions to accelerate the in-transit co-processing. Our approach strives to adaptively compress data on the fly and uses load balancing to counteract memory imbalances. We demonstrate the method's efficiency through a fluid mechanics application, a Richtmyer–Meshkov instability simulation, showing how to accelerate the in-transit co-processing of simulations. The results show that the proposed method can expeditiously identify regions of interest, even when using multiple metrics. Our approach achieved a speedup of 1.29× in a lossless scenario, and data decompression was sped up by 2× compared to using a single compression method uniformly.
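
A minimal sketch of the general idea follows, with assumed, illustrative choices: mean gradient magnitude as the importance metric and zlib plus a float16 cast as the two compression tiers. These are not the paper's actual metrics or compressors.

```python
# Minimal sketch of importance-driven, per-region compression for in-transit
# co-processing. The importance metric (local gradient magnitude) and the
# two-tier compression policy are illustrative assumptions only.
import zlib
import numpy as np

field = np.random.default_rng(1).normal(size=(64, 64)).astype(np.float32)
blocks = [field[i:i + 16, j:j + 16]
          for i in range(0, 64, 16) for j in range(0, 64, 16)]

def importance(block):
    gy, gx = np.gradient(block)
    return float(np.mean(np.hypot(gx, gy)))   # mean gradient magnitude

threshold = np.median([importance(b) for b in blocks])
compressed = []
for b in blocks:
    if importance(b) >= threshold:
        # Region of interest: keep full precision, fast lossless compression.
        payload = zlib.compress(b.tobytes(), 1)
    else:
        # Low-importance region: cheap lossy step (cast to float16) first.
        payload = zlib.compress(b.astype(np.float16).tobytes(), 9)
    compressed.append(payload)

print("total bytes:", sum(len(p) for p in compressed))
```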


Author(s):  
D. Groen ◽  
H. Arabnejad ◽  
V. Jancauskas ◽  
W. N. Edeling ◽  
F. Jansson ◽  
...  

We present the VECMA toolkit (VECMAtk), a flexible software environment for single and multiscale simulations that introduces directly applicable and reusable procedures for verification and validation (V&V), sensitivity analysis (SA) and uncertainty quantification (UQ). It enables users to verify key aspects of their applications, systematically compare and validate the simulation outputs against observational or benchmark data, and run simulations conveniently on any platform from the desktop to current multi-petascale computers. In this sequel to our paper on VECMAtk, which we presented last year [1], we focus on a range of functional and performance improvements, cover newly introduced components, and present application examples from seven different domains, including conflict modelling and the environmental sciences. We also present several implemented patterns for UQ/SA and V&V, and guide the reader in detail through one example concerning COVID-19 modelling. This article is part of the theme issue 'Reliability and reproducibility in computational science: implementing verification, validation and uncertainty quantification in silico'.
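
VECMAtk's own API is not shown here; the sketch below only illustrates the generic sample/run/collate UQ pattern that such toolkits automate, using plain numpy and a hypothetical two-parameter model.

```python
# Minimal sketch of the Monte Carlo UQ pattern that toolkits such as
# VECMAtk automate. This is generic numpy code, NOT the VECMAtk API,
# and the model below is a hypothetical stand-in for a real simulation.
import numpy as np

def model(beta, gamma):
    # Hypothetical cheap response to two uncertain parameters.
    return beta / (beta + gamma)

rng = np.random.default_rng(42)
n_samples = 1000

# 1. Sample the uncertain inputs from assumed prior distributions.
beta = rng.uniform(0.2, 0.5, n_samples)
gamma = rng.uniform(0.05, 0.2, n_samples)

# 2. Run the model for each sample (a toolkit would dispatch these runs
#    to anything from a laptop to a multi-petascale machine).
outputs = model(beta, gamma)

# 3. Collate statistics: mean, spread, and a 95% interval.
print("mean:", outputs.mean())
print("std: ", outputs.std())
print("95% interval:", np.percentile(outputs, [2.5, 97.5]))
```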

