Simframe: A Python Framework for Scientific Simulations

2022 ◽  
Vol 7 (69) ◽  
pp. 3882
Author(s):  
Sebastian Stammler ◽  
Tilman Birnstiel


Algorithms ◽
2021 ◽  
Vol 14 (5) ◽  
pp. 154
Author(s):  
Marcus Walldén ◽  
Masao Okita ◽  
Fumihiko Ino ◽  
Dimitris Drikakis ◽  
Ioannis Kokkinakis

Increasing processing capabilities and input/output constraints of supercomputers have increased the use of co-processing approaches, i.e., visualizing and analyzing data sets of simulations on the fly. We present a method that evaluates the importance of different regions of simulation data and a data-driven approach that uses the proposed method to accelerate in-transit co-processing of large-scale simulations. We use the importance metrics to simultaneously employ multiple compression methods on different data regions, adaptively compressing data on the fly and using load balancing to counteract memory imbalances. We demonstrate the method’s efficiency through a fluid mechanics application, a Richtmyer–Meshkov instability simulation, showing how to accelerate the in-transit co-processing of simulations. The results show that the proposed method can expeditiously identify regions of interest, even when using multiple metrics. Our approach achieved a speedup of 1.29× in a lossless scenario, and data decompression was sped up by 2× compared to using a single compression method uniformly.
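The core idea of the abstract — score each data region's importance, then pick a compression method per region — can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the paper does not specify its metrics or codecs, so this sketch uses a gradient-based importance score and `zlib` at two compression levels as stand-ins.

```python
# Hypothetical sketch of importance-driven per-region compression.
# The importance metric and the codec choice are assumptions, not the
# paper's actual method.
import zlib

import numpy as np


def importance(region: np.ndarray) -> float:
    # Assumed metric: mean gradient magnitude, so regions with sharp
    # features (e.g., an instability front) score higher.
    grads = np.gradient(region.astype(float))
    magnitude = np.sqrt(sum(g**2 for g in grads))
    return float(magnitude.mean())


def compress_region(region: np.ndarray, threshold: float = 1.0) -> bytes:
    # High-importance regions get fast, light compression (cheap to
    # decompress during co-processing); low-importance regions get
    # stronger, slower compression to save transfer bandwidth.
    level = 1 if importance(region) > threshold else 9
    return zlib.compress(region.tobytes(), level)
```

Both branches are lossless, matching the paper's lossless scenario; mixing in lossy codecs for low-importance regions would follow the same dispatch pattern.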


Author(s):  
Adrian Jackson ◽  
Michèle Weiland

This chapter describes experiences using Cloud infrastructures for scientific computing, for both serial and parallel workloads. Amazon’s High Performance Computing (HPC) Cloud computing resources were compared to traditional HPC resources to quantify performance as well as to assess the complexity and cost of using the Cloud. Furthermore, a shared Cloud infrastructure is compared to standard desktop resources for scientific simulations. Whilst this is only a small-scale evaluation of these Cloud offerings, it does allow some conclusions to be drawn, particularly that the Cloud cannot currently match the parallel performance of dedicated HPC machines for large-scale parallel programs, but can match the serial performance of standard computing resources for serial and small-scale parallel programs. Also, the shared Cloud infrastructure cannot match dedicated computing resources on low-level benchmarks, although for an actual scientific code, performance is comparable.


2020 ◽  
Vol 12 (12) ◽  
pp. 5059
Author(s):  
Xinzheng Lu ◽  
Donglian Gu ◽  
Zhen Xu ◽  
Chen Xiong ◽  
Yuan Tian

To improve the ability to prepare for and adapt to potential hazards in a city, efforts are being invested in evaluating the performance of the built environment under multiple hazard conditions. An integrated physics-based multi-hazard simulation framework covering both individual buildings and urban areas can help improve analysis efficiency and is significant for urban planning and emergency management activities. Therefore, a city information model-powered multi-hazard simulation framework is proposed considering three types of hazards (i.e., earthquake, fire, and wind hazards). The proposed framework consists of three modules: (1) data transformation, (2) physics-based hazard analysis, and (3) high-fidelity visualization. Three advantages are highlighted: (1) the database with multi-scale models is capable of meeting the various demands of stakeholders, (2) hazard analyses are all based on physics-based models, leading to rational and scientific simulations, and (3) high-fidelity visualization can help non-professional users better understand the disaster scenario. A case study of the Tsinghua University campus is performed. The results indicate the proposed framework is a practical method for multi-hazard simulations of both individual buildings and urban areas and has great potential in helping stakeholders to assess and recognize the risks faced by important buildings or the whole city.
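The three-module pipeline described in the abstract (data transformation, physics-based hazard analysis, high-fidelity visualization) can be sketched as a simple dispatch structure. All names and the placeholder "analysis" below are hypothetical; the actual framework's data model and physics engines are not given in the abstract.

```python
# Minimal sketch of the three-module multi-hazard pipeline.
# CityModel, transform_data, analyze_hazard, and visualize are all
# illustrative names, not the framework's real API.
from dataclasses import dataclass, field

HAZARDS = ("earthquake", "fire", "wind")  # the three hazard types covered


@dataclass
class CityModel:
    # Module 1 output: analysis-ready representation of the city
    buildings: list = field(default_factory=list)


def transform_data(raw: dict) -> CityModel:
    # Module 1: convert city-information-model data into analysis form
    return CityModel(buildings=list(raw.get("buildings", [])))


def analyze_hazard(model: CityModel, hazard: str) -> dict:
    # Module 2: physics-based analysis per hazard type (placeholder result;
    # a real implementation would run structural/fire/wind models here)
    if hazard not in HAZARDS:
        raise ValueError(f"unsupported hazard: {hazard}")
    return {b: f"{hazard}-response" for b in model.buildings}


def visualize(results: dict) -> list:
    # Module 3: emit scene descriptions for high-fidelity rendering
    return [f"render {b}: {r}" for b, r in results.items()]
```

Keeping the modules as separate functions mirrors the framework's stated advantage: the same multi-scale city model feeds every hazard analysis, and visualization is decoupled from the physics.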


1995 ◽  
Vol 4 (2) ◽  
pp. 121-129 ◽  
Author(s):  
Trina M. Roy ◽  
Carolina Cruz-Neira ◽  
Thomas A. DeFanti

Developing graphic interfaces to steer high-performance scientific computations has been a research subject in recent years. Now, computational scientists are starting to use virtual reality environments to explore the results of their simulations. In most cases, the virtual reality environment acts on precomputed data; however, the use of virtual reality environments for the dynamic steering of distributed scientific simulations is a growing area of research. We present in this paper the initial design and implementation of a distributed system that uses our virtual reality environment, the CAVE, to control and steer scientific simulations being computed on remote supercomputers. We discuss some of the more relevant features of virtual reality interfaces, emphasizing those of the CAVE, describe the distributed system developed, and present a scientific application, the Cosmic Worm, that makes extensive use of the distributed system.
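The steering pattern the abstract describes — an interactive front end sending parameter updates to a remote simulation while it runs — reduces to a message loop on the simulation side. The sketch below is an assumption-level illustration using a thread and a queue in place of the CAVE/supercomputer network link; none of these names come from the paper.

```python
# Hypothetical sketch of computational steering: the simulation polls for
# parameter updates from an interactive front end between time steps.
import queue
import threading


def simulation(steering_q: queue.Queue, steps: int, trace: list) -> None:
    dt = 0.1  # default time step
    for _ in range(steps):
        try:
            # Apply the latest steering command, if one arrived
            dt = steering_q.get_nowait()
        except queue.Empty:
            pass  # no new command; keep the current parameter
        trace.append(dt)  # stand-in for advancing the simulation by dt


steering_q = queue.Queue()
steering_q.put(0.05)  # the user adjusts a parameter from the interface
trace = []
worker = threading.Thread(target=simulation, args=(steering_q, 3, trace))
worker.start()
worker.join()
```

The non-blocking poll is the key design point: the simulation never stalls waiting for the interface, which is what lets steering coexist with computation on a remote machine.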

