Processing conversion and parallel control platform: a parallel approach to serial hydrodynamic simulators for complex hydrodynamic simulations

2016 ◽  
Vol 18 (5) ◽  
pp. 851-866 ◽  
Author(s):  
Yizi Shang ◽  
Yanxiang Guo ◽  
Ling Shang ◽  
Yuntao Ye ◽  
Ronghua Liu ◽  
...  

In this paper, a processing conversion and parallel control platform (PCsP) is proposed for transitioning serial hydrodynamic simulators to a cluster-computing system. We have previously undertaken efforts to promote the research and development of this type of platform and to demonstrate and commercialize it. The PCsP provides distributed and parallel patterns, a centralized architecture, and user support. To validate the methodology and highlight its simplicity, we applied the technology to various applications based on multi-grid algorithms. The methodology was shown to be reliable and feasible across computational domains, partitioning strategies, and multi-grid codes. Furthermore, its effectiveness was demonstrated using a complex engineering case as well as code based on slightly less complex mathematical models. The eventual transition to a cluster-computing system will require further investigation of the impact of different model combinations on calculation accuracy, the efficiency of operating the models, and the functional development of the PCsP.
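To make the partitioning idea concrete, the following is a minimal sketch, not the authors' PCsP interface, of how a serial one-dimensional update kernel can be moved to a cluster by splitting the computational domain across MPI ranks and exchanging halo cells each step; the grid size, update stencil, and iteration count are illustrative assumptions.

```python
# Hypothetical illustration of domain partitioning for a serial 1-D kernel;
# this is not the PCsP API described in the paper.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_global = 1200                       # illustrative number of cells
n_local = n_global // size            # even partitioning assumed
u = np.zeros(n_local + 2)             # local cells plus two halo cells
u[1:-1] = rank                        # dummy initial state

left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

for _ in range(100):
    # exchange halo cells with neighbouring subdomains
    comm.Sendrecv(u[1:2], dest=left, recvbuf=u[-1:], source=right)
    comm.Sendrecv(u[-2:-1], dest=right, recvbuf=u[0:1], source=left)
    # the serial update kernel is applied unchanged to the local subdomain
    u[1:-1] = 0.25 * (u[:-2] + 2.0 * u[1:-1] + u[2:])
```

Run with, e.g., `mpiexec -n 4 python partition_sketch.py`; the point is that the interior update remains the untouched serial code, and only the partitioning and halo exchange are added around it.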

2018 ◽  
Vol 45 ◽  
pp. 00066 ◽  
Author(s):  
Kamil Pochwat

Designing retention facilities is a complex engineering process that requires the collection of detailed hydrological data for the catchment and the hydraulic sewerage system. The acquired data are necessary to prepare a model of the retention tank in appropriate hydrodynamic modelling software. The article presents the results of a sensitivity analysis of a sewerage model of a rainwater retention tank implemented in such software. The results made it possible to determine the impact of the individual hydraulic characteristics of the catchment and the sewerage system on the required retention capacity of the tank. The analysis is based on artificial neural networks, and the required data are acquired from hydrodynamic simulations in SWMM 5.1.
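The general workflow, a neural-network surrogate trained on simulation outputs and then probed one input at a time, can be sketched as follows; the feature names, synthetic data, and surrogate choice are assumptions for illustration and are not the author's SWMM model.

```python
# Illustrative sensitivity-analysis sketch with a neural-network surrogate;
# features and the response function are synthetic stand-ins for SWMM output.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# hypothetical inputs: catchment area [ha], imperviousness [-], slope [-], pipe diameter [m]
X = rng.uniform([1.0, 0.2, 0.001, 0.3], [50.0, 0.9, 0.05, 1.2], size=(500, 4))
y = 120.0 * X[:, 0] * X[:, 1] + rng.normal(0.0, 5.0, 500)   # stand-in for required volume [m^3]

surrogate = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000).fit(X, y)

# one-at-a-time sensitivity: perturb each input by +10% around the mean case
base = X.mean(axis=0)
for i, name in enumerate(["area", "imperviousness", "slope", "diameter"]):
    bumped = base.copy()
    bumped[i] *= 1.10
    delta = surrogate.predict([bumped])[0] - surrogate.predict([base])[0]
    print(f"{name}: change in required retention volume = {delta:.1f} m^3")
```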


2021 ◽  
Vol 54 (7) ◽  
pp. 1-35
Author(s):  
Salonik Resch ◽  
Ulya R. Karpuzcu

Benchmarking is how the performance of a computing system is determined. Surprisingly, even for classical computers this is not a straightforward process. One must choose the appropriate benchmark and metrics to extract meaningful results. Different benchmarks test the system in different ways, and each individual metric may or may not be of interest. Choosing the appropriate approach is tricky. The situation is even more open-ended for quantum computers, where there is a wider range of hardware, fewer established guidelines, and additional complicating factors. Notably, quantum noise significantly impacts performance and is difficult to model accurately. Here, we discuss benchmarking of quantum computers from a computer architecture perspective and provide numerical simulations that highlight the challenges and suggest caution.
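As a toy illustration of why noise dominates such comparisons, and not a result from the paper, a uniform per-gate depolarizing error rate makes the expected success probability of a circuit fall roughly as (1 - p) raised to the gate count:

```python
# Back-of-the-envelope noise model: the error rate is an assumed value,
# not a measured device parameter.
def expected_success(n_gates: int, error_rate: float) -> float:
    return (1.0 - error_rate) ** n_gates

for n_gates in (10, 100, 1000):
    print(n_gates, round(expected_success(n_gates, 0.01), 4))
```

Even this crude model shows how strongly the chosen benchmark circuit depth shapes the score, before any realistic noise modelling is attempted.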


2020 ◽  
Vol 500 (3) ◽  
pp. 3594-3612
Author(s):  
P F Rohde ◽  
S Walch ◽  
S D Clarke ◽  
D Seifried ◽  
A P Whitworth ◽  
...  

ABSTRACT The accretion of material on to young protostars is accompanied by the launching of outflows. Observations show that accretion, and therefore also outflows, are episodic. However, the effects of episodic outflow feedback on the core scale are not well understood. We have performed 88 smoothed particle hydrodynamic simulations of turbulent dense $1\,\mathrm{M}_{\odot}$ cores to study the influence of episodic outflow feedback on the stellar multiplicity and the star formation efficiency (SFE). Protostars are represented by sink particles, which use a subgrid model to capture stellar evolution, inner-disc evolution, episodic accretion, and the launching of outflows. By comparing simulations with and without episodic outflow feedback, we show that simulations with outflow feedback reproduce the binary statistics of young stellar populations, including the relative proportions of singles, binaries, triples, etc., and the high incidence of twin binaries with q ≥ 0.95; simulations without outflow feedback do not. Entrainment factors (the ratio between total outflowing mass and initially ejected mass) are typically ∼7 ± 2, but can be much higher if the total mass of stars formed in a core is low and/or outflow episodes are infrequent. By decreasing both the mean mass of the stars formed and the number of stars formed, outflow feedback reduces the SFE by about a factor of 2 (as compared with simulations that do not include outflow feedback).
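For clarity, the two quoted quantities are simple ratios; the masses below are made-up numbers, not values taken from the simulations.

```python
# Illustrative definitions of entrainment factor and star formation efficiency (SFE).
core_mass = 1.0            # M_sun, initial core mass
stellar_mass = 0.25        # M_sun, total mass in sink particles (hypothetical)
ejected_mass = 0.02        # M_sun, mass launched directly in outflow episodes (hypothetical)
outflowing_mass = 0.15     # M_sun, total outflowing mass including entrained gas (hypothetical)

entrainment_factor = outflowing_mass / ejected_mass   # abstract quotes ~7 +/- 2
sfe = stellar_mass / core_mass                        # star formation efficiency
print(entrainment_factor, sfe)
```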


2020 ◽  
Vol 498 (3) ◽  
pp. 3870-3887
Author(s):  
G Musoke ◽  
A J Young ◽  
M Birkinshaw

ABSTRACT Numerical simulations play an essential role in helping us to understand the physical processes behind relativistic jets in active galactic nuclei. The large number of hydrodynamic codes available today enables a variety of different numerical algorithms to be utilized when conducting the simulations. Since many of the simulations presented in the literature use different combinations of algorithms, it is important to quantify the differences in jet evolution that can arise due to the precise numerical schemes used. We conduct a series of simulations using the FLASH (magneto-)hydrodynamics code in which we vary the Riemann solver and spatial reconstruction schemes to determine their impact on the evolution and dynamics of the jets. For highly refined grids the variation in the simulation results introduced by the different combinations of spatial reconstruction scheme and Riemann solver is typically small. A high level of convergence is found for simulations using third-order spatial reconstruction with the Harten–Lax–van Leer with Contact (HLLC) and Hybrid Riemann solvers.
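One simple way to quantify such convergence, sketched below with synthetic arrays rather than the authors' actual pipeline or data, is an L1 norm of the relative difference between snapshots produced by two solver combinations:

```python
# Illustrative convergence metric between two runs that differ only in
# Riemann solver / reconstruction scheme; the density fields are stand-ins.
import numpy as np

def l1_difference(rho_a: np.ndarray, rho_b: np.ndarray) -> float:
    """Mean absolute relative difference between two density snapshots."""
    return float(np.mean(np.abs(rho_a - rho_b) / np.abs(rho_a)))

rho_hllc = np.random.default_rng(0).uniform(0.9, 1.1, (128, 128))
rho_hybrid = rho_hllc + np.random.default_rng(1).normal(0.0, 1e-3, (128, 128))
print(f"L1 difference: {l1_difference(rho_hllc, rho_hybrid):.2e}")
```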


1998 ◽  
Vol 1647 (1) ◽  
pp. 122-129 ◽  
Author(s):  
Mark B. Bateman ◽  
Ian C. Howard ◽  
Andrew R. Johnson ◽  
John M. Walton

The optimization of roadway safety design by experimental means is expensive and time consuming. Computer simulation of such complex engineering systems improves understanding of how and why the system behaves as it does, aids decision making, and reduces the development costs and time involved. The simulation presented is based on a computer model developed from a study of the results of full-scale experiments of impact on the Brifen wire-rope safety fence (WRSF). The code comprises a dynamic vehicle model and a quasi-static fence model interacting in time through the important collapse mechanisms of the system. The principles governing them are described, and their inclusion is validated by demonstrating good correlation between the predictions of the simulation and the experimental test data. Sensitivity studies show that the performance of a WRSF is particularly sensitive to the impact conditions of vehicle speed and angle and to the design parameters of fence height, post spacing, post strength, and rope pre-tension. The sensitivity work is extended to show that for fences installed with a low rope pre-tension, performance may not be significantly impaired if rope pre-tension is not maintained. However, significant gains in fence performance may be made should a fence be installed and maintained with a high rope pre-tension. The use of the simulation in assessing the cost-effectiveness of alternative designs in achieving a target performance is also demonstrated.


2013 ◽  
Vol 53 (A) ◽  
pp. 829-831
Author(s):  
Memmo Federici ◽  
Bruno L. Martino

Simulation of the interactions between particles and matter in studies for developing X-ray detectors generally requires very long calculation times (up to several days or weeks). These times are often a serious limitation for the success of the simulations and for the accuracy of the simulated models. One of the tools used by the scientific community to perform these simulations is Geant4 (Geometry And Tracking) [2, 3]. Building on experience gained in the design of the AVES cluster computing system (Federici et al. [1]), the IAPS (Istituto di Astrofisica e Planetologia Spaziali, INAF) laboratories developed a cluster computer system dedicated to Geant4. The cluster is easy to use and easily expandable, and thanks to the design criteria adopted it achieves an excellent compromise between performance and cost. The management software developed for the cluster splits a single simulation instance across the available cores, allowing software written for serial computation to reach a computing speed similar to that obtainable from natively parallel software. The simulations carried out on the cluster showed an increase in computing speed by a factor of 20 to 60 compared with a single PC of medium quality.
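The splitting strategy the abstract describes, running many independent serial instances with different random seeds, can be sketched roughly as follows; the executable name, macro contents, and core count are assumptions, and this is not the IAPS management software itself.

```python
# Hypothetical sketch: split one large serial Geant4 run into independent
# chunks with distinct seeds, one per core, and launch them concurrently.
import subprocess
from concurrent.futures import ProcessPoolExecutor

TOTAL_EVENTS = 1_000_000
CORES = 16

def run_chunk(core_id: int) -> int:
    events = TOTAL_EVENTS // CORES
    macro = f"chunk_{core_id}.mac"
    with open(macro, "w") as f:
        f.write(f"/random/setSeeds {1000 + core_id} {2000 + core_id}\n")
        f.write(f"/run/beamOn {events}\n")
    # each chunk is an ordinary serial Geant4 process ("mySim" is a placeholder)
    return subprocess.call(["./mySim", macro])

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=CORES) as pool:
        exit_codes = list(pool.map(run_chunk, range(CORES)))
    print("all chunks finished:", exit_codes)
```

The per-chunk histograms or hit files would then be merged in a post-processing step, which is where most of the bookkeeping in a real setup lives.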


Author(s):  
Yao Wu ◽  
Long Zheng ◽  
Brian Heilig ◽  
Guang R Gao

As the attention given to big data grows, cluster computing systems for the distributed processing of large data sets have become a mainstream and critical requirement in high-performance distributed systems research. One of the most successful systems is Hadoop, which uses MapReduce as its programming/execution model and uses disks as intermediate storage to process huge volumes of data. Spark, as an in-memory computing engine, can solve iterative and interactive problems more efficiently. However, there is now a consensus that these are not the final solutions for big data, owing to their MapReduce-like programming model, synchronous execution model, the constraint of supporting only batch processing, and so on. A new solution, and in particular a fundamental evolution, is needed to bring big data processing into a new era. In this paper, we introduce a new cluster computing system called HAMR which supports both batch and streaming processing. To achieve better performance, HAMR integrates high-performance computing approaches, i.e. dataflow fundamentals, into a big data solution. More specifically, HAMR is designed entirely around in-memory computing to reduce unnecessary disk-access overhead; task scheduling and memory management are fine-grained to expose more parallelism; and asynchronous execution improves the efficiency of compute-resource usage while also improving workload balance across the whole cluster. The experimental results show that HAMR can outperform Hadoop MapReduce and Spark by up to 19x and 7x, respectively, in the same cluster environment. Furthermore, HAMR can scale to data sizes well beyond the capabilities of Spark.
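The contrast between a synchronous stage barrier and the asynchronous, fine-grained execution described above can be illustrated with a toy example; this is ordinary Python concurrency, not HAMR's API or execution engine.

```python
# Toy dataflow-style illustration: downstream work consumes map results as
# soon as each one is ready, instead of waiting for the whole map stage.
from concurrent.futures import ThreadPoolExecutor, as_completed

def map_task(chunk):             # fine-grained unit of work
    return sum(chunk)

def reduce_task(partials):
    return sum(partials)

chunks = [range(i * 1000, (i + 1) * 1000) for i in range(8)]

with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(map_task, c) for c in chunks]
    partials = []
    for fut in as_completed(futures):    # no barrier between map and reduce
        partials.append(fut.result())
    print("total:", reduce_task(partials))
```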


Author(s):  
Ashwini Patil ◽  
Ankit Shah ◽  
Sheetal Gaikwad ◽  
Akassh A. Mishra ◽  
Simranjit Singh Kohli ◽  
...  

2013 ◽  
Vol 22 (07) ◽  
pp. 1350057
Author(s):  
TIBOR SKALA ◽  
KAROLJ SKALA ◽  
ENIS AFGAN

The paper presents a new approach to analysing the time required for rendering as a function of the complexity of the 3D object. Ray tracing is used to derive a program complexity measure based on rendering time. A parametric analysis using a ray tracing program is achieved by correlating image parameters with rendering time. Electronic imaging and image processing (rendering) are considered in the context of 3D virtual photorealistic imaging design and electronic image creation and optimization. We attempt to treat complexity as a measure in the process of creating and rendering a complex image. The work is directed towards defining and optimizing the rendering time in the photorealistic 3D image creation process. The correlation between a large number of image components output by the ray tracing program is used to measure scene complexity with respect to render time. To discover which scene variable to use as the complexity measure, we cross-correlated the variables output by the ray tracing program, keeping track of the maximum values. With this method, the highest correlation was observed for the variable 'Bounding box succeeded tests', which was therefore adopted as the complexity measure for the image. In this way, original research was carried out in creating analytical (data-visual) methods for determining image rendering parameters. The results are based on experimental verification over a large number of standard tests on real sets of graphical content, and they establish the impact of graphical content complexity on rendering time in distributed cluster computing systems.
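The cross-correlation step can be sketched as follows; the statistics, their values, and the timing relationship are synthetic stand-ins, not the paper's measured data.

```python
# Illustrative selection of a complexity measure: correlate each ray-tracer
# statistic against render time and pick the strongest one.
import numpy as np

rng = np.random.default_rng(1)
n_scenes = 200
stats = {
    "bounding_box_tests_succeeded": rng.uniform(1e4, 1e7, n_scenes),
    "shadow_ray_tests": rng.uniform(1e3, 1e6, n_scenes),
    "reflected_rays": rng.uniform(1e2, 1e5, n_scenes),
}
render_time = 2e-6 * stats["bounding_box_tests_succeeded"] + rng.normal(0, 2, n_scenes)

for name, values in stats.items():
    r = np.corrcoef(values, render_time)[0, 1]
    print(f"{name}: r = {r:.3f}")

best = max(stats, key=lambda k: abs(np.corrcoef(stats[k], render_time)[0, 1]))
print("complexity measure candidate:", best)
```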

