Improving computational efficiency of GLUE method for hydrological model uncertainty and parameter estimation using CPU-GPU hybrid high performance computer cluster

2021 ◽  
Author(s):  
Depeng Zuo ◽  
Guangyuan Kan ◽  
Hongquan Sun ◽  
Hongbin Zhang ◽  
Ke Liang

Abstract. The Generalized Likelihood Uncertainty Estimation (GLUE) method has thrived for decades, and a huge number of applications in the field of hydrological modelling have proved its effectiveness in uncertainty and parameter estimation. However, for many years the poor computational efficiency of GLUE has hampered its further application. A feasible way to solve this problem is to integrate modern CPU-GPU hybrid high performance computer cluster technology to accelerate the traditional GLUE method. In this study, we developed a CPU-GPU hybrid computer cluster-based, highly parallel, large-scale GLUE method to improve its computational efficiency. Intel Xeon multi-core CPUs and NVIDIA Tesla many-core GPUs were adopted in this study. The source code was developed using MPICH2, C++ with OpenMP 2.0, and CUDA 6.5. The parallel GLUE method was tested with a widely used hydrological model (the Xinanjiang model) to investigate its performance and scalability. Comparison results indicated that the parallel GLUE method outperformed the traditional serial method and has good application prospects on supercomputer clusters such as ORNL's Summit and LLNL's Sierra in the TOP500 list.
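
As a rough illustration of how such a loop parallelises, the sketch below distributes GLUE's independent Monte Carlo model runs across CPU threads with OpenMP. The model call, parameter count, prior ranges, and behavioral threshold are hypothetical stand-ins, and the paper's actual code additionally distributes work across cluster nodes with MPI and offloads model runs to the GPUs with CUDA.

```cpp
#include <omp.h>
#include <random>
#include <vector>

struct ParamSet {
    std::vector<double> values;
    double likelihood;
};

// Dummy stand-in for running the Xinanjiang model and scoring it with a
// likelihood measure; the paper runs the real model (with MPI + CUDA as well).
static double runModelAndScore(const std::vector<double>& params) {
    double s = 0.0;
    for (double v : params) s += v;
    return s / params.size();
}

int main() {
    const int nSamples = 100000;      // Monte Carlo sample size (illustrative)
    const int nParams  = 15;          // number of model parameters (illustrative)
    const double threshold = 0.7;     // behavioral likelihood threshold (assumed)

    std::vector<ParamSet> behavioral;

    #pragma omp parallel
    {
        std::mt19937 rng(12345u + omp_get_thread_num());
        std::uniform_real_distribution<double> prior(0.0, 1.0);
        std::vector<ParamSet> local;

        #pragma omp for schedule(dynamic)
        for (int i = 0; i < nSamples; ++i) {
            ParamSet p;
            p.values.resize(nParams);
            for (double& v : p.values) v = prior(rng);          // sample from the prior range
            p.likelihood = runModelAndScore(p.values);
            if (p.likelihood >= threshold) local.push_back(p);  // keep behavioral sets only
        }

        #pragma omp critical
        behavioral.insert(behavioral.end(), local.begin(), local.end());
    }

    return behavioral.empty() ? 1 : 0;
}
```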

2020 ◽  
Author(s):  
Somayeh Shadkam ◽  
Mehedi Hasan ◽  
Christoph Niemann ◽  
Andreas Guenter ◽  
Petra Döll

In this research we evaluated the WaterGAP Global Hydrological Model (WGHM) parameter uncertainties and predictive intervals for multiple variable types, including streamflow, total water storage anomaly (TWSA) and snow cover, based on the Generalized Likelihood Uncertainty Estimation (GLUE) method, for a large river basin in North America, the Mississippi basin. The GLUE approach is built on the Monte Carlo concept, in which simulations are performed for all parameter sets. The parameter sets are sampled from a prior range of the parameters using Latin Hypercube Sampling. The Nash-Sutcliffe efficiency was used as the likelihood measure for all variables. The behavioral model sets were selected as those that yield likelihood measures above pre-specified thresholds for all three variables or for subsets of them. These behavioral parameter sets were used to analyze the uncertainties of the different parameters, trade-offs among the variables, and the influence of each individual observational dataset on constraining the other variables.
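
For reference, the Nash-Sutcliffe efficiency used as the likelihood measure for each variable v (streamflow, TWSA, or snow cover) can be written as below; the behavioral threshold values themselves are not stated in this abstract, so the symbol for them here is only a placeholder.

```latex
\mathrm{NSE}_v \;=\; 1 \;-\;
\frac{\sum_{t=1}^{T}\bigl(y_{v,t}^{\mathrm{obs}} - y_{v,t}^{\mathrm{sim}}\bigr)^{2}}
     {\sum_{t=1}^{T}\bigl(y_{v,t}^{\mathrm{obs}} - \bar{y}_{v}^{\mathrm{obs}}\bigr)^{2}},
\qquad
\text{a parameter set is behavioral if } \mathrm{NSE}_v \ge \tau_v
\text{ for all (or a chosen subset of) variables } v.
```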


2019 ◽  
Vol 7 (1) ◽  
pp. 55-70
Author(s):  
Moh. Zikky ◽  
M. Jainal Arifin ◽  
Kholid Fathoni ◽  
Agus Zainal Arifin

A High-Performance Computer (HPC) is a computer system built to handle heavy computational loads. HPC provides high-performance technology and shortens computing times. This technology is often used in large-scale industries and in activities that require high-level computing, such as rendering virtual reality. In this research, we provide a virtual reality Tawaf with 1000 pilgrims and realistic surroundings of Masjidil-Haram as an interactive and immersive simulation, imitating them with 3D models. Thus, the main purpose of this study is to measure and understand the processing time of this virtual reality implementation of the tawaf activity on various platforms, such as a computer and an Android smartphone. The results showed that the outer rotation path around the Kaabah mostly consumes the least time, even though an agent must travel a longer distance than on the closer path. This happens because agents in the area closer to the Kaabah face crowded conditions; in this case, obstacles have more impact than distance.
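
As a toy illustration of that obstacle-versus-distance trade-off (not taken from the paper), the sketch below estimates lap time as a function of path radius when crowd density falls off away from the Kaabah; every constant in it is made up.

```cpp
#include <cmath>
#include <cstdio>

int main() {
    const double pi = 3.14159265358979;
    const double baseSpeed = 1.2;                       // unobstructed walking speed, m/s (assumed)
    for (double radius = 10.0; radius <= 40.0; radius += 10.0) {
        double length  = 2.0 * pi * radius;             // lap length around the Kaabah
        double density = 100.0 / (radius * radius);     // toy model: crowding drops with radius
        double speed   = baseSpeed * (1.0 - std::fmin(0.9, density));  // congestion slows agents
        std::printf("radius %.0f m: lap %.0f m, est. time %.0f s\n",
                    radius, length, length / speed);    // inner lap is shortest but slowest
    }
    return 0;
}
```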


2016 ◽  
Vol 685 ◽  
pp. 943-947 ◽  
Author(s):  
Roman Mesheryakov ◽  
Alexander Moiseev ◽  
Anton Demin ◽  
Vadim Dorofeev ◽  
Vasily Sorokin

The paper is devoted to the simulation of queueing networks on high performance computer clusters. The objective is to develop a mathematical model of a queueing network and a simulation approach to modelling general network functionality, as well as to provide a software implementation on a high-performance computer cluster. The simulation is based on a discrete-event approach, object-oriented programming, and MPI technology. The queueing network simulation system was developed as an application that allows a user to simulate networks of fairly arbitrary configuration. The experiments on a high performance computer cluster demonstrate the high efficiency of the parallel computation.
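
A minimal sketch of the discrete-event core that such a simulator is built around is shown below; the event types, service logic, and the MPI-based distribution across cluster nodes described in the paper are simplified or omitted, and all names are illustrative.

```cpp
#include <cstdio>
#include <functional>
#include <queue>
#include <vector>

struct Event {
    double time;                                   // simulated time of the event
    int node;                                      // queueing node at which it occurs
    enum { ARRIVAL, DEPARTURE } type;
    bool operator>(const Event& o) const { return time > o.time; }
};

int main() {
    // Future event list ordered by event time (earliest first).
    std::priority_queue<Event, std::vector<Event>, std::greater<Event>> fel;
    fel.push({0.0, 0, Event::ARRIVAL});

    double clock = 0.0;
    int processed = 0;
    while (!fel.empty() && processed < 10) {
        Event e = fel.top(); fel.pop();
        clock = e.time;                            // advance the simulation clock
        if (e.type == Event::ARRIVAL) {
            // Toy service discipline: serve immediately, depart after 1.0 time units,
            // and schedule the next external arrival with a fixed 0.8 inter-arrival time.
            fel.push({clock + 1.0, e.node, Event::DEPARTURE});
            fel.push({clock + 0.8, e.node, Event::ARRIVAL});
        }
        std::printf("t=%.2f node=%d %s\n", clock, e.node,
                    e.type == Event::ARRIVAL ? "arrival" : "departure");
        ++processed;
    }
    return 0;
}
```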


2018 ◽  
Vol 16 ◽  
pp. 02002
Author(s):  
Peter Weisenpacher ◽  
Jan Glasa ◽  
Lukas Valasek

The jet fan ventilation strategy for fires in bi-directional road tunnels focuses on maintaining smoke stratification. Several factors influence stratification under specific conditions. In this paper, smoke movement during a 5 MW fire in a 600 m long road tunnel is studied by computer simulation, and the influence of slope and external temperature on smoke stratification is analysed. Calculations were performed on a high performance computer cluster using the parallel version of Fire Dynamics Simulator. Smoke stratification upstream of the fire is maintained in every simulation scenario except the declivous (downhill) tunnel, in which buoyancy intensifies backlayering. The behaviour of the smoke downstream of the fire is more complex. In the horizontal tunnel, stratification is not maintained in the vicinity of the fire and the region with untenable conditions expands downstream. In the tunnel with a slope of -2° this expansion is accelerated, while in the tunnel with a slope of 2° untenable conditions spread in the opposite direction. The influence of an exterior temperature higher than the temperature inside the tunnel is relatively weak in horizontal tunnels; however, it becomes very important in sloping tunnels, especially downstream of the fire.


2014 ◽  
Vol 22 (2) ◽  
pp. 59-74 ◽  
Author(s):  
Alex D. Breslow ◽  
Ananta Tiwari ◽  
Martin Schulz ◽  
Laura Carrington ◽  
Lingjia Tang ◽  
...  

Co-location, where multiple jobs share compute nodes in large-scale HPC systems, has been shown to increase aggregate throughput and energy efficiency by 10–20%. However, system operators disallow co-location due to fair-pricing concerns, i.e., the lack of a pricing mechanism that accounts for performance interference from co-running jobs. In the current pricing model, application execution time determines the price, which results in unfair prices paid by the minority of users whose jobs suffer from co-location. This paper presents POPPA, a runtime system that enables fair pricing by delivering precise online interference detection, thereby facilitating the adoption of co-location on supercomputers. POPPA leverages a novel shutter mechanism, a cyclic, fine-grained interference sampling mechanism that accurately deduces the interference between co-runners, to provide unbiased pricing of jobs that share nodes. POPPA is able to quantify inter-application interference within 4% mean absolute error on a variety of co-located benchmark and real scientific workloads.
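
A conceptual sketch of such a shutter-style sampling loop is given below. It is not POPPA's implementation: the SIGSTOP/SIGCONT pausing of the co-runner and the local probe workload are stand-ins used only to show how progress rates in alternating "co-run" and "solo" windows could be compared to estimate interference.

```cpp
#include <chrono>
#include <cstdio>
#include <cstdlib>
#include <signal.h>
#include <sys/types.h>

// Probe workload standing in for the measured application's progress rate.
static double probe(double seconds) {
    auto end = std::chrono::steady_clock::now() + std::chrono::duration<double>(seconds);
    long iterations = 0;
    volatile double x = 1.0;
    while (std::chrono::steady_clock::now() < end) { x = x * 1.0000001 + 1.0; ++iterations; }
    return iterations / seconds;                       // iterations per second
}

int main(int argc, char** argv) {
    if (argc < 2) { std::fprintf(stderr, "usage: %s <co-runner pid>\n", argv[0]); return 1; }
    pid_t corunner = static_cast<pid_t>(std::atol(argv[1]));
    const double window = 0.05;                        // 50 ms sampling windows (assumed)

    for (int cycle = 0; cycle < 10; ++cycle) {
        double coRunRate = probe(window);              // shutter open: co-runner active
        kill(corunner, SIGSTOP);                       // shutter closed: pause co-runner
        double soloRate = probe(window);
        kill(corunner, SIGCONT);                       // resume co-runner
        std::printf("cycle %d: estimated slowdown %.2f%%\n",
                    cycle, 100.0 * (soloRate - coRunRate) / soloRate);
    }
    return 0;
}
```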


2011 ◽  
Vol 291-294 ◽  
pp. 3044-3049
Author(s):  
Hong Bo Liang ◽  
Yi Ping Yao ◽  
Xiao Dong Mu

High performance simulation has great prospects for application in the field of Materials Science and Engineering. In high performance simulation, high performance computers are used to improve simulation performance. As one of the simulation standards, HLA is widely applied in computer simulation. In the HLA simulation domain, many RTIs are designed to support simulation in LAN/WAN environments. Because they rely on general TCP/UDP communication mechanisms, these RTIs cannot achieve high simulation performance on high performance computers. To improve simulation performance, a customized RTI for a hybrid environment of high performance computers and PCs is designed. By using a partially hierarchical design on a functionally distributed architecture, large-scale simulation can be supported. An adaptive communication mechanism is proposed, which automatically adapts the communication between different RTI components to shared memory, InfiniBand, or Ethernet, thus greatly improving communication performance. In addition, this paper explains the related design of this customized RTI.
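
The sketch below illustrates, with hypothetical names rather than the paper's RTI code, how such an adaptive mechanism might choose a transport per peer: shared memory on the same node, InfiniBand within the same fabric, and Ethernet otherwise.

```cpp
#include <cstdio>
#include <initializer_list>
#include <string>

enum class Transport { SharedMemory, InfiniBand, Ethernet };

struct Endpoint {
    std::string host;      // host name of the RTI component
    std::string cluster;   // cluster (fabric) the host belongs to
};

// Pick the fastest transport available between two RTI components.
Transport selectTransport(const Endpoint& local, const Endpoint& peer) {
    if (peer.host == local.host)       return Transport::SharedMemory;  // same node
    if (peer.cluster == local.cluster) return Transport::InfiniBand;    // same HPC fabric
    return Transport::Ethernet;                                         // e.g., external PC
}

int main() {
    Endpoint local{"node01", "hpcA"};
    Endpoint sameNode{"node01", "hpcA"}, sameFabric{"node07", "hpcA"}, remotePc{"pc-lab", "lan"};
    for (const auto& peer : {sameNode, sameFabric, remotePc}) {
        Transport t = selectTransport(local, peer);
        std::printf("%s -> %s\n", peer.host.c_str(),
                    t == Transport::SharedMemory ? "shared memory"
                    : t == Transport::InfiniBand ? "InfiniBand" : "Ethernet");
    }
    return 0;
}
```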


2021 ◽  
Vol 2090 (1) ◽  
pp. 012100
Author(s):  
P Weisenpacher ◽  
J Glasa ◽  
L Valasek ◽  
T Kubisova

Abstract This paper investigates smoke movement and its stratification in a lay-by of a 900 m long road tunnel by computer simulation using Fire Dynamics Simulator. The lay-by is located in the vicinity of the fire, upstream of it. The influence of the lay-by geometry on smoke spread is evaluated by comparison with a fictional tunnel without a lay-by. Several fire scenarios with various tunnel slopes and heat release rates in the tunnels with and without the lay-by are considered. The most significant breaking of smoke stratification and decrease of visibility in the area of the lay-by are observed in the case of the zero-slope tunnel for more intense fires with a significant length of backlayering. Several other features of smoke spread in the lay-by are analysed as well. The parallel calculations were performed on a high-performance computer cluster.

