Using high performance techniques to accelerate demand-driven hydraulic solvers

2012 ◽  
Vol 15 (1) ◽  
pp. 38-54 ◽  
Author(s):  
Michele Guidolin ◽  
Zoran Kapelan ◽  
Dragan Savić

Computer models of water distribution networks are commonly used to simulate large systems under complex dynamic scenarios. These models normally use so-called demand-driven solvers, which determine the nodal pressures and pipe flow rates that correspond to specified nodal demands. This paper investigates the use of data-parallel high performance computing (HPC) techniques to accelerate demand-driven hydraulic solvers. The sequential code of the solver implemented in the CWSNet library is analysed to understand which computational blocks contribute the most to the total computation time of a hydraulic simulation. The results obtained show that, contrary to popular belief, the code block with the highest impact on the simulation time is not the linear solver but the pipe head loss computation. Two data-parallel HPC techniques, single instruction multiple data (SIMD) operations and general purpose computation on graphics processing units (GPGPU), are used to accelerate the pipe head loss computation and linear algebra operations in new implementations of the hydraulic solver of the CWSNet library. The results obtained on different network models show that these techniques can significantly improve the performance of a demand-driven hydraulic solver.
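To illustrate why the head loss computation is such a good target for data-parallel acceleration, here is a minimal sketch that applies the standard Hazen-Williams formula (SI units) to whole arrays of pipes at once. The network values are hypothetical and this is not the CWSNet code, but the structure, one formula swept over contiguous arrays instead of a per-pipe loop, mirrors the SIMD approach the paper describes.

```python
import numpy as np

def headloss_hw(length, diameter, roughness, flow):
    """Hazen-Williams head loss (m) for all pipes at once, SI units.

    Vectorising over the whole pipe array is the same idea as the SIMD
    acceleration: one arithmetic recipe applied lane-by-lane to arrays,
    with no per-pipe branching.
    """
    return (10.667 * length * np.abs(flow) ** 1.852
            / (roughness ** 1.852 * diameter ** 4.871))

# Hypothetical 4-pipe network: lengths (m), diameters (m),
# Hazen-Williams C-factors, and flow rates (m^3/s).
L = np.array([100.0, 250.0, 80.0, 300.0])
D = np.array([0.30, 0.25, 0.20, 0.40])
C = np.array([130.0, 110.0, 120.0, 100.0])
Q = np.array([0.05, 0.02, 0.01, 0.08])

h = headloss_hw(L, D, C, Q)  # one head loss value per pipe
```

In a real solver this kernel sits inside the iteration loop, so even a modest per-call speedup compounds over the whole simulation.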

2010 ◽  
Vol 20 (04) ◽  
pp. 325-339 ◽  
Author(s):  
JYOTHISH SOMAN ◽  
KISHORE KOTHAPALLI ◽  
P J NARAYANAN

Graphics Processing Units (GPUs) are application-specific accelerators that offer a high performance-to-cost ratio and are widely available and used, which makes them a ubiquitous accelerator. The computing paradigm based on them is the general-purpose computing on the GPU (GPGPU) model. Owing to its graphics lineage, the GPU is better suited to data-parallel, data-regular algorithms; its hardware architecture is less suitable for data-parallel but data-irregular algorithms such as graph connected components and list ranking. In this paper, we present results that show how to use GPUs efficiently for graph algorithms that are known to have irregular data access patterns. We consider two fundamental graph problems: finding the connected components and finding a spanning tree. These two problems find applications in several graph-theoretical settings. We arrive at efficient GPU implementations for both problems, with algorithms that focus on minimising irregularity at both the algorithmic and implementation levels. Our implementation achieves a speedup of 11-16 times over the corresponding best sequential implementation.
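A common way to phrase connected components in a GPU-friendly, data-parallel style is iterative label propagation: every vertex starts with its own label, and each sweep pushes the smaller label across every edge until nothing changes. The sketch below is sequential Python, not the paper's CUDA code, but each sweep is independent per edge, which is exactly the property that maps onto GPU threads.

```python
import numpy as np

def connected_components(n, edges):
    """Connected components by min-label propagation.

    Each sweep visits every edge and gives both endpoints the smaller
    of their two labels; sweeps repeat until a fixed point.  On a GPU,
    one thread per edge performs the same update (with atomic min),
    which is why this formulation suits data-parallel hardware despite
    the irregular memory accesses.
    """
    label = np.arange(n)
    changed = True
    while changed:
        changed = False
        for u, v in edges:
            m = min(label[u], label[v])
            if label[u] != m or label[v] != m:
                label[u] = label[v] = m
                changed = True
    return label

labels = connected_components(5, [(0, 1), (1, 2), (3, 4)])
# vertices 0-2 form one component, 3-4 another
```

The number of sweeps is bounded by the graph diameter; GPU variants add pointer-jumping to shrink that bound.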


2011 ◽  
Vol 28 (1) ◽  
pp. 1-14 ◽  
Author(s):  
W. van Straten ◽  
M. Bailes

Abstract. dspsr is a high-performance, open-source, object-oriented, digital signal processing software library and application suite for use in radio pulsar astronomy. Written primarily in C++, the library implements an extensive range of modular algorithms that can optionally exploit both multi-core processors and general-purpose graphics processing units. After over a decade of research and development, dspsr is now stable and in widespread use in the community. This paper presents a detailed description of its functionality, justification of major design decisions, analysis of phase-coherent dispersion removal algorithms, and a demonstration of performance on some contemporary microprocessor architectures.
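The phase-coherent dispersion removal the abstract mentions amounts to deconvolving the interstellar medium's frequency-dependent delay: transform the baseband signal, multiply by the conjugate of the dispersion transfer function, and transform back. The sketch below shows that structure only; the dispersion constant and sign convention here are illustrative, and the exact form used by dspsr should be taken from the paper itself.

```python
import numpy as np

def chirp(nsamp, dm, f0, bw):
    """Dispersion transfer function H(f) sampled on the FFT grid.

    f0 is the band-centre frequency and bw the bandwidth, both in MHz;
    dm is the dispersion measure in pc cm^-3.  The quadratic-over-cubic
    phase follows the standard cold-plasma relation; the numeric
    constant is illustrative, not dspsr's exact convention.
    """
    f = np.fft.fftfreq(nsamp) * bw  # frequency offsets from f0, MHz
    phase = 2 * np.pi * 4.148808e9 * dm * f ** 2 / (f0 ** 2 * (f0 + f))
    return np.exp(1j * phase)

def coherent_dedisperse(x, dm, f0, bw):
    """Remove dispersion from complex baseband samples x.

    Multiplying the spectrum by conj(H) undoes the phase rotation the
    medium applied, coherently (within the sample, not just between
    channels as in incoherent dedispersion).
    """
    return np.fft.ifft(np.fft.fft(x) * np.conj(chirp(len(x), dm, f0, bw)))
```

Because |H(f)| = 1, dispersing with the kernel and then dedispersing is an exact round trip, which makes a convenient self-check.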


Entropy ◽  
2018 ◽  
Vol 20 (8) ◽  
pp. 576 ◽  
Author(s):  
Do Yoo ◽  
Dong Chang ◽  
Yang Song ◽  
Jung Lee

This study proposes a pressure driven entropy method (PDEM) that determines a priority order of pressure gauge locations, enabling the impact of abnormal conditions (e.g., pipe failures) on water distribution networks (WDNs) to be quantitatively identified. The method developed utilizes the entropy method from information theory together with pressure driven analysis (PDA), the latest hydraulic analysis approach. The conventional hydraulic approach has problems in determining the locations of pressure gauges, attributable to unrealistic results under abnormal conditions (e.g., negative pressure). The proposed method was applied to two benchmark pipe networks and one real pipe network. The priority order for optimal locations was produced, and the result was compared with the existing approach. With the conventional method, the pressure reduction differences between nodes become so excessive that the resulting distribution is distorted. With the method developed, however, which considers the connectivity of the system and the influence among nodes based on the PDA and entropy method results, pressure gauges can be located more realistically and reasonably.
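The entropy step of such a method can be sketched compactly: given a matrix of pressure drops (one row per candidate gauge node, one column per failure scenario, as produced by a PDA run), normalise each row into a distribution and rank nodes by Shannon entropy, so a node whose response spreads over many scenarios ranks as more informative. This is only an illustrative reading of the entropy ranking, not the paper's PDEM formulation.

```python
import numpy as np

def gauge_priority(pressure_drop):
    """Rank candidate gauge nodes by Shannon entropy of their response.

    pressure_drop[i, j] = drop seen at node i under failure scenario j
    (assumed non-negative, from a pressure-driven analysis).  Rows are
    normalised to probabilities; higher entropy means the node reacts
    to many scenarios rather than just one, so it ranks earlier.
    """
    p = pressure_drop / pressure_drop.sum(axis=1, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        h = -np.nansum(np.where(p > 0, p * np.log(p), 0.0), axis=1)
    return np.argsort(h)[::-1]  # node indices, most informative first

# Hypothetical 3-node, 4-scenario example:
drops = np.array([[1.0, 1.0, 1.0, 1.0],   # reacts evenly to everything
                  [4.0, 0.0, 0.0, 0.0],   # reacts to one scenario only
                  [2.0, 1.0, 1.0, 0.0]])
order = gauge_priority(drops)
```

The uniformly-responding node gets the top rank and the single-scenario node the bottom, matching the intuition that entropy rewards broad sensitivity.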


2017 ◽  
Vol 10 (2) ◽  
pp. 93-98 ◽  
Author(s):  
Mathias Braun ◽  
Olivier Piller ◽  
Jochen Deuerlein ◽  
Iraj Mortazavi

Abstract. The calculation of hydraulic state variables for a network is an important task in managing the distribution of potable water. Over the years the mathematical modeling process has been improved by numerous researchers for use in new computer applications and for more realistic modeling of water distribution networks. In spite of these continuous advances, there are still a number of physical phenomena that may not be tackled correctly by current models. This paper takes a closer look at the two modeling paradigms given by demand- and pressure-driven modeling. The basic equations are introduced and parallels are drawn with the optimization formulations from electrical engineering. These formulations guarantee the existence and uniqueness of the solution. One of the central questions of the French and German research project ResiWater is the investigation of network resilience in the case of extreme events or disasters. Under such extraordinary conditions, where models are pushed beyond their limits, we talk about deficient network models. Examples of deficient networks are given by highly regulated flow, leakage or pipe bursts, and cases where pressure falls below the vapor pressure of water. These examples are presented and analyzed with respect to the solvability and physical correctness of the solution under demand- and pressure-driven models.
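The practical difference between the two paradigms shows up in the nodal outflow: a demand-driven model delivers the full demand regardless of pressure, while a pressure-driven model caps and tapers it. A widely used form of the latter is the Wagner pressure-outflow relationship, sketched here with illustrative threshold values.

```python
def wagner_outflow(p, demand, p_min=0.0, p_ser=20.0):
    """Wagner-style pressure-outflow relationship used in PDA.

    Delivered flow is zero below the minimum pressure p_min, the full
    demand at or above the service pressure p_ser, and follows a
    square-root law in between (pressures in m of head; the threshold
    values here are illustrative defaults).  A demand-driven model
    would return `demand` unconditionally, which is what produces the
    physically impossible negative pressures under deficient
    conditions mentioned above.
    """
    if p <= p_min:
        return 0.0
    if p >= p_ser:
        return demand
    return demand * ((p - p_min) / (p_ser - p_min)) ** 0.5
```

Between the thresholds the delivered flow degrades smoothly instead of the solver being forced to extract the full demand at any cost.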


2014 ◽  
Vol 596 ◽  
pp. 276-279
Author(s):  
Xiao Hui Pan

Graph component labeling, which is a subset of the general graph coloring problem, is a computationally expensive operation in many important applications and simulations. A number of data-parallel algorithmic variations to the component labeling problem are possible, and we explore their use with general-purpose graphics processing units (GPGPUs) and the CUDA GPU programming language. We discuss implementation issues and performance results on CPUs and GPUs using CUDA, and evaluate our system with real-world graphs. We show how to take the different architectural features of the GPU and the host CPUs into account to achieve high performance.
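For context on what the GPU variants are measured against, the usual sequential baseline for component labeling is union-find with path compression. A minimal sketch (not the paper's code) might look like this:

```python
def label_components(n, edges):
    """Union-find component labeling, the typical CPU baseline.

    Each vertex starts as its own root; every edge merges two trees,
    always keeping the smaller root, so the final label of a vertex is
    the smallest vertex id in its component.  Path compression keeps
    the trees shallow.
    """
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[max(ru, rv)] = min(ru, rv)

    return [find(i) for i in range(n)]
```

The pointer-chasing in `find` is exactly the irregular access pattern that makes a direct GPU port inefficient, which is why data-parallel reformulations are needed.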


Author(s):  
Masaki Iwasawa ◽  
Daisuke Namekata ◽  
Keigo Nitadori ◽  
Kentaro Nomura ◽  
Long Wang ◽  
...  

Abstract. We describe algorithms implemented in FDPS (Framework for Developing Particle Simulators) to make efficient use of accelerator hardware such as GPGPUs (general-purpose computing on graphics processing units). We have developed FDPS to make it possible for researchers to develop their own high-performance parallel particle-based simulation programs without spending large amounts of time on parallelization and performance tuning. FDPS provides a high-performance implementation of parallel algorithms for particle-based simulations in a "generic" form, so that researchers can define their own particle data structure and interparticle interaction functions. FDPS compiled with user-supplied data types and interaction functions provides all the necessary functions for parallelization, and researchers can thus write their programs as though they are writing simple non-parallel code. It has previously been possible to use accelerators with FDPS by writing an interaction function that uses the accelerator. However, the efficiency was limited by the latency and bandwidth of communication between the CPU and the accelerator, and also by the mismatch between the available degree of parallelism of the interaction function and that of the hardware. We have modified the interface of the user-provided interaction functions so that accelerators are used more efficiently. We also implemented new techniques which reduce the amount of work on the CPU side and the amount of communication between the CPU and accelerators. We have measured the performance of N-body simulations on a system with an NVIDIA Volta GPGPU using FDPS, and the achieved performance is around 27% of the theoretical peak limit. We have constructed a detailed performance model and found that the current implementation can achieve good performance on systems with much smaller memory and communication bandwidth. Thus, our implementation will be applicable to future generations of accelerator systems.
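The separation FDPS relies on, a generic driver that batches particles and a user-supplied interaction function, can be sketched in a few lines. The names and calling convention below are illustrative, not the FDPS API; the point is that the driver hands whole batches to the interaction, which is the granularity an accelerator can consume efficiently.

```python
import numpy as np

def calc_force_all(pos, mass, interaction, eps=1e-2):
    """Generic driver: accumulate a user-supplied pairwise interaction.

    The interaction receives all target positions at once plus one
    source particle, so each call is a batch that could be dispatched
    to an accelerator.  Hypothetical interface, not FDPS's.
    """
    acc = np.zeros_like(pos)
    for j in range(len(pos)):
        acc += interaction(pos, pos[j], mass[j], eps)
    return acc

def gravity(pi, pj, mj, eps):
    """Softened Newtonian gravity (G = 1), the canonical example of a
    user-defined interaction.  The eps softening also zeroes the
    self-interaction term, since the displacement there is zero."""
    d = pj - pi                                # (N, 3) displacements
    r2 = (d ** 2).sum(axis=1) + eps ** 2       # softened squared distance
    return mj * d / r2[:, None] ** 1.5

# Two equal masses on the x-axis: accelerations are equal and opposite.
pos = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
mass = np.array([1.0, 1.0])
acc = calc_force_all(pos, mass, gravity)
```

Swapping `gravity` for any other batched interaction changes the physics without touching the driver, which is the "generic form" the abstract describes.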


Author(s):  
Attila Bibok ◽  
Roland Fülöp

Pressure management is a widely adopted technique in the toolset of drinking water distribution system operators. It has multiple benefits, such as reducing physical losses in pipe networks with excessive leakage, prolonging the expected lifetime of the pipes, and protecting home appliances from unacceptably high pressure. In some cases even legislative compliance can be the motivation behind pressure management: in Hungary it has been mandatory since 2011 to supply water at the customer's connection at between 1.5 and 6.0 bar. Diaphragm pressure reducing valves are widespread in drinking water distribution networks. However, their sensitivity to gas pocket accumulation in the valve housing makes hydraulic calibration of these pressure managed areas a challenging task for hydraulic modelers and network operators. This is especially true when more than one inlet supplies the same area in order to increase resilience and flow capacity. This paper investigates the hydraulic properties of pressure reduced areas with multiple inlet points. Model calibration using a single valve and minor loss was found insufficient, because under real-life circumstances the additional pressure loss, referenced to the pressure setting, has a non-quadratic relationship with flow rate on the discharge side. This phenomenon can be handled by using a PRV (pressure reducing valve) and a GPV (general purpose valve) in series.
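The PRV + GPV idea can be expressed as a simple head balance: the PRV contributes its setting minus a quadratic minor loss, and the GPV in series adds a loss term with a non-quadratic flow exponent. The coefficients and the exponent below are hypothetical illustrations of that modelling pattern, not calibrated values from the paper.

```python
def downstream_head(q, p_set, k_minor, k_gpv, n=1.5):
    """Head (m) downstream of a PRV + GPV pair in series (illustrative).

    A PRV alone would give p_set - k_minor * q**2; the series GPV adds
    a user-defined loss k_gpv * q**n with n != 2, capturing the
    non-quadratic flow dependence observed in the field.  q is the
    flow rate; p_set, k_minor, k_gpv and n are hypothetical model
    parameters.
    """
    return p_set - k_minor * q ** 2 - k_gpv * q ** n
```

At zero flow the downstream head equals the PRV setting, and it falls off with flow faster than a pure quadratic near the origin because of the lower-exponent GPV term.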


2011 ◽  
Vol 28 (1) ◽  
pp. 15-27 ◽  
Author(s):  
Christopher J. Fluke ◽  
David G. Barnes ◽  
Benjamin R. Barsdell ◽  
Amr H. Hassan

Abstract. General-purpose computing on graphics processing units (GPGPU) is dramatically changing the landscape of high performance computing in astronomy. In this paper, we identify and investigate several key decision areas, with a goal of simplifying the early adoption of GPGPU in astronomy. We consider the merits of OpenCL as an open standard in order to reduce risks associated with coding in a native, vendor-specific programming environment, and present a GPU programming philosophy based on using brute force solutions. We assert that effective use of new GPU-based supercomputing facilities will require a change in approach from astronomers. This will likely include improved programming training, an increased need for software development best practice through the use of profiling and related optimisation tools, and a greater reliance on third-party code libraries. As with any new technology, those willing to take the risks and make the investment of time and effort to become early adopters of GPGPU in astronomy stand to reap great benefits.


2014 ◽  
Vol 4 (3) ◽  
Author(s):  
Branislav Sobota ◽  
Štefan Korečko ◽  
Csaba Szabó ◽  
František Hrozek

Abstract. Ray tracing is one of the computer graphics methods for achieving the most realistic outputs. Its main disadvantage is its high computational demand. This disadvantage can be removed through parallelization, since the ray tracing method is inherently parallel. The solution presented in this article uses GPGPU (general-purpose computing on graphics processing units) technology and predictive evaluation to accelerate the ray tracing method. CUDA C was selected as the GPGPU language and was used to convert the ray-tracer core. The main reason for choosing this language was the use of the Tesla C1060 graphics card. The predictive evaluation of a scene is based on the fact that total computation time increases proportionally with resolution. This evaluation allows selection of the optimal scene division for parallel ray tracing. In tests, the proposed GPGPU solution reached accelerations of up to 28.3x compared to the CPU.
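The predictive-evaluation idea, render a cheap low-resolution probe, extrapolate the full-frame cost linearly with pixel count, then pick a scene division from the prediction, can be sketched as below. The per-tile overhead term and the search range are hypothetical modelling choices, not the article's method.

```python
def predict_time(t_probe, probe_pixels, full_pixels):
    """Predict full-frame render time from a low-resolution probe,
    using the observation that ray tracing time grows roughly
    linearly with the number of pixels."""
    return t_probe * full_pixels / probe_pixels

def best_division(t_probe, probe_pixels, full_pixels, n_workers, overhead):
    """Pick the tile count minimising predicted parallel render time.

    Model (illustrative): k tiles run on min(k, n_workers) workers, and
    each tile pays a constant `overhead` (launch, copies).  More tiles
    help only until that overhead dominates.
    """
    total = predict_time(t_probe, probe_pixels, full_pixels)
    return min(range(1, 4 * n_workers + 1),
               key=lambda k: total / min(k, n_workers) + k * overhead)
```

With negligible overhead the model saturates at one tile per worker; with a large overhead it collapses back to a single tile, which is the trade-off the predictive evaluation is meant to navigate.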

