Capacity Expansion in Stochastic Flow Networks

1992 ◽  
Vol 6 (1) ◽  
pp. 99-118 ◽  
Author(s):  
Christos Alexopoulos ◽  
George S. Fishman

Sensitivity analysis represents an important aspect of network flow design problems. For example, what is the incremental increase in system flow resulting from increasing the diameters of specified pipes in a water distribution network? Although methods exist for solving this problem in the deterministic case, no comparable methodology has been available when the network's arc capacities are subject to random variation. This paper provides this methodology by describing a Monte Carlo sampling plan that allows one to conduct a sensitivity analysis for a variable upper bound on the flow capacity of a specified arc. The proposed plan has two notable features. It permits estimation of the probabilities of a feasible flow for many values of the upper bound on the arc capacity from a single data set generated by the Monte Carlo method at a single value of this upper bound. Also, the resulting estimators have considerably smaller variances than crude Monte Carlo sampling would produce in the same setting. The success of the technique follows from the use of lower and upper bounds on each probability of interest, where the bounds are generated from an established method of decomposing the capacity state space.
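The single-data-set feature described in this abstract can be illustrated with a minimal sketch. The toy network below (two parallel arcs, uniform random capacities, and the demand value are all hypothetical, not from the paper, and the paper's variance-reducing decomposition bounds are not reproduced) shows how one set of Monte Carlo samples is reused to estimate the feasibility probability for many values of the arc-capacity upper bound b:

```python
import random

random.seed(42)

# Hypothetical toy network: two parallel arcs s -> t.
# Both arc capacities are random; arc 2 is additionally capped at a
# design upper bound b, the quantity under sensitivity analysis.
DEMAND = 10.0
N = 20000

# One Monte Carlo data set, generated once and reused for every b.
samples = [(random.uniform(0, 8), random.uniform(0, 8)) for _ in range(N)]

def feasibility_prob(b):
    """Crude estimate of P(max flow >= DEMAND) when arc 2 is capped
    at b, computed from the single shared sample set."""
    hits = sum(1 for c1, c2 in samples if c1 + min(c2, b) >= DEMAND)
    return hits / N

for b in (2.0, 4.0, 6.0, 8.0):
    print(f"b = {b}: P(feasible) ~ {feasibility_prob(b):.3f}")
```

The estimates are monotone in b because relaxing the cap can only enlarge the feasible set, which is the qualitative behavior a sensitivity analysis of this kind examines.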

2005 ◽  
Vol 5 (2) ◽  
pp. 31-38
Author(s):  
A. Asakura ◽  
A. Koizumi ◽  
O. Odanagi ◽  
H. Watanabe ◽  
T. Inakazu

In Japan, most water distribution networks were constructed during the 1960s and 1970s. Because these pipelines have been in service for a long period, rehabilitation is necessary to maintain the water supply. Although investment in pipeline rehabilitation has to be planned in terms of cost-effectiveness, no standard method has been established, because in the past pipelines were replaced on an emergency, ad hoc basis. This paper proposes a method for planning the maintenance of a water distribution network optimally under a fixed budget. First, a method to quantify the benefits of pipeline rehabilitation is examined. Second, two models, one using integer programming and one using Monte Carlo simulation, are formulated to maximize the benefits of pipeline rehabilitation under a limited budget, and both are applied to a model case and a case study. Based on these studies, it is concluded that the Monte Carlo simulation model for calculating the appropriate investment in pipeline rehabilitation planning is both convenient and practical.
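The integer-programming model mentioned above is, at its core, a budget-constrained 0/1 selection problem. A minimal sketch (pipe names, costs, and benefit scores are hypothetical, and exhaustive enumeration stands in for a real IP solver) shows the shape of such a model:

```python
from itertools import combinations

# Hypothetical pipe segments: (name, rehabilitation cost, benefit score).
pipes = [("P1", 40, 90), ("P2", 30, 60), ("P3", 50, 100),
         ("P4", 20, 45), ("P5", 35, 70)]
BUDGET = 100

best_plan, best_benefit = (), 0
# Exhaustive 0/1 enumeration -- a stand-in for the integer program:
# maximize total benefit subject to total cost <= BUDGET.
for r in range(len(pipes) + 1):
    for plan in combinations(pipes, r):
        cost = sum(p[1] for p in plan)
        benefit = sum(p[2] for p in plan)
        if cost <= BUDGET and benefit > best_benefit:
            best_plan, best_benefit = plan, benefit

print([p[0] for p in best_plan], best_benefit)
```

For realistic network sizes the enumeration is replaced by an IP solver or, as the paper concludes, by Monte Carlo sampling over candidate plans.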


Complexity ◽  
2019 ◽  
Vol 2019 ◽  
pp. 1-16 ◽  
Author(s):  
Jonatan Zischg ◽  
Christopher Klinkhamer ◽  
Xianyuan Zhan ◽  
P. Suresh C. Rao ◽  
Robert Sitzenfrei

In this paper, we used complex network analysis approaches to investigate topological coevolution over a century for three different urban infrastructure networks. We applied network analyses to a unique time-stamped network data set of an Alpine case study, representing the historical development of the town and its infrastructure over the past 108 years. The analyzed infrastructure includes the water distribution network (WDN), the urban drainage network (UDN), and the road network (RN). We use the dual representation of the networks, obtained with the Hierarchical Intersection Continuity Negotiation (HICN) approach, with pipes or roads as nodes and their intersections as edges. The functional topologies of the networks are analyzed based on the dual graphs, providing insights beyond a conventional (primal mapping) graph analysis. We observe that the RN, WDN, and UDN all exhibit heavy-tailed node degree distributions P(k) with high dispersion around the mean. In 50 percent of the investigated networks, P(k) can be approximated by truncated Pareto (power-law) functions, as is characteristic of scale-free networks. Structural differences between the three evolving network types, resulting from different functionalities and system states, are reflected in P(k) and other complex network metrics. Small-world tendencies are identified by comparing the networks with their random and regular lattice network equivalents. Furthermore, we show the remapping of the dual network characteristics to the spatial map and the identification of criticalities among different network types through co-location analysis, and we discuss possibilities for further applications.
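The node degree distribution P(k) central to this analysis is straightforward to compute from a dual-graph edge list. A minimal sketch (the tiny edge list is hypothetical, not the Alpine data set, and no HICN construction is performed):

```python
from collections import Counter

# Hypothetical dual-graph edge list: nodes are pipes/roads, edges are
# intersections between them (the dual representation described above).
edges = [("p1", "p2"), ("p1", "p3"), ("p1", "p4"), ("p1", "p5"),
         ("p2", "p3"), ("p3", "p4"), ("p5", "p6"), ("p6", "p7")]

degree = Counter()
for u, v in edges:
    degree[u] += 1
    degree[v] += 1

# Empirical degree distribution P(k): fraction of nodes with degree k.
n = len(degree)
pk = {k: c / n for k, c in sorted(Counter(degree.values()).items())}
print(pk)
```

On a real infrastructure dual graph one would then fit a truncated power-law to the tail of P(k) and compare against random and lattice equivalents, as the paper does.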


2016 ◽  
Vol 72 (7) ◽  
pp. III_457-III_465
Author(s):  
Takaharu KUNIZANE ◽  
Toyono INAKAZU ◽  
Akira KOIZUMI ◽  
Yasuhiro ARAI ◽  
Kiyokazu SATO ◽  
...  

Author(s):  
A. T. Ernst

Abstract This paper deals with a class of network optimization problems in which the flow is a function of time rather than static as in the classical network flow problem, and storage is permitted at the nodes. A solution method involving discretization will be presented as an application of the ASG algorithm. We furnish a proof that the discretized solution converges to the exact continuous solution. We also apply the method to a water distribution network where we minimize the cost of pumping water to meet supply and demand, subject to both linear and nonlinear constraints.


2016 ◽  
Vol 73 (2) ◽  
pp. 709-728 ◽  
Author(s):  
Ivy Tan ◽  
Trude Storelvmo

Abstract The influence of six CAM5.1 cloud microphysical parameters on the variance of phase partitioning in mixed-phase clouds is determined by application of a variance-based sensitivity analysis. The sensitivity analysis is based on a generalized linear model that assumes a polynomial relationship between the six parameters and the two-way interactions between them. The parameters, bounded such that they yield realistic cloud phase values, were selected by adopting a quasi–Monte Carlo sampling approach. The sensitivity analysis is applied globally, to 20°-wide latitude bands, and to the Southern Ocean at various mixed-phase cloud isotherms. It reveals that the Wegener–Bergeron–Findeisen (WBF) time scale for the growth of ice crystals single-handedly accounts for the vast majority of the variance in cloud phase partitioning in mixed-phase clouds, while its interaction with the WBF time scale for the growth of snowflakes plays a secondary role. The fraction of dust aerosols active as ice nuclei in latitude bands, the parameter related to the ice crystal fall speed, and their interactions with the WBF time scale for ice are also significant. All other investigated parameters and their interactions with each other are negligible (<3%). Further analysis comparing three of the quasi–Monte Carlo–sampled simulations with spaceborne lidar observations by CALIOP suggests that the WBF process in CAM5.1 is currently parameterized such that it occurs too rapidly, owing to a failure to account for subgrid-scale variability of liquid and ice partitioning in mixed-phase clouds.


2021 ◽  
Author(s):  
Y.C. Huang ◽  
W.L. Yang

Abstract This letter presents a novel approach for the efficient deployment of top pressure sensors in a water distribution network. Flow-tracking analysis based on the head-loss coverage ratio identifies the smallest number of top sensors in network topologies. The subsequent sequence of top-sensor plans can then be determined by a simple greedy algorithm. A regular hydraulic model with 33 sensor nodes is used to validate the speed and effectiveness of the flow-tracking method. The top set of 5 sensor nodes selected by the head-loss coverage ratio Hcr in the flow-tracking analysis agrees exactly with the top set of 5 sensitive nodes selected by the objective function f(Xk) through sensitivity analysis. A linear relationship between the objective function f(Xk) and the head-loss coverage ratio Hcr of the top sensor nodes indicates a highly accurate mapping from the flow-tracking method to the sensitivity analysis. The time complexity of searching the top sensor node set by flow-tracking analysis is O(m⋅n). The average pressure error can be kept as low as 0.08 m with the top two sensors in the sensor layout. When all top sensors in the deployment plan are used, a minimum error of 0.04 m is achieved. Flow-tracking analysis offers low time complexity and an accurate top-sensor strategy, making it an efficient new solution for pressure sensor deployment in the associated flow network.
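The "simple greedy algorithm" for building the sequence of top-sensor plans can be sketched as a greedy set-cover heuristic. The coverage sets below are hypothetical stand-ins for the paper's head-loss coverage metric Hcr, not its actual data:

```python
# Hypothetical coverage sets: for each candidate sensor node, the set
# of pipes whose head loss it can observe (stand-in for Hcr coverage).
coverage = {
    "n1": {"a", "b", "c"},
    "n2": {"c", "d"},
    "n3": {"d", "e", "f", "g"},
    "n4": {"a", "g"},
    "n5": {"h"},
}

def greedy_sensors(coverage, k):
    """Pick k sensors, each step taking the node that adds the most
    newly covered pipes (a standard greedy set-cover heuristic)."""
    chosen, covered = [], set()
    for _ in range(k):
        node = max(coverage, key=lambda n: len(coverage[n] - covered))
        chosen.append(node)
        covered |= coverage[node]
    return chosen, covered

plan, covered = greedy_sensors(coverage, 3)
print(plan, sorted(covered))
```

Each greedy pass scans all candidate nodes once, which is where an O(m⋅n)-style complexity for m candidates and n selection steps comes from.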


1991 ◽  
Vol 5 (2) ◽  
pp. 185-213 ◽  
Author(s):  
George S. Fishman

Sensitivity analysis is an integral part of virtually every study of system reliability. This paper describes a Monte Carlo sampling plan for estimating the sensitivity of system reliability to changes in component reliabilities. The unique feature of the approach is that sample data collected on K independent replications using a specified component reliability vector p are transformed by an importance function into unbiased estimates of system reliability for each component reliability vector q in a set of vectors Q. Moreover, this importance function, together with available prior information about the given system, enables one to produce estimates that require considerably less computing time to achieve a specified accuracy for all |Q| reliability estimates than a set of |Q| crude Monte Carlo sampling experiments would require to estimate each of the |Q| system reliabilities separately. As the number of components in the system grows, the relative efficiency continues to favor the proposed method. The paper shows the intimate relationship between the proposal and the method of control variates. Next, it relates the proposal to the estimation of coefficients in a reliability polynomial and indicates how this concept can be used to improve computing efficiency in certain cases. It also describes a procedure that determines the p vector, to be used in the sampling experiment, that minimizes a bound on the worst-case variance. The paper also derives individual and simultaneous confidence intervals that hold for every fixed sample size K. An example illustrates how the proposal works in an s-t connectedness problem.
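The reweighting idea described in this abstract can be illustrated with a standard likelihood-ratio estimator. The sketch below uses a toy 2-of-3 system (the structure function, reliabilities, and sample size are hypothetical, and the paper's control-variate and worst-case-variance refinements are not reproduced): samples drawn once at p yield unbiased reliability estimates at any q.

```python
import random

random.seed(1)

# Toy 2-of-3 system: works if at least two components work.
def phi(x):
    return 1 if sum(x) >= 2 else 0

p = [0.8, 0.8, 0.8]   # component reliabilities used for sampling
K = 50000

# K independent replications of the component state vector, drawn at p.
samples = [[1 if random.random() < pi else 0 for pi in p]
           for _ in range(K)]

def reliability(q):
    """Unbiased estimate of system reliability at q, reusing the
    samples drawn at p via the likelihood-ratio (importance) weight."""
    total = 0.0
    for x in samples:
        w = 1.0
        for xi, pi, qi in zip(x, p, q):
            w *= (qi / pi) if xi else ((1 - qi) / (1 - pi))
        total += phi(x) * w
    return total / K

for q in ([0.8] * 3, [0.9] * 3, [0.7] * 3):
    print(q, round(reliability(q), 3))
```

For this system the exact reliability at common component reliability r is 3r² − 2r³, so the estimates at q = p should land near 0.896, and a single sample set serves every q in the query set Q.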

