A fast and simple algorithm for calculating flow accumulation matrices from raster digital elevation models

2019 ◽  
Vol 1 ◽  
pp. 1-2
Author(s):  
Guiyun Zhou ◽  
Wenyan Dong ◽  
Hongqiang Wei

<p><strong>Abstract.</strong> Flow accumulation is an essential input for many hydrological and topographic analyses such as stream channel extraction, stream channel ordering and sub-watershed delineation. Flow accumulation matrices can be derived directly from DEMs, with algorithms that generally have O(NlogN) time complexity (Arge, 2003; Bai et al., 2015). It is more common to derive the flow accumulation matrix from a flow direction matrix. This study focuses on calculating the flow accumulation matrix from the flow direction matrix derived using the single-flow D8 method (Barnes et al., 2014; Garbrecht &amp; Martz, 1997; Nardi et al., 2008; O'Callaghan &amp; Mark, 1984). We give an overview of existing algorithms for flow accumulation calculation that have O(N) time complexity: the algorithms based on the number of input drainage paths (NIDP) (Wang et al., 2011; Jiang et al., 2013), the algorithm based on basin tree indices (Su et al., 2015), and the recursive algorithm (Choi, 2012; Freeman, 1991).</p><p>We propose a fast and simple algorithm to calculate the flow accumulation matrix. Compared with the existing algorithms that have O(N) time complexity, our algorithm runs faster and generally requires less memory, and it is simple to implement. We define three types of cells within a flow direction matrix: source cells, interior cells and intersection cells. A source cell has no neighboring cells that drain to it and its NIDP value is zero. An interior cell has exactly one neighboring cell that drains to it and its NIDP value is one. An intersection cell has more than one neighboring cell that drains to it and its NIDP value is greater than one. Our algorithm first calculates the NIDP matrix from the flow direction matrix and initializes the flow accumulation matrix with the value of one.
The algorithm then traverses the flow direction matrix row by row and column by column. When a source cell <i>c</i> is encountered, the algorithm traces the downstream cells of <i>c</i> until it encounters an intersection cell <i>i</i>. During the tracing, the accumulation value of a cell is added to the accumulation value of its immediate downstream cell. Because an interior cell has only one neighboring cell that drains to it, its final accumulation value is obtained once the tracing passes through it. The accumulation value of the intersection cell <i>i</i> is updated from this drainage path, but because <i>i</i> has other unvisited neighboring cells that drain to it, its final accumulation value cannot be obtained in this round of tracing. The algorithm therefore decreases the NIDP value of <i>i</i> by one and stops the trace. Cell <i>i</i> is visited again when the other drainage paths that pass through it are traced. Once all of these drainage paths have been traced, cell <i>i</i> is treated as an interior cell: its final accumulation value is obtained correctly and the last tracing process continues through it. A worked example of the proposed algorithm is shown in Figure 1.</p><p>The five flow accumulation algorithms with O(N) time complexity, namely Wang’s algorithm, Jiang’s algorithm, the BTI-based algorithm, the recursive algorithm and our proposed algorithm, are implemented in C++. The 3-m LiDAR-based DEMs of thirty counties in the state of Minnesota, USA, are downloaded from the FTP site operated by the Minnesota Geospatial Information Office. The first 30 counties in Minnesota in alphabetical order are chosen for the experiments to avoid selection bias. We use the algorithm proposed by Wang and Liu (2006) to fill the depressions and derive the flow direction matrices for all tested counties.
The running times on the Windows system are listed in Figure 2. The average running times per 100 million cells are 14.42 seconds for Wang’s algorithm, 15.90 seconds for Jiang’s algorithm, 18.95 seconds for the BTI-based algorithm, 10.87 seconds for the recursive algorithm, and 5.26 seconds for our proposed algorithm. Our algorithm runs the fastest for all tested DEMs. The speed-up of our proposed algorithm over the second fastest algorithm is about 51%.</p>
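The source-to-intersection tracing described in the abstract can be sketched in Python. The D8 encoding (ESRI-style codes 1–128, 0 for no outflow) and the grid handling are assumptions, since the abstract does not fix them:

```python
# A minimal sketch of the NIDP-based tracing algorithm described above,
# assuming ESRI-style D8 direction codes (1=E, 2=SE, 4=S, 8=SW,
# 16=W, 32=NW, 64=N, 128=NE; 0 = no outflow).

D8 = {1: (0, 1), 2: (1, 1), 4: (1, 0), 8: (1, -1),
      16: (0, -1), 32: (-1, -1), 64: (-1, 0), 128: (-1, 1)}

def flow_accumulation(fdr):
    """fdr: 2-D list of D8 codes. Returns the flow accumulation matrix."""
    rows, cols = len(fdr), len(fdr[0])

    def downstream(r, c):
        code = fdr[r][c]
        if code == 0:
            return None
        dr, dc = D8[code]
        nr, nc = r + dr, c + dc
        return (nr, nc) if 0 <= nr < rows and 0 <= nc < cols else None

    # Step 1: NIDP matrix -- number of neighbours draining into each cell.
    nidp = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            d = downstream(r, c)
            if d is not None:
                nidp[d[0]][d[1]] += 1

    # Step 2: initialise accumulation to one and trace from every source cell.
    acc = [[1] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if nidp[r][c] != 0:              # only source cells start a trace
                continue
            cur = (r, c)
            while True:
                d = downstream(*cur)
                if d is None:                # reached an outlet
                    break
                acc[d[0]][d[1]] += acc[cur[0]][cur[1]]
                if nidp[d[0]][d[1]] > 1:     # unresolved intersection:
                    nidp[d[0]][d[1]] -= 1    # defer until its last path arrives
                    break
                cur = d                      # interior (or resolved) cell
    return acc
```

Each cell is written a constant number of times, which is what gives the O(N) behaviour.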

2015 ◽  
Vol 02 (04) ◽  
pp. 1550047
Author(s):  
Dennis G. Llemit

An alternative and simple algorithm for valuing discrete barrier options is presented. The algorithm computes exactly the same price as the Cox–Ross–Rubinstein (CRR) model. As opposed to other pricing methodologies, this recursive algorithm utilizes only the terminal nodes of the binomial tree while still capturing the intrinsic knock-in or knock-out feature of barrier options. In this paper, we apply the algorithm to compute the price of an Up-and-Out Put (UOP) barrier option and compare the results with those obtained from the CRR model. We then determine the time complexity of the algorithm and show that it is [Formula: see text].
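For context, a minimal sketch of the standard CRR reference model the proposed terminal-node algorithm is validated against: backward induction over the full binomial tree for an up-and-out put, with the knock-out barrier checked at every node. All parameter choices are illustrative; the paper's own recursive algorithm is not reproduced here.

```python
import math

def crr_up_and_out_put(S0, K, H, r, sigma, T, n):
    """Full-tree CRR binomial price of a discretely monitored up-and-out put,
    with the knock-out barrier H applied at every tree node (illustrative
    reference implementation, not the paper's terminal-node algorithm)."""
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    p = (math.exp(r * dt) - d) / (u - d)      # risk-neutral up probability
    disc = math.exp(-r * dt)

    # Terminal payoffs; nodes at or above the barrier are knocked out.
    v = []
    for j in range(n + 1):
        S = S0 * u**j * d**(n - j)
        v.append(0.0 if S >= H else max(K - S, 0.0))

    # Backward induction, re-applying the knock-out condition at each node.
    for step in range(n - 1, -1, -1):
        nv = []
        for j in range(step + 1):
            S = S0 * u**j * d**(step - j)
            nv.append(0.0 if S >= H else disc * (p * v[j + 1] + (1 - p) * v[j]))
        v = nv
    return v[0]
```

With the barrier far above the spot the knock-out never triggers, so the value collapses to the vanilla CRR put, which is the consistency check the abstract appeals to.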


2015 ◽  
Vol 6 (1) ◽  
pp. 35-46 ◽  
Author(s):  
Yong Wang

The traveling salesman problem (TSP) is a classic combinatorial optimization problem. The time complexity of exact algorithms is generally an exponential function of the size of the TSP instance. This work gives an approximate algorithm with a four-vertex-three-line inequality for the triangle TSP. The time complexity is O(n²) and the algorithm generates an approximation less than 2 times the optimal solution. The paper designs a simple algorithm based on the inequality and compares it with the double nearest neighbor algorithm. The experimental results illustrate that the algorithm finds better approximations than the double nearest neighbor algorithm for most TSP instances.
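The four-vertex-three-line inequality itself is not spelled out in the abstract, so as a point of reference here is the classical O(n²) nearest-neighbour tour construction that underlies baselines like the double nearest neighbor algorithm (a sketch, not the paper's method):

```python
import math

def nearest_neighbor_tour(points, start=0):
    """Classical nearest-neighbour construction for the metric (triangle) TSP:
    repeatedly visit the closest unvisited city. O(n^2) time."""
    n = len(points)
    dist = lambda a, b: math.dist(points[a], points[b])
    tour, unvisited = [start], set(range(n)) - {start}
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda j: dist(last, j))
        tour.append(nxt)
        unvisited.remove(nxt)
    length = sum(dist(tour[i], tour[i + 1]) for i in range(n - 1))
    length += dist(tour[-1], tour[0])        # close the cycle
    return tour, length
```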


2021 ◽  
Author(s):  
Enrico Bonanno ◽  
Günter Blöschl ◽  
Julian Klaus

<p>Groundwater dynamics and flow directions in the near-stream zone depend on groundwater gradients, are highly dynamic in space and time, and reflect the flowpaths between stream channel and groundwater. A wide variety of studies have addressed groundwater flow and changes of flow direction in the near-stream domain, but they have obtained contrasting results on the drivers and hydrologic conditions of water exchange between stream channel and near-stream groundwater. Here, we investigate groundwater dynamics and flow direction in the stream corridor through a spatially dense groundwater monitoring network over a period of 18 months, addressing the following research questions:</p><ul><li>How and why does groundwater table response vary between precipitation events across different hydrological states in the near-stream domain?</li> <li>How and why does groundwater flow direction in the near-stream domain change across different hydrological conditions?</li> </ul><p>Our results show a large spatio-temporal variability in groundwater table dynamics. During the progression from dry to wet hydrologic conditions, we observe an increase in the precipitation depth required to trigger a groundwater response and an increase in the timing of the groundwater response (i.e. the lag time between the onset of a precipitation event and groundwater rise). This behaviour can be explained by the subsurface structure, with solum, subsolum, and fractured bedrock showing decreasing storage capacity with depth. A Spearman rank (r<sub>s</sub>) correlation analysis reveals a lack of significant correlation between the observed minimum precipitation depth needed to trigger a groundwater response and the local thickness of the subsurface layer, as well as the distance from and the elevation above the stream channel.
However, both the increase in groundwater level and the timing of the groundwater response are positively correlated with the thickness of the solum and subsolum layers and with the distance and the elevation from the stream channel, but only during wet conditions. These results suggest that during wet conditions the spatial differences in groundwater dynamics are mostly controlled by the regolith depth above the fractured bedrock. During dry conditions, in contrast, local changes in the storage capacities of the fractured bedrock or the presence of preferential flowpaths in the fractured schist matrix could control the spatially heterogeneous timing of groundwater response. In the winter months, the groundwater flow direction points mostly toward the stream channel even many days after an event, suggesting that the groundwater flow from upslope locations controls the near-stream groundwater movement toward the stream channel during wet hydrologic conditions. However, during dry-out or long recessions, the groundwater table at the footslopes decreases to the stream level or below. In these conditions, the groundwater fall lines point toward the footslopes both in the summer and in the winter and in different sections of the stream reach. This study highlights the effect of different initial conditions, precipitation characteristics, streamflow, and potential water inflow from hillslopes on groundwater dynamics and groundwater surface-water exchange in the near-stream domain.</p>


2021 ◽  
Vol 55 (5) ◽  
pp. 1136-1150
Author(s):  
Giovanni Righini

The single source Weber problem with limited distances (SSWPLD) is a continuous optimization problem in location theory. The SSWPLD algorithms proposed so far are based on the enumeration of all regions of [Formula: see text] defined by a given set of n intersecting circumferences. Early algorithms require [Formula: see text] time for the enumeration, but they were recently shown to be incorrect in the case of degenerate intersections, that is, when three or more circumferences pass through the same intersection point. This problem was fixed by a modified enumeration algorithm with complexity [Formula: see text], based on the construction of neighborhoods of degenerate intersection points. In this paper, it is shown that the complexity of correctly dealing with degenerate intersections can be reduced to [Formula: see text], so that existing enumeration algorithms can be fixed without increasing their [Formula: see text] time complexity, which is due to some preliminary computations unrelated to intersection degeneracy. Furthermore, a new algorithm for enumerating all regions to solve the SSWPLD is described: its worst-case time complexity is [Formula: see text]. The new algorithm also guarantees that the regions are enumerated only once.
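The degeneracy issue can be illustrated with a small sketch that enumerates the pairwise intersection points of the circumferences and flags points where three or more of them coincide. Coordinate rounding stands in for the paper's neighborhood construction, which is not detailed in the abstract:

```python
import math
from collections import defaultdict

def circle_intersections(circles, eps=1e-9):
    """Enumerate pairwise intersection points of circles ((cx, cy, r) triples)
    and flag degenerate points where three or more circumferences meet.
    Coincident points are grouped by rounding coordinates (illustrative only)."""
    points = defaultdict(set)   # rounded point -> indices of circles through it
    for i in range(len(circles)):
        for j in range(i + 1, len(circles)):
            (x1, y1, r1), (x2, y2, r2) = circles[i], circles[j]
            d = math.hypot(x2 - x1, y2 - y1)
            if d < eps or d > r1 + r2 + eps or d < abs(r1 - r2) - eps:
                continue   # concentric, separate, or contained: no intersection
            a = (r1**2 - r2**2 + d**2) / (2 * d)   # distance from c1 along axis
            h = math.sqrt(max(r1**2 - a**2, 0.0))  # half-chord height
            mx, my = x1 + a * (x2 - x1) / d, y1 + a * (y2 - y1) / d
            for s in ((1,) if h < eps else (1, -1)):   # tangent: one point
                px = mx + s * h * (y2 - y1) / d
                py = my - s * h * (x2 - x1) / d
                points[(round(px, 6), round(py, 6))].update((i, j))
    degenerate = {p: c for p, c in points.items() if len(c) >= 3}
    return points, degenerate
```

With n circles this is O(n²) intersection points, matching the enumeration bound the abstract starts from.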


Author(s):  
Theodore Katsanis ◽  
W. D. McNally

This paper describes Fortran programs that give the solution to the two-dimensional, subsonic, nonviscous flow problem on a blade-to-blade surface of revolution of a turbomachine. Flow may be axial, radial, or mixed. There may be a change in stream channel thickness in the through-flow direction. Either single, tandem, or slotted blades may be handled as well as blade rows with splitter vanes. Also, small regions may be magnified to give more detail where desired, such as around a leading or trailing edge or through a slot. The method is based on a finite difference solution of the stream function equations. Numerical examples are shown to illustrate the type of blades which can be analyzed, and to show results which can be obtained. Results are compared with experimental data.
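As a generic illustration of the finite-difference approach the programs are based on (not the papers' exact scheme, which handles compressible blade-to-blade flow), here is a Gauss-Seidel relaxation of the Laplace equation for a stream function on a uniform grid with fixed boundary values:

```python
def solve_stream_function(psi, mask, iters=5000, tol=1e-10):
    """Gauss-Seidel relaxation of the Laplace equation for a stream function
    on a uniform 2-D grid. `psi` holds initial/boundary values; only cells
    where `mask` is True are updated. A minimal sketch under simplifying
    assumptions (incompressible, uniform grid, unit channel thickness)."""
    rows, cols = len(psi), len(psi[0])
    for _ in range(iters):
        delta = 0.0
        for r in range(1, rows - 1):
            for c in range(1, cols - 1):
                if not mask[r][c]:
                    continue
                # 5-point finite-difference stencil for the Laplacian = 0.
                new = 0.25 * (psi[r - 1][c] + psi[r + 1][c]
                              + psi[r][c - 1] + psi[r][c + 1])
                delta = max(delta, abs(new - psi[r][c]))
                psi[r][c] = new
        if delta < tol:   # converged
            break
    return psi
```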


2007 ◽  
Vol 05 (02a) ◽  
pp. 201-250 ◽  
Author(s):  
S. TEWARI ◽  
S. M. BHANDARKAR ◽  
J. ARNOLD

A multi-locus likelihood of a genetic map is computed based on a mathematical model of chromatid exchange in meiosis that accounts for any type of bivalent configuration in a genetic interval in any specified order of genetic markers. The computational problem is to calculate the likelihood (L) and maximize L by choosing an ordering of genetic markers on the map and the recombination distances between markers. This maximum likelihood estimate (MLE) could be found either with a straightforward algorithm or with the proposed recursive linking algorithm, which implements the likelihood computation through an iterative procedure called Expectation Maximization (EM). The time complexity of the straightforward algorithm is exponential in the number of genetic markers, and implementation of the model with a straightforward algorithm for more than seven genetic markers is not feasible, thus motivating the critical importance of the proposed recursive linking algorithm. The recursive linking algorithm decomposes the pool of genetic markers into segments and renders the model implementable for hundreds of genetic markers. The recursive algorithm is shown to reduce the order of time complexity from exponential to linear in the number of markers. The improvement in time complexity is shown theoretically by a worst-case analysis of the algorithm and supported by run-time results using data on linkage group II of the fungal genome Neurospora crassa.


Author(s):  
Fukang Liu ◽  
Takanori Isobe ◽  
Willi Meier ◽  
Kosei Sakamoto

AEGIS-128 and Tiaoxin-346 (Tiaoxin for short) are two AES-based primitives submitted to the CAESAR competition. Among them, AEGIS-128 has been selected in the final portfolio for high-performance applications, while Tiaoxin is a third-round candidate. Although both primitives adopt a stream cipher based design, they are quite different from the well-known bit-oriented stream ciphers like Trivium and the Grain family. Their common feature consists in the round update function, where the state is divided into several 128-bit words and each word has the option to pass through an AES round or not. During the 6-year CAESAR competition, it is surprising that for both primitives there is no third-party cryptanalysis of the initialization phase. Due to the similarities in both primitives, we are motivated to investigate whether there is a common way to evaluate the security of their initialization phases. Our technical contribution is to write the expressions of the internal states in terms of the nonce and the key by treating a 128-bit word as a unit and then carefully study how to simplify these expressions by adding proper conditions. As a result, we find that there are several groups of weak keys with 2^96 keys each in 5-round AEGIS-128 and 8-round Tiaoxin, which allows us to construct integral distinguishers with time complexity 2^32 and data complexity 2^32. Based on the distinguisher, the time complexity to recover the weak key is 2^72 for 5-round AEGIS-128. However, the weak key recovery attack on 8-round Tiaoxin will require the usage of a weak constant occurring with probability 2^-32. All the attacks reach half of the total number of initialization rounds. We expect that this work can advance the understanding of the designs similar to AEGIS and Tiaoxin.


2012 ◽  
Vol 241-244 ◽  
pp. 2845-2848 ◽  
Author(s):  
Hai Yan Zhou

The k-means clustering algorithm is simple and fast and has an intuitive geometric meaning; it has been widely applied in pattern recognition, image processing and computer vision, with satisfactory results. However, the initial cluster centers must be determined before the k-means algorithm is executed, and the choice of initial cluster centers has a direct impact on the final clustering results. A selection algorithm is proposed that determines the initial cluster centers of the k-means algorithm based on the graph nodes of highest degree. Compared with other selection algorithms for initial cluster centers, the method has a simple algorithmic idea and low time complexity, and it performs significantly better than other clustering algorithms.
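To illustrate why the choice of initial centers matters, here is a plain Lloyd-iteration k-means that takes caller-supplied initial centers; the proposed degree-based selection itself is not specified in enough detail in the abstract to reproduce, so the centers below are simply passed in:

```python
import math

def kmeans(points, centers, iters=100):
    """Plain Lloyd k-means from given initial centers. The final clustering
    depends directly on `centers`, which is the sensitivity the proposed
    initialization method targets. Points and centers are coordinate tuples."""
    for _ in range(iters):
        # Assign each point to its nearest current center.
        clusters = [[] for _ in centers]
        for p in points:
            i = min(range(len(centers)), key=lambda i: math.dist(p, centers[i]))
            clusters[i].append(p)
        # Recompute centers as cluster means (keep old center if cluster empty).
        new = [tuple(sum(x) / len(cl) for x in zip(*cl)) if cl else centers[i]
               for i, cl in enumerate(clusters)]
        if new == centers:   # converged
            break
        centers = new
    return centers, clusters
```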


Author(s):  
Simant Dube

Abstract. A relationship between fractal geometry and the analysis of recursive (divide-and-conquer) algorithms is investigated. It is shown that the dynamic structure of a recursive algorithm, which might call other algorithms in a mutually recursive fashion, can be geometrically captured as a fractal (self-similar) image. This fractal image is defined as the attractor of a mutually recursive function system. It then turns out that the Hausdorff–Besicovitch dimension D of such an image is precisely the exponent in the time complexity of the algorithm being modelled. That is, if the Hausdorff D-dimensional measure of the image is finite then it serves as the constant of proportionality and the time complexity is of the form Θ(n^D); otherwise the time complexity is of the form Θ(n^D log^p n), where p is an easily determined constant.
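For the simplest special case, a recurrence T(n) = a·T(n/b) + O(1), the exponent D can be computed directly; the tiny sketch below covers only this case, not the general mutually recursive function systems of the abstract:

```python
import math

def divide_and_conquer_exponent(a, b):
    """For a recursive algorithm making `a` subcalls on inputs of size n/b
    with constant extra work, the running time is Theta(n^D) with
    D = log(a)/log(b) -- the same D as the Hausdorff-Besicovitch dimension
    of the self-similar image modelling the recursion (single-recurrence case)."""
    return math.log(a) / math.log(b)
```

For example, Karatsuba-style recursion (a = 3, b = 2) gives D = log₂3 ≈ 1.585.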


2008 ◽  
Vol 54 (No. 6) ◽  
pp. 255-261 ◽  
Author(s):  
J. Kumhálová ◽  
Š. Matějková ◽  
M. Fifernová ◽  
J. Lipavský ◽  
F. Kumhála

The main aim of this study was to determine the dependence of yield and selected soil properties on the topography of the experimental field, using topographical data (elevation, slope and flow accumulation). The topography and yield data were obtained from a yield monitor on a combine harvester, and soil property data were taken from sampling points of our experimental field. Initially, the topographical parameters of elevation and slope were estimated and the Digital Elevation Model (DEM) grid was created. On the basis of the field slope, the flow direction model and the flow accumulation model were created. The flow accumulation model, elevation and slope were then compared with the yield and the content of nitrogen and organic carbon in soil in the years 2004, 2005 and 2006, in relation to the sum of precipitation and temperatures in the crop growing seasons of these years. A correlation analysis of all the previously mentioned elements was calculated, and the statistical evaluation showed a significant dependence of yield and soil nutrient content on flow accumulation. For the wettest evaluated year a correlation coefficient of 0.25 was calculated; for the driest year it was 0.62.
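A minimal implementation of the correlation coefficient used in such an analysis can be sketched as follows; the study's own yield and flow accumulation data are not shown, so the inputs in the usage below are purely illustrative:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples,
    e.g. per-cell yield vs. flow accumulation (illustrative sketch)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```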

