Physically based metrics to evaluate the hydraulic distance between the drainage network and a DEM cell

Author(s):  
Giovanni Menduni ◽  
Daniele Bignami ◽  
Carlo De Michele ◽  
Michele Del Vecchio ◽  
Aravind Harikumar

The distance between the drainage network and a generic pixel of a DEM is an important indicator for different categories of geomorphologic and hydrologic processes, particularly as far as the analysis of flood susceptibility is concerned (Tehrany, Pradhan, & Jebur, 2014).

On the DEM domain D ⊂ ℝ³ and its subset given by the hydraulic network N ⊂ D, the distance is a function d: N × D → ℝ. The problem is far from uniquely determined, particularly in the field of flood susceptibility. In this specific case the literature tends to consider two different distances, horizontal and vertical, given in theory by the projection of the actual distance onto the two directions. At present, the problem is effectively split into substantially disconnected approaches.

For the horizontal distance, several authors use forms of Euclidean distance. Generally (Tehrany, Pradhan, & Jebur, 2014; Tehrany et al., 2017; Lee, Kang, & Jeon, 2012; Tehrany, Lee, Pradhan, Jebur, & Lee, 2014; Khosravi et al., 2018; Rahmati, Pourghasemi, & Zeinivand, 2016), the distance is discretized into classes via buffers of progressively increasing size. The vertical distance, on the other hand, is determined as the absolute difference between the elevations. A different approach is taken by (Samela et al., 2015; Manfreda et al., 2015; Manfreda, Samela, Sole, & Fiorentino, 2014; Samela, Troy, & Manfreda, 2017), who consider the flow distance, viz. the distance along the hydraulic path. This procedure first identifies, for each point of the DEM, the nearest downstream element of the drainage network, and then calculates the difference between the corresponding elevations.

The flow distance describes well processes driven by gravity. Flood processes do not fall into this category, being governed by the hydraulic head difference between the river and the adjacent territory (the flow generally occurs against an adverse elevation gradient), so flooding does not follow classic direct-runoff paths. For this reason, in order to properly quantify the distance (hereafter called the "hydraulic distance") between the drainage network and a DEM cell, an original model is introduced in which a flood process is simulated with a simple 2D unsteady parabolic flow model after (Bates & De Roo, 2000), implemented via a cellular-automaton scheme. For each pixel of the DEM, we first determine the closest upstream pixel of the drainage network, and then the vertical distance as the difference of the two elevations.

The model improves the mapping of flood susceptibility of the territory. Results, obtained on a large number of DEMs, are quite encouraging. Developments are in progress to reduce computational time and memory footprint.
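
As a rough illustration of the vertical-distance step only (not the 2D parabolic flood simulation), the following Python sketch treats any network pixel lying higher than the target cell as an "upstream" candidate; the function name, grid layout, and this elevation-based proxy for upstream-ness are illustrative assumptions, not the authors' procedure, which derives upstream-ness from the simulated flood propagation.

```python
import numpy as np

def hydraulic_distance(dem, network_mask):
    """dem: 2D elevation array; network_mask: 2D boolean array of channel cells.
    Brute-force over all cells; fine for a sketch, not for production DEMs."""
    net_rc = np.argwhere(network_mask).astype(float)  # (row, col) of network cells
    net_z = dem[network_mask]                         # their elevations
    hdist = np.full(dem.shape, np.nan)
    for r in range(dem.shape[0]):
        for c in range(dem.shape[1]):
            z = dem[r, c]
            d2 = ((net_rc - (r, c)) ** 2).sum(axis=1)  # squared planimetric distance
            d2[net_z < z] = np.inf                     # keep only higher ("upstream") cells
            i = int(d2.argmin())
            if np.isfinite(d2[i]):
                hdist[r, c] = net_z[i] - z             # vertical "hydraulic" distance
    return hdist
```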

2021 ◽  
Vol 11 (2) ◽  
pp. 813
Author(s):  
Shuai Teng ◽  
Zongchao Liu ◽  
Gongfa Chen ◽  
Li Cheng

This paper compares the crack detection performance (in terms of precision and computational cost) of YOLO_v2 with 11 feature extractors, providing a basis for fast and accurate crack detection on concrete structures. Cracks on concrete structures are an important indicator for assessing their durability and safety, and real-time crack detection is an essential task in structural maintenance. Object detection algorithms, especially the YOLO series networks, have significant potential in crack detection, and the feature extractor is the most important component of YOLO_v2. Hence, this paper employs 11 well-known CNN models as the feature extractor of YOLO_v2 for crack detection. The results confirm that different feature extractor models of the YOLO_v2 network lead to different detection results: the AP value is 0.89, 0, and 0 for 'resnet18', 'alexnet', and 'vgg16', respectively, while 'googlenet' (AP = 0.84) and 'mobilenetv2' (AP = 0.87) demonstrate comparable AP values. In terms of computing speed, 'alexnet' takes the least computational time, with 'squeezenet' and 'resnet18' ranked second and third, respectively; therefore, 'resnet18' is the best feature extractor model in terms of precision and computational cost. Additionally, a parametric study (the influence of training epoch, feature extraction layer, and testing image size on detection results) shows that these parameters indeed have an impact on the detection results. It is demonstrated that excellent crack detection results can be achieved by the YOLO_v2 detector, in which an appropriate feature extractor model, training epoch, feature extraction layer, and testing image size all play an important role.
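
The abstract does not name the training framework; as a hedged sketch of the "pluggable feature extractor" idea only, the following PyTorch snippet attaches a YOLO_v2-style 1×1 prediction head to interchangeable torchvision backbones. The anchor count, class count, and backbone subset are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torchvision.models as tvm

def yolo_v2_style(backbone="resnet18", num_anchors=5, num_classes=1):
    # each entry: (feature-extractor factory, output channel count)
    backbones = {
        "resnet18":    (lambda: nn.Sequential(*list(tvm.resnet18().children())[:-2]), 512),
        "alexnet":     (lambda: tvm.alexnet().features, 256),
        "vgg16":       (lambda: tvm.vgg16().features, 512),
        "mobilenetv2": (lambda: tvm.mobilenet_v2().features, 1280),
    }
    make_features, channels = backbones[backbone]
    # YOLO_v2-style head: per grid cell, num_anchors boxes x (4 coords + objectness + classes)
    head = nn.Conv2d(channels, num_anchors * (5 + num_classes), kernel_size=1)
    return nn.Sequential(make_features(), head)

model = yolo_v2_style("resnet18")
out = model(torch.randn(1, 3, 416, 416))   # -> (1, 30, 13, 13) prediction grid
```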


Electronics ◽  
2021 ◽  
Vol 10 (5) ◽  
pp. 627
Author(s):  
David Marquez-Viloria ◽  
Luis Castano-Londono ◽  
Neil Guerrero-Gonzalez

A methodology for scalable and concurrent real-time implementation of highly recurrent algorithms is presented and experimentally validated using the AWS-FPGA. This paper presents a parallel implementation of a KNN algorithm focused on m-QAM demodulators, using high-level synthesis for fast prototyping, parameterization, and scalability of the design. The proposed design shows the successful implementation of the KNN algorithm for interchannel interference mitigation in a 3 × 16 Gbaud 16-QAM Nyquist WDM system. Additionally, we present a modified version of the KNN algorithm in which comparisons among data symbols are reduced by identifying the closest neighbor using the rule of 8-connected clusters used in image processing. The real-time implementation of the modified KNN on a Xilinx Virtex UltraScale+ VU9P AWS-FPGA board was compared with results obtained in previous work using the same data from the same experimental setup, but with offline DSP in Matlab. The results show that the difference is negligible below the FEC limit. Additionally, the modified KNN shows a reduction in operations of 43 to 75 percent, depending on the symbol's position in the constellation, achieving a 47.25% reduction in total computational time for 100 K input symbols processed on 20 parallel cores, compared to the standard KNN algorithm.
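
A hedged Python sketch of the 8-connected shortcut (illustrative, not the authors' FPGA/HLS implementation): quantize the received sample to the nearest ideal 16-QAM grid point, then compare it only against that centroid and its 8-connected neighbours.

```python
import numpy as np

LEVELS = np.array([-3, -1, 1, 3])   # ideal 16-QAM I/Q levels

def nearest_level_index(x):
    return int(np.abs(LEVELS - x).argmin())

def decide(sample, centroids):
    """sample: complex received symbol; centroids: 4x4 complex array of
    (possibly distorted) cluster centres learned from training symbols."""
    i0 = nearest_level_index(sample.real)
    q0 = nearest_level_index(sample.imag)
    best, best_d = None, np.inf
    for di in (-1, 0, 1):            # 8-connected neighbourhood of the grid point
        for dq in (-1, 0, 1):
            i, q = i0 + di, q0 + dq
            if 0 <= i < 4 and 0 <= q < 4:
                d = abs(sample - centroids[i, q])
                if d < best_d:
                    best, best_d = (i, q), d
    return best                      # decided constellation point (grid indices)
```

This shrinks the search from 16 centroids to between 4 (corner symbols) and 9 (inner symbols), consistent with the reported 43-75 percent reduction in operations.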


2021 ◽  
Vol 21 (S1) ◽  
Author(s):  
Dávid Paár ◽  
Antal Kovács ◽  
Miklós Stocker ◽  
Márk Hoffbauer ◽  
Attila Fazekas ◽  
...  

Abstract
Background: The so-called sports consumption models look for the factors that influence the sports spending of households. This paper examines Hungarian, Polish and German households' sports expenditures, which can be an important indicator of physical activity and a sporty lifestyle.
Methods: Households in three countries (Hungary, Poland and Germany) were surveyed with a self-designed questionnaire. We used descriptive and bivariate non-parametric and parametric statistical methods: (1) the χ2 test, Mann-Whitney test and Kruskal-Wallis test for checking the relationship between sociodemographic and physical activity variables, and (2) the independent-sample t-test and ANOVA for checking the differences in sports expenditures.
Results: Our research concluded that men, especially former athletes, exercise more than women and those with no history as registered athletes. The choice of sports venues clearly differs between the countries in the sample. Members of the study population spend the most on sports services and the least on sports equipment. German households have the highest spending rates of the three countries.
Conclusions: The results are in line with our previous research findings and with other literature. The difference in preferences for sports venues could be explained by differences in the supply of sports clubs or in living standards; further research is needed to clarify this. Material wealth, income level and sport socialisation can be determining factors for the level of sports spending.
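
For concreteness, a minimal Python/scipy sketch of the test battery described in the Methods; the DataFrame layout and column names are illustrative assumptions, not the authors' dataset.

```python
import pandas as pd
from scipy import stats

def run_tests(df):
    # (1) sociodemographic / physical-activity relationships (non-parametric)
    chi2, p_chi2, dof, _ = stats.chi2_contingency(
        pd.crosstab(df["gender"], df["activity_level"]))
    men = df[df["gender"] == "male"]
    women = df[df["gender"] == "female"]
    _, p_mw = stats.mannwhitneyu(men["weekly_hours"], women["weekly_hours"])
    _, p_kw = stats.kruskal(*[g["weekly_hours"] for _, g in df.groupby("country")])
    # (2) differences in sports expenditures (parametric)
    _, p_t = stats.ttest_ind(men["expenditure"], women["expenditure"])
    _, p_anova = stats.f_oneway(*[g["expenditure"] for _, g in df.groupby("country")])
    return {"chi2": p_chi2, "mann_whitney": p_mw, "kruskal": p_kw,
            "t_test": p_t, "anova": p_anova}
```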


Author(s):  
Jae-bok Lee ◽  
Jun Zou ◽  
Benliang Li ◽  
Munno Ju

Purpose – The per-unit-length earth return mutual impedance of overhead conductors plays an important role in analyzing electromagnetic transients or couplings of multi-conductor systems. No closed-form expression exists to evaluate this kind of impedance. The purpose of this paper is to propose an efficient numerical approach to evaluate the earth return mutual impedance of overhead conductors above horizontally multi-layered soils. Design/methodology/approach – The expression of the earth return mutual impedance, which contains a complex, highly oscillatory semi-infinite integral, is deliberately divided into two parts: a definite integral and a tail integral. The definite integral is calculated using the proposed moment functions after fitting the integrand with piecewise cubic spline functions, and the tail integral is replaced by exponential integrals with newly developed asymptotic integrands. Findings – The numerical examples show that the proposed approach has satisfactory accuracy for different parameter combinations. Compared to the direct quadrature approach, the computational time of the proposed approach is very competitive, especially for large horizontal distances and low conductor heights. Originality/value – The advantage of the proposed approach is that the calculation of the highly oscillatory integral is completely avoided, because the moment functions can be evaluated analytically. The contribution of the tail integral is well captured by means of the exponential integral, albeit in an asymptotic way. The proposed approach is completely general and can be applied to calculate the earth return mutual impedance of overhead conductors above a soil structure with an arbitrary number of horizontal layers.
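
A hedged numerical sketch of the head/tail split, under strong simplifications: the real integrand is the layered-soil impedance kernel, and the paper splines only the smooth kernel and integrates the oscillation analytically via moment functions; here the full product is splined on a fine grid, and the tail assumes a kernel decaying like 1/u so that it reduces to a complex exponential integral. All parameter choices below are illustrative.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.special import exp1

def split_integral(kernel, d, h, u_split=50.0, n=4001):
    """Illustrative: integral of kernel(u)*cos(u*d)*exp(-u*h) over [0, inf)."""
    # head [0, u_split]: spline the integrand on a fine grid and integrate the
    # spline exactly (stand-in for the paper's analytic moment functions)
    u = np.linspace(0.0, u_split, n)
    head = CubicSpline(u, kernel(u) * np.cos(u * d) * np.exp(-u * h)).integrate(0.0, u_split)
    # tail [u_split, inf): with kernel(u) ~ 1/u, the tail is Re E1(u_split*(h - i*d))
    tail = np.real(exp1(u_split * (h - 1j * d)))
    return head + tail

Z = split_integral(lambda u: 1.0 / (u + 1.0), d=100.0, h=10.0)  # toy kernel
```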


2006 ◽  
Vol 3 (4) ◽  
pp. 384-388 ◽  
Author(s):  
Damiano Di Penta ◽  
Karim Bencherif ◽  
Michel Sorine ◽  
Qinghua Zhang

This paper proposes a reduced fuel cell stack model for control and fault diagnosis, validated with experimental data. First, the electrochemical phenomena are modeled based on a mechanism of gas adsorption/desorption on the catalysts at the anode and the cathode of the stack, including activation, diffusion, and carbon monoxide poisoning. The electrical voltage of a stack cell is then modeled as the difference between the two electrode potentials. A simplified thermal model of the fuel cell stack is also developed to take into account heat generation from reactions, heat transfer, and evaporation/condensation of water. Finally, the efficiency ratio is computed as a model output; it is used to evaluate efficiency changes of the entire system, providing an important indicator for fault detection.
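
A hedged sketch of the voltage and efficiency-ratio bookkeeping described above; all coefficients are illustrative placeholders rather than the authors' identified model, and the CO-poisoning and thermal dynamics are omitted.

```python
import numpy as np

def cell_voltage(i, E_cathode=1.0, E_anode=0.0, R_ohm=0.15,
                 i0=1e-3, alpha=0.5, i_lim=1.5, T=353.0):
    """Cell voltage (V) at current density i (A/cm^2): difference of electrode
    potentials minus activation, concentration, and ohmic losses."""
    F, R = 96485.0, 8.314
    activation = (R * T / (alpha * F)) * np.log(i / i0)        # Tafel-type loss
    diffusion = -(R * T / F) * np.log(1.0 - i / i_lim)         # concentration loss
    return (E_cathode - E_anode) - activation - diffusion - R_ohm * i

def efficiency_ratio(v_cell, E_ideal=1.23):
    # fraction of the reversible potential actually delivered; a drop in this
    # ratio at constant load is the kind of fault indicator mentioned above
    return v_cell / E_ideal

print(efficiency_ratio(cell_voltage(1.0)))   # toy operating point
```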


Author(s):  
Vincent Delos ◽  
Santiago Arroyave-Tobón ◽  
Denis Teissandier

In mechanical design, tolerance zones and contact gaps can be represented by sets of geometric constraints. To compute the accumulation of possible manufacturing defects, these sets have to be summed and/or intersected according to the assembly architecture. The advantage of this approach is its robustness in treating even over-constrained mechanisms, i.e. mechanisms in which some degrees of freedom are suppressed in a redundant way. However, the sum of constraints, which must be computed when simulating the accumulation of defects in serial joints, is a very time-consuming operation. In this work, we compare three methods for summing sets of constraints using polyhedral objects. The difference between them lies in the way the degrees of freedom (DOFs) (or invariance) of joints and features are treated. The first method virtually limits the DOFs of the toleranced features and joints to turn the polyhedra into polytopes and avoid manipulating unbounded objects. Even though this approach makes the sum computable, it also introduces bounding or cap facets that increase the complexity of the operand sets. This complexity grows after each operation until it becomes far too significant. The second method addresses this problem by cleaning the calculated polytope after each sum, keeping the effects of the propagation of the DOFs under control. The third method is new and is based on identifying the sub-space in which the projections of the operands are bounded sets. Calculating the sum in this sub-space significantly reduces the complexity of the operands and consequently the computational time. After presenting the geometric properties on which the approaches rely, we demonstrate them on an industrial case. We then compare the computation times and verify that all three methods give equal results.
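
As a minimal illustration of the core operation, the sum of sets of constraints is a Minkowski sum of polytopes. The Python sketch below sums two 2D tolerance zones by summing all vertex pairs and pruning with a convex hull; real tolerancing operands live in a 6D small-displacement space, and the 2D zones here are illustrative.

```python
import numpy as np
from scipy.spatial import ConvexHull

def minkowski_sum(P, Q):
    """P, Q: (n, d) and (m, d) vertex arrays; returns vertices of P (+) Q."""
    sums = (P[:, None, :] + Q[None, :, :]).reshape(-1, P.shape[1])
    hull = ConvexHull(sums)          # prune interior points of the vertex sums
    return sums[hull.vertices]

# two tolerance zones summed to model defect accumulation in serial joints
P = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]], float)        # square zone
Q = 0.5 * np.array([[1, 0], [0, 1], [-1, 0], [0, -1]], float)    # diamond zone
print(minkowski_sum(P, Q))           # octagonal accumulated zone (8 vertices)
```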


2021 ◽  
Author(s):  
Brett W. Larsen ◽  
Shaul Druckmann

Abstract
Lateral and recurrent connections are ubiquitous in biological neural circuits. The strong computational abilities of feedforward networks have been extensively studied; on the other hand, while certain roles for lateral and recurrent connections in specific computations have been described, a more complete understanding of the role and advantages of recurrent computations that might explain their prevalence remains an important open challenge. Previous key studies by Minsky and later by Roelfsema argued that the sequential, parallel computations for which recurrent networks are well suited can be highly effective approaches to complex computational problems. Such "tag propagation" algorithms perform repeated, local propagation of information and were introduced in the context of detecting connectedness, a task that is challenging for feedforward networks. Here, we advance the understanding of the utility of lateral and recurrent computation by first performing a large-scale empirical study of neural architectures for the computation of connectedness to explore feedforward solutions more fully and establish robustly the importance of recurrent architectures. In addition, we highlight a tradeoff between computation time and performance and demonstrate hybrid feedforward/recurrent models that perform well even in the presence of varying computational time limitations. We then generalize tag propagation architectures to multiple, interacting propagating tags and demonstrate that these are efficient computational substrates for more general computations by introducing and solving an abstracted biologically inspired decision-making task. More generally, our work clarifies and expands the set of computational tasks that can be solved efficiently by recurrent computation, yielding hypotheses for structure in population activity that may be present in such tasks.
Author Summary
Lateral and recurrent connections are ubiquitous in biological neural circuits; intriguingly, this stands in contrast to the majority of current-day artificial neural network research, which primarily uses feedforward architectures except in the context of temporal sequences. This raises the possibility that part of the difference in computational capabilities between real neural circuits and artificial neural networks is accounted for by the role of recurrent connections, and as a result a more detailed understanding of the computational role played by such connections is of great importance. Making effective comparisons between architectures is a subtle challenge, however, and in this paper we leverage the computational capabilities of large-scale machine learning to robustly explore how differences in architectures affect a network's ability to learn a task. We first focus on the task of determining whether two pixels are connected in an image, which has an elegant and efficient recurrent solution: propagate a connected label or tag along paths. Inspired by this solution, we show that it can be generalized in many ways, including propagating multiple tags at once and changing the computation performed on the result of the propagation. To illustrate these generalizations, we introduce an abstracted decision-making task related to foraging in which an animal must determine whether it can avoid predators in a random environment. Our results shed light on the set of computational tasks that can be solved efficiently by recurrent computation and how these solutions may appear in neural activity.
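
A hedged sketch of single-tag propagation on the connectedness task, i.e. the local, recurrent update the text describes (not the authors' trained networks). Each loop iteration is one recurrent "time step", so the number of iterations needed grows with path length, mirroring the computation-time/performance tradeoff discussed above.

```python
import numpy as np

def connected(image, src, dst, max_steps=None):
    """image: 2D binary array; src/dst: (row, col) pixel coordinates.
    Repeatedly propagate a tag from src to 4-connected 'on' neighbours until
    it reaches dst or stops spreading."""
    tag = np.zeros(image.shape, dtype=bool)
    tag[src] = bool(image[src])
    steps = max_steps or image.size
    for _ in range(steps):
        spread = tag.copy()
        spread[1:, :] |= tag[:-1, :]          # propagate down
        spread[:-1, :] |= tag[1:, :]          # propagate up
        spread[:, 1:] |= tag[:, :-1]          # propagate right
        spread[:, :-1] |= tag[:, 1:]          # propagate left
        spread &= image.astype(bool)          # tag only moves along 'on' pixels
        if spread[dst]:
            return True
        if (spread == tag).all():             # converged without reaching dst
            return False
        tag = spread
    return bool(tag[dst])
```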


Author(s):  
Suvojit Acharjee ◽  
Sayan Chakraborty ◽  
Wahiba Ben Abdessalem Karaa ◽  
Ahmad Taher Azar ◽  
Nilanjan Dey

Video is an important medium for information sharing in the present era. The tremendous growth of video use can be seen in traditional multimedia applications as well as in many other applications, such as medical and surveillance videos. Raw video data is usually large in size, which calls for video compression. In video compression schemes, motion vector estimation is a very important step for removing temporal redundancy: a frame is first divided into small blocks, and then a motion vector is computed for each block. The difference between two blocks is evaluated by different cost functions, e.g. the mean absolute difference (MAD) or the mean square error (MSE). In this paper, the performance of different cost functions is evaluated and the cost function most suitable for motion vector estimation is identified.
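
A hedged Python sketch of exhaustive block matching with pluggable cost functions (MAD and MSE, as compared in the paper); the block size, search radius, and function names are illustrative assumptions.

```python
import numpy as np

def mad(a, b): return np.mean(np.abs(a - b))       # mean absolute difference
def mse(a, b): return np.mean((a - b) ** 2)        # mean square error

def motion_vector(ref, cur, top, left, block=16, radius=7, cost=mad):
    """Best displacement (dy, dx) of one block of the current frame within a
    +/-radius search window of the reference frame (full search)."""
    target = cur[top:top + block, left:left + block].astype(float)
    best, best_c = (0, 0), np.inf
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if 0 <= y and 0 <= x and y + block <= ref.shape[0] and x + block <= ref.shape[1]:
                c = cost(ref[y:y + block, x:x + block].astype(float), target)
                if c < best_c:
                    best, best_c = (dy, dx), c
    return best, best_c
```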


1980 ◽  
Vol 239 (5) ◽  
pp. H703-H705
Author(s):  
D. Saito ◽  
R. A. Olsson

This study compared oxyhemoglobin saturation (SO2) and O2 content (CO2) estimated from O2 tension (PO2) by the Rossing-Cain nomogram (J. Appl. Physiol. 21: 195-201, 1966) with SO2 and CO2 estimated by a galvanometric O2 analyzer in blood samples from eight dogs. The nomogram consistently and significantly overestimated SO2 over the range of 20-60%. The greatest absolute difference, which averaged 10% saturation, was between 40 and 59% saturation. Between 30 and 39% saturation, the difference averaged 30% of SO2 estimated galvanometrically. CO2, calculated as the product of SO2, hemoglobin concentration (cyanmethemoglobin method), and hemoglobin O2 capacity, was significantly overestimated by the nomogram by as much as 1.2 ml/dl between 2 and 9.9 ml/dl. Between 14 and 21.9 ml/dl, the nomogram underestimated CO2 by as much as 1.2 ml/dl. We conclude that because coronary venous SO2 and CO2 values normally lie in the range of greatest error, estimates of these values based on PO2 are particularly unsuited for studies of myocardial O2 usage.
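
For reference, a minimal sketch of the O2-content product described above. The binding capacity of 1.34 ml O2 per gram of hemoglobin is the commonly used value and is an assumption here; the paper states only that a hemoglobin O2 capacity was used.

```python
def o2_content(so2_percent, hb_g_per_dl, capacity_ml_per_g=1.34):
    """O2 content (ml O2 per dl blood) = saturation x [Hb] x binding capacity."""
    return (so2_percent / 100.0) * hb_g_per_dl * capacity_ml_per_g

# e.g. coronary venous blood at 35% saturation with 15 g/dl hemoglobin:
print(o2_content(35, 15))   # ~7.0 ml/dl
```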


Geophysics ◽  
1998 ◽  
Vol 63 (2) ◽  
pp. 331-336 ◽  
Author(s):  
Gordon R. J. Cooper ◽  
Michael Q. W. Jones

A comparison is made between the effectiveness of the inversion of borehole temperature data (for the purpose of climate reconstruction) by the least‐squares (L2) technique and the minimization of the absolute difference between the observed and calculated data (L1) technique. The L1 technique is found to require approximately half the number of iterations to reach the practically achievable minimum error compared to the L2 technique. The choice of which technique to use depends on the statistics of the difference between the observed and calculated data, and it can be advantageous to switch techniques during the inversion process. The inversion damping is also adjusted during the course of the inversion, based on the rate of change of the difference between the observed and calculated data. The aim is to get the best fit of the model to the data while minimising the model size, in the minimum number of iterations. A method of adjusting the damping to achieve this is suggested.
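
For readers unfamiliar with the two misfit norms, a minimal damped-inversion sketch for a linear problem d = G m; treating the L1 case with iteratively reweighted least squares is an assumption for illustration, not the authors' solver.

```python
import numpy as np

def invert(G, d, norm="L2", damping=1e-2, iters=50):
    """Damped inversion of d = G m under an L2 or (approximate) L1 misfit."""
    m = np.zeros(G.shape[1])
    for _ in range(iters):
        r = d - G @ m
        if norm == "L1":
            w = 1.0 / np.maximum(np.abs(r), 1e-8)  # IRLS weights -> L1 misfit
        else:
            w = np.ones_like(r)                    # uniform weights -> L2 misfit
        W = np.diag(w)
        # damped weighted least-squares step; damping also shrinks the model size
        m = np.linalg.solve(G.T @ W @ G + damping * np.eye(G.shape[1]),
                            G.T @ W @ d)
    return m
```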

