Stream Flow Measurement: Development of a Relationship between the Float Method and the Current Meter Method

Author(s):  
Ishmael Kanu

In developments as diverse as hydropower potential assessment, flood mitigation studies, water supply, irrigation, and bridge and culvert hydraulics, the magnitude of stream or river flow is a key design input. Several methods of flow measurement exist, some basic and some more sophisticated. The sophisticated methods use equipment that, although it provides more accurate and reliable results, is invariably expensive and unaffordable for many institutions that depend heavily on flow records to plan and execute their projects. The need for skilled operators and the associated maintenance problems exclude such equipment from consideration in most projects developed and executed in developing regions such as Africa. For countries and institutions in these regions, less expensive but relatively reliable methods of stream or river flow measurement need to be investigated, ideally methods that require no equipment maintenance schemes. One such method is the float method, in which the velocity of an object thrown into the river is obtained by timing its travel over a known distance; the flow is then estimated by multiplying this velocity by the cross-sectional area of the river or stream. The method appears simplistic, but when flows obtained from it are correlated with those obtained from more accurate, conventional methods, reliable results can be obtained. In this study, flow was measured at 42 different stream sections using both the float method and a more reliable, generally accepted but expensive method based on a current meter. A statistical relationship was then developed between the flows obtained by the two methods by fitting a linear regression model to the data points collected at the 42 locations on several reaches of selected streams in the western area of Freetown. The study was conducted on streams with tranquil or laminar flow, with flow magnitudes in the range of 0.39 m³/s to 4 m³/s, in practically straight reaches with stable banks; the stream beds were laterite soil. Thirty-two data sets were used to develop and calibrate the model and the remaining ten were used to verify it. The current-meter flows were regressed on the float-method flows. At the 5% significance level, the current-meter flows predicted from the float-method flows agreed closely with the observed current-meter flows for the verification data set.
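The calibration workflow the abstract describes is a simple one: fit a linear regression of current-meter flows on float-method flows, then check the fit on held-out measurements. The sketch below illustrates that workflow in Python with hypothetical data (the paper's 42 field measurements are not reproduced here); the slope, intercept, and noise level are assumptions for illustration only.

```python
# Illustrative sketch of the calibrate/verify workflow described above
# (hypothetical data; not the paper's measurements).
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical paired flows (m^3/s) in the paper's stated range, 0.39-4.0 m^3/s.
q_float = rng.uniform(0.39, 4.0, size=42)                  # float-method flows
q_meter = 0.85 * q_float + 0.1 + rng.normal(0, 0.05, 42)   # "observed" current-meter flows

# Split as in the study: 32 points to fit the model, 10 to verify it.
fit, verify = np.arange(32), np.arange(32, 42)

# Regress current-meter flows on float-method flows (simple linear model).
slope, intercept = np.polyfit(q_float[fit], q_meter[fit], deg=1)

# Predict current-meter flows for the verification set and check agreement.
q_pred = slope * q_float[verify] + intercept
residuals = q_meter[verify] - q_pred
print(f"Q_meter ~ {slope:.3f} * Q_float + {intercept:.3f}")
print(f"verification RMSE: {np.sqrt(np.mean(residuals**2)):.3f} m^3/s")
```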

2017
Vol 51 (3)
pp. 288-314
Author(s):
Silvia Collado
Henk Staats
Patricia Sancho

Pro-environmental behavioral patterns are influenced by relevant others' actions and expectations. Studies of the intergenerational transmission of environmentalism have demonstrated that parents play a major role in their children's pro-environmental actions. However, little is known about how other social agents may shape young people's environmentalism. This cross-sectional study concentrates on the role that parents and peers play in the regulation of 12- to 19-year-olds' pro-environmental behaviors. We also address the common response bias effect by examining the associations between parents', peers', and adolescents' pro-environmentalism in two independent data sets. Data Set 1 (N = 330) includes adolescents' perceptions of relevant others' behaviors. Data Set 2 (N = 152) includes relevant others' self-reported pro-environmental behavior. Our results show that parents' and peers' descriptive and injunctive norms have a direct effect on adolescents' pro-environmental behavior and an indirect one through personal norms. Adolescents appear to perceive their close ones' environmental actions accurately.


2003
Vol 35 (2)
pp. 415-421
Author(s):  
Matthew C. Stockton

Cross-sectional data sets containing expenditure and quantity information are typically used to calculate quality-adjusted imputed prices. Do sample size and quality adjustment of price statistically alter estimates of own-price elasticities? This paper employs a data set covering three food categories—pork, cheese, and food away from home—with four sample sizes for each category. All twelve category-size combinations were used, with both adjusted and unadjusted prices, to derive elasticities. No statistical differences in own-price elasticities were found among sample sizes. However, elasticities based on adjusted price imputations were significantly different from those based on unadjusted prices.
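For readers unfamiliar with the mechanics, an imputed (unit-value) price is simply expenditure divided by quantity, and an own-price elasticity can be read off a log-log regression of quantity on price. The sketch below shows this with hypothetical records; the sample size, distributions, and single-category setup are assumptions, not the paper's data or estimator, and no quality adjustment is applied.

```python
# Hedged sketch: unit values as imputed prices and a log-log regression
# for own-price elasticity (hypothetical data, not the paper's).
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Hypothetical cross-sectional expenditure and quantity records for one food category.
quantity = rng.lognormal(mean=1.0, sigma=0.4, size=n)
expenditure = quantity * rng.lognormal(mean=0.5, sigma=0.2, size=n)

unit_value = expenditure / quantity  # unadjusted imputed price

# Own-price elasticity: slope of log(quantity) on log(price).
X = np.column_stack([np.ones(n), np.log(unit_value)])
beta, *_ = np.linalg.lstsq(X, np.log(quantity), rcond=None)
print(f"estimated own-price elasticity: {beta[1]:.2f}")
```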


1985
Vol 12 (3)
pp. 464-471
Author(s):  
B. G. Krishnappan

The MOBED and HEC-6 models of river flow were compared in this study. The comparison consisted of two steps. In step one, the major differences between the models were identified by examining the theoretical basis of each model. In step two, the predictive capabilities of the models were compared by applying them to identical data sets. The data come from the South Saskatchewan River reach below Gardiner Dam and relate to the degradation process that has taken place since the creation of Lake Diefenbaker. Comparison of model predictions with measurements reveals that MOBED has predictive capability superior to that of HEC-6 and that using HEC-6 as a predictive tool requires extensive model calibration through adjustment of Manning's n and the movable bed width. Key words: computers, models, sediment transport, river hydraulics, erosion.
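Manning's n, the calibration knob singled out above, enters open-channel hydraulics through the Manning equation, Q = (1/n) A R^(2/3) S^(1/2). The short sketch below (generic hydraulics, not MOBED or HEC-6 code; the channel dimensions are hypothetical) shows how sensitive predicted discharge is to the chosen n, which is why its calibration can dominate a model's predictive skill.

```python
# Discharge from the Manning equation for a simple rectangular channel
# (generic open-channel hydraulics, SI units).
def manning_discharge(n, width, depth, slope):
    """Q = (1/n) * A * R^(2/3) * S^(1/2)."""
    area = width * depth                  # cross-sectional flow area (m^2)
    wetted_perimeter = width + 2 * depth  # rectangular section (m)
    hydraulic_radius = area / wetted_perimeter
    return (1.0 / n) * area * hydraulic_radius ** (2.0 / 3.0) * slope ** 0.5

# A modest change in n produces a large change in predicted discharge.
for n in (0.025, 0.035, 0.045):
    print(n, round(manning_discharge(n, width=50.0, depth=2.0, slope=0.0002), 1), "m^3/s")
```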


2019
Author(s):
Alexandru V. Avram
Adam S. Bernstein
M. Okan Irfanoglu
Craig C. Weinkauf
Martin Cota
...  

We describe a pipeline for constructing a study-specific template of diffusion propagators measured with mean apparent propagator (MAP) MRI that supports direct voxelwise analysis of differences between propagators across multiple data sets. The pipeline leverages the fact that MAP-MRI is a generalization of diffusion tensor imaging (DTI) and combines simple, robust processing steps from existing tensor-based image registration methods. First, we compute a DTI study template, which provides the reference frame and scaling parameters needed to construct a standardized set of MAP-MRI basis functions at each voxel in template space. Next, we transform each subject's diffusion data, including diffusion-weighted images (DWIs) and gradient directions, from native to template space using the corresponding tensor-based deformation fields. Finally, we fit MAP coefficients in template space to the transformed DWIs of each subject using the standardized template of MAP basis functions. The consistency of MAP basis functions across all data sets in template space allows us to (1) compute a template of propagators by directly averaging MAP coefficients and (2) quantify voxelwise differences between co-registered propagators using the angular dissimilarity or a probability distance metric such as the Jensen-Shannon divergence. We illustrate the application of this method by generating a template of MAP propagators for a cohort of healthy volunteers and show a proof-of-principle example of how this pipeline may be used to detect subtle differences between propagators in a single-subject longitudinal clinical data set. The ability to standardize and analyze multiple clinical MAP-MRI data sets could improve assessments in cross-sectional and single-subject longitudinal clinical studies seeking to detect subtle microstructural changes, such as those occurring in mild traumatic brain injury (mTBI), in the early stages of neurodegenerative diseases, or in cancer.
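The final comparison step reduces, at each voxel, to a probability distance between two discretized propagators. The sketch below shows that computation with SciPy on hypothetical distributions; in the actual pipeline the distributions would come from evaluating each subject's MAP basis expansion on a common displacement grid.

```python
# Minimal sketch of the voxelwise comparison step: Jensen-Shannon divergence
# between two discretized propagators (hypothetical distributions).
import numpy as np
from scipy.spatial.distance import jensenshannon

rng = np.random.default_rng(2)

# Two hypothetical propagators sampled on the same displacement grid,
# normalized to sum to 1 as required for a probability distance metric.
p = rng.random(1000); p /= p.sum()
q = rng.random(1000); q /= q.sum()

# SciPy returns the JS *distance* (the square root of the divergence).
js_distance = jensenshannon(p, q, base=2)
print(f"Jensen-Shannon divergence: {js_distance**2:.4f}")
```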


2007
pp. 24-28
Author(s):
N. N. Brimkulov
D. V. Vinnikov
E. V. Ryzhkova

A cross-sectional analysis of the management of asthma patients before and after short-term training of 78 physicians in Bishkek was performed. At baseline, a diagnosis of asthma was made in 37 patients (4.1% of all respiratory diseases). Just after the training, at 1 year, and at 2 years, asthma was diagnosed in 26, 45, and 26 patients, respectively. At baseline, peak flow measurement and spirometry were not used at all and treatment was mainly symptomatic. The training improved the theoretical knowledge score from 73.3% to 84.6% (p < 0.001). Use of peak flow measurement increased to 38.5%, 51.1%, and 38.5% just after the training, at 1 year, and at 2 years, respectively. Use of spirometry grew to 11.5%, 17.8%, and 26.9%, respectively. Inhaled corticosteroids (ICS) were administered to 42.3%, 53.3%, and 46.2%, respectively, vs. 5.4% at baseline, with a simultaneous reduction in inappropriate prescriptions of vitamins, antibiotics, and expectorants. Thus, the short-term training was effective. However, peak flow measurement should be applied in 100% of the patients, and the majority of patients need ICS. Ways to increase the efficiency of the training are needed.


2021
Author(s):
Shuaijun Li
Jia Lu

A self-training algorithm can quickly train a supervised classifier from a few labeled samples and many unlabeled samples. However, self-training is often harmed by mislabeled samples, and local noise filters have been proposed to detect them. Current local noise filters nevertheless have two problems: (a) they ignore the spatial distribution of the nearest neighbors across different classes, and (b) they perform poorly when mislabeled samples are located in the overlapping regions of different classes. To address these challenges, a new self-training algorithm based on density peaks combined with a globally adaptive multi-local noise filter (STDP-GAMLNF) is proposed. First, the spatial structure of the data set is revealed by density peak clustering and used to guide self-training in labeling unlabeled samples. Meanwhile, after each labeling epoch, GAMLNF comprehensively judges whether a sample is mislabeled by considering multiple classes, which effectively reduces the influence of edge samples. Experimental results on eighteen real-world data sets demonstrate that GAMLNF is not sensitive to the value of the neighbor parameter k and can adaptively find an appropriate number of neighbors for each class.
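For readers new to the paradigm, the loop below sketches generic self-training in Python: train on the labeled pool, pseudo-label the most confident unlabeled samples, and repeat. It is a schematic of the setting the paper improves on, not an implementation of STDP-GAMLNF; the classifier, confidence threshold, and data are assumptions, and a comment marks where a noise filter such as GAMLNF would intervene.

```python
# Generic self-training loop (schematic of the paradigm, not STDP-GAMLNF).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
X, y = make_classification(n_samples=600, n_features=10, random_state=3)

labeled = rng.choice(len(y), size=30, replace=False)   # few labeled samples
unlabeled = np.setdiff1d(np.arange(len(y)), labeled)   # many unlabeled samples
y_work = y.copy()

clf = LogisticRegression(max_iter=1000)
for epoch in range(10):
    clf.fit(X[labeled], y_work[labeled])
    if len(unlabeled) == 0:
        break
    proba = clf.predict_proba(X[unlabeled])
    confident = np.max(proba, axis=1) > 0.95           # high-confidence pseudo-labels
    if not confident.any():
        break
    new = unlabeled[confident]
    # Pseudo-labeling may introduce mislabeled samples -- this is exactly where
    # a local noise filter such as GAMLNF would inspect each candidate's
    # neighborhood across classes before accepting it into the labeled pool.
    y_work[new] = np.argmax(proba[confident], axis=1)
    labeled = np.concatenate([labeled, new])
    unlabeled = unlabeled[~confident]

print(f"accuracy on all data: {clf.score(X, y):.3f}")
```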


2021
Author(s):
Jan Hackenberg
Kim Calders
Miro Demol
Pasi Raumonen
Alexandre Piboule
...  

SimpleForest, presented here, is written in C++ and published under the GPL v3. As input, SimpleForest uses forest scenes recorded as terrestrial laser scan point clouds. SimpleForest provides a fully automated pipeline that models the ground as a digital terrain model, then segments the vegetation, and finally builds quantitative structure models of trees (QSMs) consisting of up to thousands of topologically ordered cylinders. These QSMs allow us to calculate traditional forestry metrics such as diameter at breast height, but also volume and other structural metrics that are hard to measure in the field. Our volume evaluation on three data sets with destructively measured volumes shows high prediction quality, with concordance correlation coefficients CCC (adjusted r²) of 0.91 (0.87), 0.94 (0.92), and 0.97 (0.93) for the three data sets, respectively. We combine two common assumptions of plant modeling: the sum of the cross-sectional areas after a branch junction equals that before the junction (pipe model theory), and twigs are self-similar (West, Brown and Enquist model). Since evenly sized twigs correspond to evenly sized cross-sectional areas, we define the Reverse Pipe Radius Branchorder (RPRB) of a cylinder as the square root of the number of twigs it supports. The prediction model radius = b0 · RPRB relies only on correct topological information and can be used to detect and correct overestimated cylinders; the necessity of handling overestimated cylinders in QSM building is well known. In our validation the RPRB correction, with a CCC (adjusted r²) of 0.97 (0.93), performs better than previously published corrections, 0.80 (0.88) and 0.86 (0.85). We encourage forest ecologists to analyze output parameters such as the GrowthVolume published in earlier works, but also other parameters such as the GrowthLength, VesselVolume, and RPRB, which we define in this manuscript.
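The correction model is simple enough to sketch: compute RPRB = sqrt(number of supported twigs) per cylinder, fit the single coefficient b0 by least squares through the origin, and replace radii that deviate strongly from b0 · RPRB. The Python below illustrates this on hypothetical cylinders; the coefficient, noise level, and 1.5x flagging threshold are assumptions, not SimpleForest's actual values or code.

```python
# Hedged sketch of the RPRB radius correction described in the abstract
# (hypothetical QSM cylinders; not SimpleForest's C++ implementation).
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical cylinders: supported twig counts and fitted radii (m),
# a few of which are overestimated.
supported_twigs = rng.integers(1, 400, size=200)
rprb = np.sqrt(supported_twigs)          # Reverse Pipe Radius Branchorder
radius = 0.004 * rprb * rng.normal(1.0, 0.05, size=200)
radius[:10] *= 3.0                       # simulate overestimated cylinders

# Fit b0 by least squares through the origin: radius = b0 * RPRB.
b0 = np.sum(rprb * radius) / np.sum(rprb ** 2)

# Flag and correct cylinders whose radius deviates strongly from the model.
overestimated = radius > 1.5 * b0 * rprb
radius[overestimated] = b0 * rprb[overestimated]
print(f"b0 = {b0:.4f} m per sqrt(twig); corrected {overestimated.sum()} cylinders")
```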


2017
Vol 30 (3)
pp. 235-247
Author(s):
Alison Leary
Barbara Tomai
Adrian Swift
Andrew Woodward
Keith Hurst

Purpose: Despite the mass of data generated by the nursing workforce, determining the impact of its contribution to patient safety remains challenging. Several cross-sectional studies have indicated a relationship between staffing and safety. The purpose of this paper is to uncover possible associations and to explore whether a deeper understanding of the relationships between staffing and other factors, such as safety, can be revealed within routinely collected national data sets.
Design/methodology/approach: Two longitudinal, routinely collected data sets were used: 30 years of UK nurse staffing data and seven years of National Health Service (NHS) benchmark data comprising survey results, safety indicators, and other indicators. A correlation matrix was built and a linear correlation operation (the Pearson product-moment correlation coefficient) was applied.
Findings: A number of associations were revealed within both the UK staffing data set and the NHS benchmarking data set. However, the challenges of using these data sets soon became apparent.
Practical implications: Staff time and effort are required to collect these data. The limitations of these data sets include inconsistent data collection and variable quality. The mode of data collection and the item set collected should be reviewed to generate a data set with robust clinical application.
Originality/value: This paper revealed that the relationships are likely to be complex and non-linear; the main contribution of the paper, however, is the identification of the limitations of routinely collected data. Much time and effort is expended in collecting these data, yet their validity, usefulness, and method of routine national collection appear to require re-examination.
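The analysis step described above is a standard one: assemble indicators into a table and compute pairwise Pearson correlations. The sketch below shows the operation in Python with pandas; the column names and values are hypothetical, not the UK staffing or NHS benchmark data.

```python
# Minimal sketch of a Pearson correlation matrix over routinely collected
# indicators (hypothetical columns and values).
import numpy as np
import pandas as pd

rng = np.random.default_rng(5)
n_orgs = 120

# Hypothetical yearly indicators per organisation.
df = pd.DataFrame({
    "nurses_per_bed": rng.normal(1.2, 0.2, n_orgs),
    "falls_per_1000_bed_days": rng.normal(5.0, 1.0, n_orgs),
    "staff_survey_score": rng.normal(3.8, 0.3, n_orgs),
})

# Pearson product-moment correlations between every pair of indicators.
corr_matrix = df.corr(method="pearson")
print(corr_matrix.round(2))
```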


2019
Vol 53 (7)
pp. 529-535
Author(s):
R. Eugene Zierler
Daniel F. Leotta
Kurt Sansom
Alberto Aliseda
Mark D. Anderson
...  

Objective: We developed a duplex ultrasound simulator and used it to assess the accuracy of volume flow measurements in dialysis access fistula (DAF) models.
Methods: The simulator consists of a mannequin, a computer, and a mock transducer. Each case is built from a patient's B-mode images, which are used to create a 3-dimensional surface model of the DAF. Computational fluid dynamics is used to determine blood flow velocities based on the model's vessel geometry. The simulator displays real-time B-mode and color-flow images, and Doppler spectral waveforms are generated according to user-defined settings. Accuracy was assessed by scanning each case and measuring volume flow in the inflow artery and outflow vein for comparison with the true volume flow values.
Results: Four examiners made 96 volume flow measurements on four DAF models. Measured volume flow deviated from the true value by 35 ± 36%. The mean absolute deviation from true volume flow was lower for arteries than for veins (22 ± 19%, N = 48 vs. 58 ± 33%, N = 48; p < 0.0001). This finding is attributed to the eccentricity of the outflow veins, which led to underestimation of the true cross-sectional area. Regression analysis indicated that error in measuring cross-sectional area was a predictor of error in volume flow measurement (β = 0.948, p < 0.001). Volume flow error was reduced from 35 ± 36% to 9 ± 8% (p < 0.000001) by calculating vessel area as an ellipse.
Conclusions: Duplex volume flow measurements assume a circular vessel shape. DAF inflow arteries are circular, but outflow veins can be elliptical. Simulation-based analysis showed that error in measuring volume flow is mainly due to the assumption of a circular vessel.
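The error mechanism is easy to quantify: volume flow is mean velocity times cross-sectional area, so treating an elliptical vein as a circle built on a single measured diameter misstates the area and hence the flow. The sketch below runs the arithmetic with hypothetical axis lengths and velocity, using the shorter (anterior-posterior) diameter for the circular assumption, which produces the underestimation the study reports.

```python
# Sketch of the circular-vs-elliptical area error (hypothetical numbers).
import math

def volume_flow(mean_velocity_cm_s, area_cm2):
    """Volume flow in mL/min from mean velocity (cm/s) and area (cm^2)."""
    return mean_velocity_cm_s * area_cm2 * 60.0

# Hypothetical elliptical vein: major axis 0.8 cm, minor axis 0.4 cm.
a, b = 0.8 / 2, 0.4 / 2
area_ellipse = math.pi * a * b

# A single diameter measured across the minor axis, treated as circular.
area_circle = math.pi * b ** 2

v = 50.0  # hypothetical time-averaged mean velocity, cm/s
print(f"elliptical area:      {volume_flow(v, area_ellipse):.0f} mL/min")
print(f"circular assumption:  {volume_flow(v, area_circle):.0f} mL/min (underestimate)")
```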


Author(s):  
Chek T. Lim
Mark T. Ensz

In this paper, we present a new technique for constructing mathematical representations of solids from cross-sectional data sets. A collection of 2D cross-sections is generated from the sliced data by merging circular primitives using Implicit Solid Modelling (ISM) techniques that approximate Boolean unions. The spatial locations and radii of the circles for each slice are determined through a nonlinear optimization process. The cost function employed in these optimizations measures discrepancies in the distance from the data points to the boundary of the reconstructed cross-section. The starting configuration of the optimization (i.e., the initial sizes and locations of the primitives) is determined from a 2D Delaunay triangulation of each slice of the data set. A morphing technique utilizing blending functions is applied to merge the implicit functions describing each slice into a 3D solid. The effectiveness of the algorithm is demonstrated through the reconstruction of several sample data sets, including a femur and a vertebra.
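To make the core ISM operation concrete, the sketch below blends three circular primitives into one 2D cross-section using a smooth log-sum-exp approximation of the Boolean union. The blend, the sharpness parameter k, and the circle layout are generic assumptions for illustration; the paper's own union approximation and blending functions may differ.

```python
# Hedged sketch of an approximate Boolean union of circular primitives
# (a generic soft-union formulation, not the authors' exact method).
import numpy as np

def circle_field(x, y, cx, cy, r):
    """Implicit field of one circle: positive inside, zero on the boundary."""
    return r - np.hypot(x - cx, y - cy)

def soft_union(fields, k=20.0):
    """Smooth approximation of max(fields) via a log-sum-exp blend."""
    fields = np.stack(fields)
    return np.log(np.sum(np.exp(k * fields), axis=0)) / k

# Hypothetical cross-section: three circles whose blend forms one 2D slice.
xs, ys = np.meshgrid(np.linspace(-2, 2, 200), np.linspace(-2, 2, 200))
circles = [(-0.6, 0.0, 0.5), (0.0, 0.3, 0.4), (0.6, 0.0, 0.5)]
f = soft_union([circle_field(xs, ys, *c) for c in circles])

inside = f > 0.0  # grid points belonging to the reconstructed cross-section
print(f"fraction of grid inside the slice: {inside.mean():.3f}")
```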

