Modelling head loss along emitting pipes using dimensional analysis

2015, Vol 35 (3), pp. 442-457
Author(s): Acácio Perboni, Jose A. Frizzone, Antonio P. de Camargo, Marinaldo F. Pinto

Local head losses must be considered to properly estimate the maximum length of drip irrigation laterals. The aim of this work was to develop a model based on dimensional analysis for calculating head loss along laterals, accounting for in-line drippers. Several measurements were performed with 12 models of emitters to obtain the experimental data required for developing and assessing the model. Based on the Camargo & Sentelhas coefficient, the model showed excellent precision and accuracy in estimating head loss. The deviation between estimated and observed head loss values increased with the head loss itself, reaching a maximum of 0.17 m. The maximum relative error was 33.75%, and only 15% of the data set presented relative errors higher than 20%. Neglecting local head losses overestimated the maximum lateral length by 19.48% for pressure-compensating drippers and by 16.48% for non-pressure-compensating drippers.
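As an illustration of the kind of calculation such a model supports, the sketch below accumulates distributed and local head losses along a lateral with in-line emitters. It is a generic Darcy-Weisbach/Blasius formulation with an assumed local loss coefficient per emitter, not the dimensional-analysis model developed in the paper, and all parameter values are assumptions.

```python
# Illustrative sketch (not the paper's model): head loss along a drip lateral with
# in-line emitters, using Darcy-Weisbach friction plus a local loss at each emitter.
import math

G = 9.81      # gravitational acceleration (m/s^2)
NU = 1.0e-6   # kinematic viscosity of water at ~20 degC (m^2/s)

def head_loss_lateral(d, spacing, n_emitters, q_emitter, k_local=0.3):
    """Total head loss (m) of a lateral with n_emitters in-line drippers.

    d          internal pipe diameter (m)
    spacing    distance between emitters (m)
    q_emitter  discharge per emitter (m^3/s), assumed identical for all emitters
    k_local    local loss coefficient per emitter (assumed value)
    """
    area = math.pi * d ** 2 / 4.0
    total = 0.0
    # Walk the lateral from inlet to the end; flow decreases after each emitter.
    for i in range(n_emitters):
        q = (n_emitters - i) * q_emitter                     # flow in the i-th segment
        v = q / area
        re = v * d / NU
        f = 0.316 * re ** -0.25 if re > 2300 else 64.0 / re  # Blasius / laminar
        hf = f * (spacing / d) * v ** 2 / (2 * G)            # distributed loss
        hl = k_local * v ** 2 / (2 * G)                      # local loss at the emitter
        total += hf + hl
    return total

# Example: 16 mm lateral, 100 emitters of 4 L/h spaced 0.5 m apart (assumed values)
print(round(head_loss_lateral(0.016, 0.5, 100, 4.0 / 3.6e6), 3), "m")
```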

Author(s): Václav Matoušek, Robert Visintainer, John Furlan, Anders Sellgren

Abstract Pipe flows of bimodal settling slurries exhibit frictional head losses quite different from those determined simply as a sum of the loss contributions of the individual fractions. The mechanisms governing flow friction that result from the interaction of grains of different fractions in the transported slurry are not well understood. This makes prediction of the frictional head loss in flows of bimodal slurries with a Newtonian carrier uncertain. An extensive experimental campaign was conducted at the GIW Hydraulic Laboratory in 2016 with slurries of four narrow-graded fractions of virtually the same grain density and very different grain sizes (the carrier-liquid, pseudo-homogeneous, heterogeneous, and stratified fractions). Besides the individual fractions, different combinations of fraction mixtures (bimodal, three- and four-component) were tested as well. In our previous work published in 2018, we used experimental results for a bimodal slurry composed of coarse granite rock (the stratified fraction) and fine sand (the pseudo-homogeneous fraction) to analyze the observed considerable reduction of the frictional head loss caused by adding the fine sand to the granite rock slurry. In this work, we extend our analysis to the other bimodal slurries composed of permutations of the four fractions (three additional bimodal slurries in total), with the major objective of identifying possible mechanisms leading to a modification of the frictional head loss due to the addition of a finer fraction to a coarser mono-disperse slurry, and of quantifying this effect for the purposes of a predictive four-component model (4CM). The investigation shows that the frictional loss of a bimodal slurry is always smaller than the theoretical loss obtained as the sum of the losses of the fractions, although the massive reduction observed in the slurry composed of stratified rock and fine sand is not observed in any other bimodal slurry. The investigation also suggests that the friction effect obtained by adding the finer fraction is due to different mechanisms in different bimodal slurries, although all mechanisms are associated with altered mechanical friction due to granular contacts. It is shown that the observed effects can be well reproduced by the friction loss model 4CM, calibrated with the experimental data set from the 203-mm pipe and validated with the data set from the 103-mm pipe.
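A minimal sketch of the comparison the abstract describes: the "theoretical" bimodal loss taken as the sum of the solids effects of the two fractions versus a measured value. All numbers are placeholders, not data from the GIW experiments, and this is not the 4CM model.

```python
# Sketch of the baseline comparison: theoretical bimodal loss as carrier loss plus the
# solids effect of each fraction, compared against a measured value to quantify the
# reduction. All values below are assumed placeholders.
def solids_effect_sum(loss_carrier, loss_fraction_a, loss_fraction_b):
    """Theoretical bimodal loss: carrier loss plus the solids effect of each fraction."""
    return loss_carrier + (loss_fraction_a - loss_carrier) + (loss_fraction_b - loss_carrier)

loss_carrier = 0.020           # m water per m of pipe, carrier liquid alone (assumed)
loss_coarse = 0.055            # mono-disperse coarse (stratified) fraction slurry (assumed)
loss_fine = 0.028              # mono-disperse fine (pseudo-homogeneous) fraction slurry (assumed)
loss_measured_bimodal = 0.048  # measured bimodal slurry loss (assumed)

theoretical = solids_effect_sum(loss_carrier, loss_coarse, loss_fine)
reduction = (theoretical - loss_measured_bimodal) / theoretical
print(f"theoretical {theoretical:.3f} m/m, measured {loss_measured_bimodal:.3f} m/m, "
      f"reduction {100 * reduction:.1f}%")
```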


2021
Author(s): Junjie Shi, Jiang Bian, Jakob Richter, Kuan-Hsun Chen, Jörg Rahnenführer, ...

Abstract The predictive performance of a machine learning model depends strongly on the corresponding hyper-parameter setting. Hence, hyper-parameter tuning is often indispensable. Normally such tuning requires the dedicated machine learning model to be trained and evaluated on centralized data to obtain a performance estimate. However, in a distributed machine learning scenario, it is not always possible to collect all the data from all nodes due to privacy concerns or storage limitations. Moreover, if data has to be transferred through low-bandwidth connections, the transfer reduces the time available for tuning. Model-Based Optimization (MBO) is a state-of-the-art method for tuning hyper-parameters, but its application to distributed machine learning models or federated learning has received little research attention. This work proposes MODES, a framework that allows MBO to be deployed on resource-constrained distributed embedded systems. Each node trains an individual model based on its local data, and the goal is to optimize the combined prediction accuracy. The framework offers two optimization modes: (1) MODES-B considers the whole ensemble as a single black box and optimizes the hyper-parameters of each individual model jointly, and (2) MODES-I considers all models as clones of the same black box, which allows the optimization to be efficiently parallelized in a distributed setting. We evaluate MODES by conducting experiments on the optimization of the hyper-parameters of a random forest and a multi-layer perceptron. The experimental results demonstrate that MODES outperforms the baseline, i.e., carrying out tuning with MBO on each node individually with its local sub-data set, with improvements in mean accuracy (MODES-B), run-time efficiency (MODES-I), and statistical stability for both modes.
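For readers unfamiliar with MBO, the sketch below shows a plain surrogate-based optimization loop (Gaussian-process surrogate, expected-improvement acquisition) on a toy one-dimensional objective. It illustrates the general MBO mechanics only; it is not the MODES framework, its black-box definitions, or its distributed setting, and the objective function is an assumption.

```python
# Minimal sketch of Model-Based Optimization: fit a Gaussian-process surrogate to the
# evaluated points, pick the next candidate by expected improvement, evaluate, repeat.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

def objective(x):
    """Toy stand-in for a hyper-parameter's validation loss (assumed, to be minimised)."""
    return -np.exp(-(x - 0.5) ** 2) + 0.05 * rng.normal()

bounds = (-3.0, 3.0)
X = rng.uniform(*bounds, size=(5, 1))                 # initial design
y = np.array([objective(x[0]) for x in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
for _ in range(20):
    gp.fit(X, y)
    cand = rng.uniform(*bounds, size=(256, 1))        # random candidate points
    mu, sigma = gp.predict(cand, return_std=True)
    best = y.min()
    # Expected improvement for minimisation
    with np.errstate(divide="ignore", invalid="ignore"):
        z = (best - mu) / sigma
        ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
        ei[sigma == 0.0] = 0.0
    x_next = cand[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next[0]))

print("best point:", float(X[np.argmin(y)]), "objective:", float(y.min()))
```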


2021, Vol 11 (15), pp. 7104
Author(s): Xu Yang, Ziyi Huan, Yisong Zhai, Ting Lin

Nowadays, personalized recommendation based on knowledge graphs has become a research hotspot due to its good recommendation performance. In this paper, we study personalized recommendation based on knowledge graphs. First, we study methods for constructing knowledge graphs and build a movie knowledge graph, using the Neo4j graph database to store and visualize the movie data. Then, we study the classical translation-based model TransE for knowledge graph representation learning and improve the algorithm through a cross-training method that exploits information from the neighboring feature structures of the entities in the knowledge graph. We also improve the negative sampling process of the TransE algorithm. The experimental results show that the improved TransE model can vectorize entities and relations more accurately. Finally, we construct recommendation models by combining knowledge graphs with learning to rank and with a neural network: the Bayesian personalized recommendation model based on knowledge graphs (KG-BPR) and the neural network recommendation model based on knowledge graphs (KG-NN). The semantic information of entities and relations in the knowledge graphs is embedded into vector space using the improved TransE method, and the results are compared. The item entity vectors containing external knowledge information are integrated into the BPR model and the neural network, respectively, which compensates for the lack of knowledge information about the item itself. The experimental analysis is carried out on the MovieLens-1M data set, and the results show that the two recommendation models proposed in this paper effectively improve the accuracy, recall, F1 and MAP values of the recommendations.
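The sketch below shows the standard TransE scoring function with uniform negative sampling (corrupting the head or tail of a triple), i.e. the baseline the paper improves on. The cross-training and improved negative-sampling strategies described above are not reproduced, and the entity and relation names are made up.

```python
# Minimal TransE sketch: score a triple by ||h + r - t|| and build one negative sample
# by corrupting the head or tail. Embeddings here are random, not learned.
import numpy as np

rng = np.random.default_rng(0)
entities = ["Titanic", "James_Cameron", "Leonardo_DiCaprio", "Inception"]
relations = ["directed_by", "stars"]
dim = 16

E = {e: rng.normal(size=dim) for e in entities}    # entity embeddings
R = {r: rng.normal(size=dim) for r in relations}   # relation embeddings

def score(h, r, t):
    """TransE plausibility score: smaller ||h + r - t|| means a more plausible triple."""
    return np.linalg.norm(E[h] + R[r] - E[t])

def corrupt(h, r, t):
    """Uniform negative sampling: replace the head or the tail with a random entity."""
    if rng.random() < 0.5:
        return rng.choice([e for e in entities if e != h]), r, t
    return h, r, rng.choice([e for e in entities if e != t])

pos = ("Titanic", "directed_by", "James_Cameron")
neg = corrupt(*pos)
margin = 1.0
loss = max(0.0, margin + score(*pos) - score(*neg))   # margin ranking loss for one pair
print("positive:", round(score(*pos), 3), "negative:", round(score(*neg), 3),
      "loss:", round(loss, 3))
```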


2015, Vol 15 (1), pp. 253-272
Author(s): M. R. Canagaratna, J. L. Jimenez, J. H. Kroll, Q. Chen, S. H. Kessler, ...

Abstract. Elemental compositions of organic aerosol (OA) particles provide useful constraints on OA sources, chemical evolution, and effects. The Aerodyne high-resolution time-of-flight aerosol mass spectrometer (HR-ToF-AMS) is widely used to measure OA elemental composition. This study evaluates AMS measurements of atomic oxygen-to-carbon (O : C), hydrogen-to-carbon (H : C), and organic mass-to-organic carbon (OM : OC) ratios, and of carbon oxidation state (OS_C) for a vastly expanded laboratory data set of multifunctional oxidized OA standards. For the expanded standard data set, the method introduced by Aiken et al. (2008), which uses experimentally measured ion intensities at all ions to determine elemental ratios (referred to here as "Aiken-Explicit"), reproduces known O : C and H : C ratio values within 20% (average absolute value of relative errors) and 12%, respectively. The more commonly used method, which uses empirically estimated H2O+ and CO+ ion intensities to avoid gas phase air interferences at these ions (referred to here as "Aiken-Ambient"), reproduces O : C and H : C of multifunctional oxidized species within 28% and 14% of known values. The values from the latter method are systematically biased low, however, with larger biases observed for alcohols and simple diacids. A detailed examination of the H2O+, CO+, and CO2+ fragments in the high-resolution mass spectra of the standard compounds indicates that the Aiken-Ambient method underestimates the CO+ and especially H2O+ produced from many oxidized species. Combined AMS–vacuum ultraviolet (VUV) ionization measurements indicate that these ions are produced by dehydration and decarboxylation on the AMS vaporizer (usually operated at 600 °C). Thermal decomposition is observed to be efficient at vaporizer temperatures down to 200 °C. These results are used together to develop an "Improved-Ambient" elemental analysis method for AMS spectra measured in air. The Improved-Ambient method uses specific ion fragments as markers to correct for molecular functionality-dependent systematic biases and reproduces known O : C (H : C) ratios of individual oxidized standards within 28% (13%) of the known molecular values. The error in Improved-Ambient O : C (H : C) values is smaller for theoretical standard mixtures of the oxidized organic standards, which are more representative of the complex mix of species present in ambient OA. For ambient OA, the Improved-Ambient method produces O : C (H : C) values that are 27% (11%) larger than previously published Aiken-Ambient values; a corresponding increase of 9% is observed for OM : OC values. These results imply that ambient OA has a higher relative oxygen content than previously estimated. The OS_C values calculated for ambient OA by the two methods agree well, however (average relative difference of 0.06 OS_C units). This indicates that OS_C is a more robust metric of oxidation than O : C, likely since OS_C is not affected by hydration or dehydration, either in the atmosphere or during analysis.
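The small sketch below works through the elemental-ratio bookkeeping referred to above: OM : OC and the mean carbon oxidation state OS_C derived from O : C and H : C (considering C, H and O only), with the reported average Improved-Ambient corrections applied to assumed Aiken-Ambient values. The input ratios are illustrative, not values from the paper's data set.

```python
# Elemental-ratio bookkeeping: OM:OC and OS_C from O:C and H:C, plus the reported
# average corrections (+27% O:C, +11% H:C) applied to assumed Aiken-Ambient values.
def om_oc(o_to_c, h_to_c):
    """Organic-mass to organic-carbon ratio from atomic ratios (C, H, O only)."""
    return (12.011 + 15.999 * o_to_c + 1.008 * h_to_c) / 12.011

def os_c(o_to_c, h_to_c):
    """Mean carbon oxidation state, OS_C ~= 2*(O:C) - (H:C)."""
    return 2.0 * o_to_c - h_to_c

oc_aiken, hc_aiken = 0.45, 1.50                      # assumed Aiken-Ambient values
oc_impr, hc_impr = oc_aiken * 1.27, hc_aiken * 1.11  # Improved-Ambient via average corrections

for label, oc, hc in [("Aiken-Ambient", oc_aiken, hc_aiken),
                      ("Improved-Ambient", oc_impr, hc_impr)]:
    print(f"{label}: O:C={oc:.2f} H:C={hc:.2f} "
          f"OM:OC={om_oc(oc, hc):.2f} OS_C={os_c(oc, hc):+.2f}")
```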


2013, Vol 17 (7), pp. 2781-2796
Author(s): S. Shukla, J. Sheffield, E. F. Wood, D. P. Lettenmaier

Abstract. Global seasonal hydrologic prediction is crucial to mitigating the impacts of droughts and floods, especially in the developing world. Hydrologic predictability at seasonal lead times (i.e., 1–6 months) comes from knowledge of initial hydrologic conditions (IHCs) and seasonal climate forecast skill (FS). In this study we quantify the contributions of two primary components of the IHCs – soil moisture and snow water content – and of FS (of precipitation and temperature) to seasonal hydrologic predictability globally, on a relative basis, throughout the year. We do so by conducting two model-based experiments using the variable infiltration capacity (VIC) macroscale hydrology model, one based on ensemble streamflow prediction (ESP) and another based on Reverse-ESP (Rev-ESP), both for a 47 yr re-forecast period (1961–2007). We compare cumulative runoff (CR), soil moisture (SM) and snow water equivalent (SWE) forecasts from each experiment with a VIC model-based reference data set (generated using observed atmospheric forcings) and estimate the ratio of the root mean square errors (RMSE) of the two experiments for each forecast initialization date and lead time, to determine the relative contribution of the IHCs and FS to seasonal hydrologic predictability. We find that, in general, the contribution of the IHCs to seasonal hydrologic predictability is highest in the arid and snow-dominated climate (high-latitude) regions of the Northern Hemisphere during forecast periods starting on 1 January and 1 October. In mid-latitude regions, such as the western US, the influence of the IHCs is greatest during the forecast period starting on 1 April. In the arid and warm temperate dry-winter regions of the Southern Hemisphere, the IHCs dominate during forecast periods starting on 1 April and 1 July. In equatorial humid and monsoonal climate regions, the contribution of FS is generally higher than that of the IHCs through most of the year. Based on our findings, we argue that, despite the limited FS (mainly for precipitation), better estimates of the IHCs could improve the current level of seasonal hydrologic forecast skill over many regions of the globe, at least during some parts of the year.
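A minimal sketch of the predictability metric described above: the ratio of ESP to Rev-ESP root mean square error against a reference simulation, computed per lead time. The arrays below are random placeholders, not VIC output.

```python
# RMSE-ratio sketch: ratio < 1 means initial hydrologic conditions dominate,
# ratio > 1 means climate forecast skill dominates. Data are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(1)
n_years, n_leads = 47, 6

reference = rng.gamma(2.0, 50.0, size=(n_years, n_leads))      # "observed" cumulative runoff
esp = reference + rng.normal(0, 10, size=reference.shape)       # ESP: perfect ICs, climatological forcing
rev_esp = reference + rng.normal(0, 20, size=reference.shape)   # Rev-ESP: climatological ICs, perfect forcing

def rmse(fcst, ref, axis=0):
    return np.sqrt(np.mean((fcst - ref) ** 2, axis=axis))

ratio = rmse(esp, reference) / rmse(rev_esp, reference)          # one value per lead time
print(np.round(ratio, 2))
```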


Entropy, 2018, Vol 20 (9), pp. 642
Author(s): Erlandson Saraiva, Adriano Suzuki, Luis Milan

In this paper, we study the performance of Bayesian computational methods for estimating the parameters of a bivariate survival model based on the Ali–Mikhail–Haq copula with Weibull marginal distributions. The estimation procedure was based on Markov chain Monte Carlo (MCMC) algorithms. We present three versions of the Metropolis–Hastings algorithm: Independent Metropolis–Hastings (IMH), Random Walk Metropolis (RWM) and Metropolis–Hastings with a natural candidate-generating density (MH). Since constructing a good candidate-generating density for IMH and RWM may be difficult, we also describe how to update a parameter of interest using the slice sampling (SS) method. A simulation study was carried out to compare the performances of IMH, RWM and SS, using the sample root mean square error as the performance indicator. The results show that the SS algorithm is an effective alternative to the IMH and RWM methods for simulating values from the posterior distribution, especially for small sample sizes. We also applied these methods to a real data set.
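The sketch below illustrates the mechanics of one of the samplers compared above, a Random Walk Metropolis update, applied to the shape and scale of a single Weibull margin with flat priors on the log scale. The full bivariate Ali–Mikhail–Haq copula likelihood of the paper is not reproduced; the data are synthetic.

```python
# Random Walk Metropolis for the (log shape, log scale) of a Weibull distribution.
import numpy as np

rng = np.random.default_rng(42)
data = rng.weibull(1.8, size=200) * 2.5          # synthetic data: shape 1.8, scale 2.5

def log_lik(log_shape, log_scale, x):
    k, lam = np.exp(log_shape), np.exp(log_scale)
    return (len(x) * (np.log(k) - k * np.log(lam))
            + (k - 1.0) * np.sum(np.log(x)) - np.sum((x / lam) ** k))

theta = np.array([0.0, 0.0])                     # arbitrary starting point
cur_ll = log_lik(*theta, data)
step, n_iter, accepted = 0.1, 5000, 0
samples = np.empty((n_iter, 2))

for i in range(n_iter):
    prop = theta + rng.normal(0.0, step, size=2)        # Gaussian random-walk proposal
    prop_ll = log_lik(*prop, data)
    if np.log(rng.random()) < prop_ll - cur_ll:         # Metropolis acceptance (flat prior)
        theta, cur_ll = prop, prop_ll
        accepted += 1
    samples[i] = theta

burn = samples[1000:]                                   # discard burn-in
print("acceptance rate:", round(accepted / n_iter, 2))
print("posterior means (shape, scale):", np.round(np.exp(burn.mean(axis=0)), 2))
```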


PeerJ, 2021, Vol 9, pp. e10681
Author(s): Jake Dickinson, Marcel de Matas, Paul A. Dickinson, Hitesh B. Mistry

Purpose To assess whether a model-based analysis increases statistical power over an analysis of final-day volumes and to provide insights into more efficient patient-derived xenograft (PDX) study designs. Methods Tumour xenograft time-series data were extracted from a public PDX drug-treatment database. For all two-arm studies, the percent tumour growth inhibition (TGI) at days 14, 21 and 28 was calculated. The treatment effect was analysed using an unpaired, two-tailed t-test (empirical) and a model-based analysis using a likelihood ratio test (LRT). In addition, a simulation study was performed to assess the difference in power between the two data-analysis approaches for PDX and standard cell-line derived xenograft (CDX) studies. Results The model-based analysis had greater statistical power than the empirical approach within the PDX data set. The model-based approach was able to detect TGI values as low as 25%, whereas the empirical approach required at least 50% TGI. The simulation study confirmed these findings and highlighted that CDX studies require fewer animals than PDX studies showing the equivalent level of TGI. Conclusions The study adds to the growing literature showing that a model-based analysis of xenograft data improves statistical power over the common empirical approach. The analysis showed that a model-based approach, based on the first mathematical model of tumour growth, was able to detect a smaller effect size than the empirical approach that is common in such studies. A model-based analysis should allow studies to reduce animal use and experiment length while providing effective insights into compound anti-tumour activity.
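On synthetic data, the sketch below contrasts the two analyses described above: an unpaired t-test on final-day volumes versus a likelihood ratio test on a simple exponential-growth model (log-volume linear in time) with shared versus group-specific growth rates. The simulated effect size and noise level are assumptions, and the growth model is a generic stand-in for the one used in the paper.

```python
# Empirical t-test on final-day volumes vs. an LRT on an exponential-growth model.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
days = np.array([0, 4, 7, 11, 14, 18, 21])
n_per_arm = 8
k_ctrl, k_trt, sigma = 0.10, 0.075, 0.25           # growth rates (1/day), log-scale noise (assumed)

def simulate(k):
    """Log tumour volumes for one arm under exponential growth with Gaussian noise."""
    return np.array([np.log(100) + k * days + rng.normal(0, sigma, days.size)
                     for _ in range(n_per_arm)])

log_ctrl, log_trt = simulate(k_ctrl), simulate(k_trt)

# 1) Empirical analysis: unpaired t-test on final-day volumes
res = stats.ttest_ind(np.exp(log_ctrl[:, -1]), np.exp(log_trt[:, -1]))

# 2) Model-based analysis: LRT, shared vs. group-specific growth rate
def rss(y, X):
    beta, resid, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(resid[0]) if resid.size else float(np.sum((y - X @ beta) ** 2))

y = np.concatenate([log_ctrl.ravel(), log_trt.ravel()])
t_all = np.tile(days, 2 * n_per_arm)
grp = np.repeat([0.0, 1.0], n_per_arm * days.size)   # 1 = treated arm
X0 = np.column_stack([np.ones_like(y), t_all])               # shared growth rate
X1 = np.column_stack([np.ones_like(y), t_all, grp * t_all])   # group-specific rate
lrt = y.size * np.log(rss(y, X0) / rss(y, X1))
p_lrt = stats.chi2.sf(lrt, df=1)

print(f"t-test p = {res.pvalue:.3f}, LRT p = {p_lrt:.4f}")
```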


Irriga, 2007, Vol 12 (2), pp. 225-234
Author(s): Silvio Cesar Sampaio, Elisandro Pires Frigo, Marcio Antonio Vilas Boas, Manoel M. F. de Queiroz, Benedito Martins Gomes, ...

HEAD LOSSES IN PIPELINES AND CONNECTIONS CARRYING POULTRY WASTEWATER

Silvio Cesar Sampaio; Elisandro Pires Frigo; Marcio Antonio Vilas Boas; Manoel M. F. de Queiroz; Benedito Martins Gomes; Larissa Schmatz Mallmann. Recursos Hídricos e Saneamento Ambiental, Universidade Estadual do Oeste do Paraná, Cascavel, PR

ABSTRACT: This work aimed to estimate head losses in commercial pipelines using poultry wastewater (ARA) as the circulating fluid. The pipes were made of galvanized steel and PVC, with diameters ranging from 32 to 75 mm. Test benches were built for local and distributed head losses, and flow and pressure data were recorded for the various pipes and fittings of both materials. The data were tabulated and fitted to power-law models for the distributed head loss and to a "k" factor for the local head loss. For comparison and assessment, data were also recorded using urban supply water (AAU) as the circulating fluid. On average, ARA decreased the Hazen-Williams roughness coefficient "C" by 42% and 21% relative to AAU for the PVC and galvanized steel pipes, respectively. For the distributed head loss, an increase varying from 31 to 8% is estimated for ARA relative to AAU, although this difference depends on the flow rate. In solvent-welded fittings, the local head loss with ARA was higher than with AAU, whereas the opposite was observed for threaded fittings. KEYWORDS: head loss, irrigation, hydraulics.

SAMPAIO, S. C.; FRIGO, E. P.; VILAS BOAS, M. A.; QUEIROZ, M. M. F. de; GOMES, B. M.; MALLMANN, L. S. HEAD LOSSES IN PIPELINES AND CONNECTIONS CARRYING POULTRY WASTEWATER

ABSTRACT: An appropriate hydraulic system design requires knowledge of liquid behaviour in pressurized piping. This work aimed to evaluate head losses in pipelines and connections carrying poultry wastewater. Commercial pipelines made of galvanized iron and PVC, with diameters from 1" to 3", were used. Poultry wastewater presented an average decrease of 42% and 21% in the Hazen-Williams coefficient values, when compared to water, for the PVC and galvanized iron pipelines, respectively. In general, head loss in the pipelines increased from 31 to 8% with poultry wastewater in relation to water. The connection type affected the results for localized head loss with poultry wastewater. KEYWORDS: wastewater, irrigation, hydraulics
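A minimal sketch of the Hazen-Williams comparison behind the abstract: distributed head loss for clean water versus poultry wastewater, the latter represented by reducing the roughness coefficient C by the average percentages reported (42% for PVC, 21% for galvanized pipe). Pipe length, flow rate, diameter and the baseline C values are assumptions.

```python
# Hazen-Williams distributed head loss (SI form), clean water vs. poultry wastewater
# represented by a reduced roughness coefficient C. All input values are assumed.
def hazen_williams_hf(q, length, d, c):
    """Distributed head loss (m): q in m^3/s, length and d in m, c dimensionless."""
    return 10.67 * length * q ** 1.852 / (c ** 1.852 * d ** 4.87)

q, length, d = 4.0e-3, 100.0, 0.050          # 4 L/s through 100 m of 50 mm pipe (assumed)
cases = {
    "PVC, clean water":                150.0,
    "PVC, poultry wastewater":         150.0 * (1 - 0.42),
    "Galv. pipe, clean water":         120.0,
    "Galv. pipe, poultry wastewater":  120.0 * (1 - 0.21),
}
for label, c in cases.items():
    print(f"{label:32s} C = {c:5.1f}  hf = {hazen_williams_hf(q, length, d, c):5.2f} m")
```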


2020, Vol 45 (3), pp. 47-56
Author(s): Aline Amaral Madeira

Domestic and industrial hydraulic drainage networks have gradually become more complicated because of the rapid expansion of cities. In surcharged hydraulic systems, head losses may become rather significant and should not be neglected, because they could result in several problems. This work presents an investigation of major and minor head losses in a hydraulic flow circuit simulating water transport in a drainage network at room temperature (298.15 K) under atmospheric pressure (101,325 Pa). The losses produced by the viscous effect of the fluid through the cast-iron rectilinear pipe used (RP-11) and the localized losses generated by two flow appurtenances, one fully open ball valve (BV-1) and one module of forty-four 90° elbows (90E-8), were measured experimentally. The experimental data generated head-loss curves that were well fitted by power-law regressions, with correlation coefficients (R²) of 0.9792, 0.9924 and 0.9820 for BV-1, 90E-8 and RP-11, respectively. Experimental head-loss equations and local loss coefficients for BV-1 and 90E-8 were determined successfully. The Moody diagram proved to be an appropriate tool for approximate estimation of the Darcy-Weisbach friction factor: good agreement was observed between the friction factor values obtained from the experimental measurements and from the Moody diagram, with a mean absolute deviation of 0.0136.
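The sketch below shows how the Darcy-Weisbach friction factor read from the Moody diagram can be approximated numerically, using the Swamee-Jain explicit formula and a fixed-point solution of the Colebrook equation. The roughness, diameter and Reynolds number are assumed values, not the RP-11 measurements.

```python
# Darcy-Weisbach friction factor: Swamee-Jain explicit estimate and Colebrook
# fixed-point iteration for fully developed turbulent pipe flow.
import math

def swamee_jain(re, rel_rough):
    """Explicit approximation of the Colebrook friction factor."""
    return 0.25 / math.log10(rel_rough / 3.7 + 5.74 / re ** 0.9) ** 2

def colebrook(re, rel_rough, iters=50):
    """Fixed-point iteration of the Colebrook-White equation."""
    f = swamee_jain(re, rel_rough)               # start from the explicit estimate
    for _ in range(iters):
        f = (-2.0 * math.log10(rel_rough / 3.7 + 2.51 / (re * math.sqrt(f)))) ** -2
    return f

eps, d, re = 2.6e-4, 0.0266, 8.0e4               # cast-iron roughness (m), pipe diameter (m), Re (assumed)
print("Swamee-Jain f =", round(swamee_jain(re, eps / d), 4))
print("Colebrook   f =", round(colebrook(re, eps / d), 4))
```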


2018, Vol 19 (1), pp. 264-273
Author(s): M. Kutyłowska

Abstract This paper presents the results of failure rate prediction by means of support vector machines (SVM), a non-parametric regression method. A hyperplane is used to divide the whole domain in such a way that objects of different affiliations are separated from one another. The number of support vectors determines the complexity of the relations between dependent and independent variables. The calculations were performed using Statistica 12.0. Operational data for one selected zone of the water supply system for the period 2008–2014 were used for forecasting. The whole data set (in which data on distribution pipes were distinguished from those on house connections) for the years 2008–2014 was randomly divided into two subsets: a training subset of 75% (5 years) and a testing subset of 25% (2 years). The dependent variables (λr for the distribution pipes and λp for the house connections) were forecast using independent variables (the total lengths Lr and Lp and the numbers of failures Nr and Np of the distribution pipes and the house connections, respectively). Four kinds of kernel functions (linear, polynomial, sigmoidal and radial basis functions) were applied. The SVM model based on the linear kernel function was found to be optimal for predicting the failure rate of each kind of water conduit. This model's maximum relative error in predicting the failure rates λr and λp during the testing stage amounted to about 4% and 14%, respectively. The average experimental failure rates over the whole analysed period amounted to 0.18, 0.44, 0.17 and 0.24 fail./(km·year) for the distribution pipes, the house connections, and the distribution pipes made of PVC and of cast iron, respectively.
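A minimal sketch of the regression set-up described above: support vector regression with a linear kernel on a 75%/25% split, predicting a failure rate from pipe length and failure count. The data below are synthetic placeholders, not the operational records analysed in the paper.

```python
# Linear-kernel support vector regression for a failure rate, on synthetic data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(3)
n = 84                                           # number of records (assumed)
length_km = rng.uniform(50, 60, n)               # total length of distribution pipes (assumed range)
failures = rng.poisson(lam=0.18 * length_km)     # failure counts
rate = failures / length_km                      # observed failure rate, fail./(km*year)

X = np.column_stack([length_km, failures])
X_train, X_test, y_train, y_test = train_test_split(X, rate, test_size=0.25, random_state=0)

model = make_pipeline(StandardScaler(), SVR(kernel="linear", C=1.0, epsilon=0.01))
model.fit(X_train, y_train)
pred = model.predict(X_test)
rel_err = np.abs(pred - y_test) / y_test
print("max relative error on the test subset:", round(100 * rel_err.max(), 1), "%")
```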

