Computationally Efficient and Accurate Solution for Colebrook Equation Based On Lagrange Theorem

Author(s):  
Ahmed Amine Lamri ◽  
Said M Easa

Abstract: Computationally efficient solutions (less computation time) for the Colebrook equation are important for the simulation of pipeline networks. However, this friction resistance law is implicit in the friction factor. In the present study, a computationally efficient and accurate explicit solution for the friction head loss in pipeline networks is developed using the Lagrange inversion theorem. The result is a fast-converging power series. Truncated and regressed expressions are obtained using two and three terms of the expanded series, with maximum relative errors of 0.149% and 0.040%, respectively. The proposed solution is as computationally efficient as existing analytic solutions but provides better accuracy in estimating the friction head loss.
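
As an illustration of the benchmark against which such explicit formulas are judged, the sketch below solves the implicit Colebrook equation by fixed-point iteration on 1/√f. The function name, initial guess and tolerance are illustrative choices, not taken from the paper.

```python
import math

def colebrook_friction_factor(re, rel_rough, tol=1e-12, max_iter=50):
    """Reference solution of the implicit Colebrook equation
        1/sqrt(f) = -2*log10(rel_rough/3.7 + 2.51/(re*sqrt(f)))
    by fixed-point iteration on x = 1/sqrt(f)."""
    x = 8.0  # initial guess for 1/sqrt(f), adequate for turbulent flow
    for _ in range(max_iter):
        x_new = -2.0 * math.log10(rel_rough / 3.7 + 2.51 * x / re)
        if abs(x_new - x) < tol:
            return 1.0 / x_new ** 2
        x = x_new
    return 1.0 / x ** 2

# Example: Re = 1e5 and relative roughness eps/D = 1e-4 give f of about 0.0185
print(colebrook_friction_factor(1e5, 1e-4))
```

The relative error of any explicit approximation can then be computed against such an iterative reference.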

2019 ◽  
Vol 13 (2) ◽  
pp. 174-180
Author(s):  
Poonam Sharma ◽  
Ashwani Kumar Dubey ◽  
Ayush Goyal

Background: With the growing demand for image processing and the use of Digital Signal Processors (DSPs), the efficiency of multiplier-accumulator (MAC) units has become a bottleneck. We reviewed several patents on Application Specific Instruction Set Processors (ASIPs), in which design considerations are proposed for efficient application-specific computing to enhance throughput. Objective: The study aims to develop and analyze a computationally efficient method to optimize the speed performance of the MAC. Methods: The work presented here proposes the design of an Application Specific Instruction Set Processor with a multiplier-accumulator integrated as dedicated hardware. This MAC is optimized for high-speed performance and forms the application-specific part of the processor; here it serves as the DSP block of an image processor, while a 16-bit Reduced Instruction Set Computer (RISC) core gives the design the flexibility for general computing. The design was emulated on a Xilinx Field Programmable Gate Array (FPGA) and tested on various real-time computing tasks. Results: Synthesis of the hardware logic with FPGA tools gave the operating frequencies of the legacy and proposed methods, and simulation of the logic verified its functionality. Conclusion: With the proposed method, a significant improvement of 16% in throughput was observed for 256-step multiply-accumulate iterations on 8-bit sample data. Such an improvement can help reduce the computation time in many digital signal processing applications where multiplication and addition are performed iteratively.
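
For reference, the operation the dedicated MAC hardware accelerates is a plain iterative multiply-accumulate; the sketch below shows a 256-step MAC on 8-bit operands in Python. It is illustrative only and is not a model of the proposed ASIP.

```python
def mac_256(samples, coeffs):
    """256-step multiply-accumulate on 8-bit operands: each 8-bit x 8-bit
    product is added into a wide accumulator, which is the step the
    dedicated MAC hardware performs iteratively."""
    acc = 0
    for s, c in zip(samples[:256], coeffs[:256]):
        acc += (s & 0xFF) * (c & 0xFF)
    return acc

# Example: accumulate 256 8-bit samples against unit coefficients
print(mac_256(list(range(256)), [1] * 256))   # 0 + 1 + ... + 255 = 32640
```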


2015 ◽  
Vol 35 (3) ◽  
pp. 442-457 ◽  
Author(s):  
Acácio Perboni ◽  
Jose A. Frizzone ◽  
Antonio P. de Camargo ◽  
Marinaldo F. Pinto

Local head losses must be considered to properly estimate the maximum length of drip irrigation laterals. The aim of this work was to develop a model based on dimensional analysis for calculating head loss along laterals, accounting for in-line drippers. Several measurements were performed with 12 models of emitters to obtain the experimental data required for developing and assessing the model. Based on the Camargo & Sentelhas coefficient, the model showed excellent precision and accuracy in estimating head loss. The deviation between estimated and observed head loss increased with head loss, and the maximum deviation reached 0.17 m. The maximum relative error was 33.75%, and only 15% of the data set presented relative errors higher than 20%. Neglecting local head losses overestimated the maximum lateral length by 19.48% for pressure-compensating drippers and 16.48% for non-pressure-compensating drippers.
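
The head-loss budget such a model targets can be sketched as distributed friction plus a local loss at each in-line emitter. The Python sketch below uses the Blasius friction factor and a lumped local-loss coefficient k_local as stand-ins for the paper's dimensional-analysis model; all names, coefficients and the example values are illustrative assumptions.

```python
import numpy as np

def lateral_head_loss(q_emitter, n_emitters, spacing, d_inner, k_local, nu=1.0e-6, g=9.81):
    """Head-loss budget for a drip lateral with in-line emitters: Blasius
    friction for each pipe segment plus a local loss K*v^2/(2g) at each emitter.
    Illustrative sketch only; Blasius is valid only for turbulent segments."""
    h = 0.0
    for i in range(n_emitters):
        q = q_emitter * (n_emitters - i)          # discharge carried by segment i (m^3/s)
        v = 4.0 * q / (np.pi * d_inner ** 2)      # mean velocity (m/s)
        re = v * d_inner / nu                     # Reynolds number
        f = 0.316 * re ** -0.25                   # Blasius friction factor (smooth pipe)
        h += f * spacing / d_inner * v ** 2 / (2.0 * g)   # distributed (friction) loss
        h += k_local * v ** 2 / (2.0 * g)                 # local loss at the in-line emitter
    return h

# Example: 100 emitters of 4 L/h, 0.5 m spacing, 16 mm lateral, K = 0.4 per emitter
print(lateral_head_loss(4.0 / 3.6e6, 100, 0.5, 0.016, 0.4))
```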


2017 ◽  
Author(s):  
Matthias Morzfeld ◽  
Jesse Adams ◽  
Spencer Lunderman ◽  
Rafael Orozco

Abstract. Many applications in science require that computational models and data be combined. In a Bayesian framework, this is usually done by defining likelihoods based on the mismatch of model outputs and data. However, matching model outputs and data in this way can be unnecessary or impossible. For example, using large amounts of steady state data is unnecessary because these data are redundant, it is numerically difficult to assimilate data in chaotic systems, and it is often impossible to assimilate data of a complex system into a low-dimensional model. These issues can be addressed by selecting features of the data, and defining likelihoods based on the features, rather than by the usual mismatch of model output and data. Our goal is to contribute to a fundamental understanding of such a feature-based approach that allows us to assimilate selected aspects of data into models. Specifically, we explain how the feature-based approach can be interpreted as a method for reducing an effective dimension, and derive new noise models, based on perturbed observations, that lead to computationally efficient solutions. Numerical implementations of our ideas are illustrated in four examples.
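
As a minimal illustration of a feature-based likelihood, the sketch below replaces the usual point-by-point misfit with a Gaussian misfit on a single summary feature (here the long-run mean of a time series). The feature choice and the noise scale sigma_f are assumptions for illustration, not the paper's noise models.

```python
import numpy as np

def feature(series):
    """Example feature: the long-run (steady-state) mean of a time series."""
    return np.mean(series)

def feature_log_likelihood(model_output, data, sigma_f=0.1):
    """Gaussian log-likelihood defined on the feature mismatch, instead of the
    usual point-by-point misfit of model output and data."""
    df = feature(model_output) - feature(data)
    return -0.5 * (df / sigma_f) ** 2

# Example: the two series disagree point-wise but share nearly the same mean,
# so the feature-based likelihood scores them as close
rng = np.random.default_rng(0)
data = 1.0 + 0.1 * rng.standard_normal(1000)
model_output = 1.0 + 0.1 * rng.standard_normal(1000)
print(feature_log_likelihood(model_output, data))
```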


2019 ◽  
Vol 11 (16) ◽  
pp. 1874 ◽  
Author(s):  
Xing Chen ◽  
Tianzhu Yi ◽  
Feng He ◽  
Zhihua He ◽  
Zhen Dong

High-resolution low-frequency synthetic aperture radar (SAR) suffers from serious range-azimuth phase coupling due to its large bandwidth and long integration time. High-resolution SAR processing methods are necessary for focusing the raw data of such radars. The generalized chirp scaling algorithm (GCSA) is generally accepted as an attractive solution for focusing SAR systems with low frequency, large bandwidth and wide beamwidth. However, as the bandwidth and/or beamwidth increase, the serious phase coupling limits the performance of the current GCSA and degrades the imaging quality. The degradation has two main causes: the residual high-order coupling phase and the non-negligible error introduced by the linear approximation of the stationary phase point when applying the principle of stationary phase (POSP). According to the characteristics of a high-resolution low-frequency SAR signal, this paper first presents a principle for determining the required order in range frequency. After compensating for the range-independent coupling phase above the third order, an improved GCSA based on the Lagrange inversion theorem is analytically derived. The Lagrange inversion enables the high-order range-dependent coupling phase to be accurately compensated. Imaging results of P- and L-band SAR data demonstrate the excellent performance of the proposed algorithm compared to the existing GCSA. The image quality and focusing depth in the range dimension are greatly improved. The improved method makes it possible to efficiently process high-resolution low-frequency SAR data with a wide swath.
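
The inversion step at the core of the improved GCSA can be illustrated with the classical Lagrange inversion formula itself. The generic SymPy sketch below builds the inverse series term by term; it is not the paper's SAR-specific derivation, and the example series is arbitrary.

```python
import sympy as sp

def lagrange_invert(f, x, y, order=5):
    """Series of the inverse of y = f(x) about x = 0 (with f(0) = 0, f'(0) != 0),
    built from the Lagrange inversion formula: the n-th coefficient is
    (1/n!) * d^{n-1}/dx^{n-1} [(x/f(x))^n] evaluated at x = 0."""
    g = sp.Integer(0)
    for n in range(1, order + 1):
        coeff = sp.limit(sp.diff((x / f) ** n, x, n - 1), x, 0) / sp.factorial(n)
        g += coeff * y ** n
    return sp.expand(g)

x, y = sp.symbols('x y')
# Example: inverting y = x*exp(x) reproduces the Lambert W series sum((-n)^(n-1)/n! * y^n)
print(lagrange_invert(x * sp.exp(x), x, y, order=5))
```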


2016 ◽  
Author(s):  
Constantijn J. Berends ◽  
Roderik S. W. van de Wal

Abstract. We present and evaluate several optimizations to a standard flood-fill algorithm in terms of computational efficiency. As an example, we determine the land/ocean mask for a 1 km resolution digital elevation model (DEM) of North America and Greenland, a geographical area of roughly 7000 by 5000 km (about 35 million elements), about half of which is covered by ocean. Determining the land/ocean mask with our improved flood-fill algorithm reduces computation time by 90 % relative to using a standard stack-based flood-fill algorithm. In another experiment, we use the bedrock elevation, ice thickness and geoid perturbation fields from the output of a coupled ice-sheet–sea-level equation model at 30,000 years before present and determine the extent of Lake Agassiz, using both the standard and improved versions of the flood-fill algorithm. We show that several optimizations to the flood-fill algorithm used to fill a depression up to a water level that is not defined beforehand decrease the computation time by up to 99 %. The resulting reduction in computation time allows determination of the extent and volume of depressions in a DEM over large geographical grids, or repeatedly over long periods of time, where computation time might otherwise be a limiting factor.
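
The baseline against which such optimizations are measured is the standard stack-based flood fill. The sketch below marks ocean cells in a DEM from a seed cell; function and parameter names are illustrative, and none of the paper's optimizations are included.

```python
import numpy as np

def ocean_mask(dem, seed, sea_level=0.0):
    """Standard stack-based flood fill: mark every cell connected to `seed`
    whose elevation lies below `sea_level` (4-connectivity)."""
    mask = np.zeros(dem.shape, dtype=bool)
    stack = [seed]
    while stack:
        i, j = stack.pop()
        if mask[i, j] or dem[i, j] >= sea_level:
            continue
        mask[i, j] = True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < dem.shape[0] and 0 <= nj < dem.shape[1] and not mask[ni, nj]:
                stack.append((ni, nj))
    return mask

# Example: a small synthetic DEM with the ocean seeded at the lower-left corner
dem = np.array([[ 5.0,  3.0, -2.0],
                [-1.0, -3.0, -2.5],
                [-4.0, -3.5,  2.0]])
print(ocean_mask(dem, seed=(2, 0)))
```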


2005 ◽  
Vol 44 (05) ◽  
pp. 674-686 ◽  
Author(s):  
B. Pfeifer ◽  
M. Seger ◽  
C. Hintermüller ◽  
F. Hanser ◽  
R. Modre ◽  
...  

Summary Objective: The computer model-based computation of the cardiac activation sequence in humans has recently been the subject of successful clinical validation. This method is of potential interest for guiding ablation therapy of arrhythmogenic substrates. However, computation times of almost an hour are unattractive in a clinical setting. Thus, the objective is the development of a method that performs the computation within a few minutes of run time. Methods: The computationally most expensive part is the product of the lead field matrix with a matrix containing the source pattern on the cardiac surface. The particular biophysical properties of both matrices are used to speed up this operation by more than an order of magnitude. A conjugate gradient optimizer was developed in C++ for computing the activation map. Results: The software was tested on synthetic and clinical data. The increase in speed with respect to the previously used Fortran 77 implementation was a factor of 30 at comparable quality of the results. As an additional finding, the coupled regularization strategy, originally introduced to save computation time, also reduced the sensitivity of the method to the choice of the regularization parameter. Conclusions: As shown for data from a WPW patient, the developed software can deliver diagnostically valuable information in a much shorter time than current clinical routine methods. Its main application could be the localization of focal arrhythmogenic substrates.
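
The optimizer mentioned above can be pictured as a conjugate-gradient solve of a regularized least-squares problem. The minimal Python sketch below uses plain Tikhonov regularization as a stand-in for the paper's coupled regularization strategy; the problem form, names and test data are assumptions for illustration.

```python
import numpy as np

def cg_tikhonov(A, b, lam, n_iter=200, tol=1e-8):
    """Conjugate-gradient solve of the Tikhonov-regularised normal equations
    (A^T A + lam*I) x = A^T b. Minimal sketch, not the paper's C++ optimizer."""
    def apply(v):
        return A.T @ (A @ v) + lam * v
    x = np.zeros(A.shape[1])
    r = A.T @ b - apply(x)        # initial residual
    p = r.copy()
    rs = r @ r
    for _ in range(n_iter):
        Ap = apply(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Example: small random stand-in for a lead field matrix and a measurement vector
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 20))
b = rng.standard_normal(50)
print(cg_tikhonov(A, b, lam=0.1)[:5])
```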


Geophysics ◽  
2002 ◽  
Vol 67 (1) ◽  
pp. 126-134 ◽  
Author(s):  
Frank Adler

Seismic imaging processes are, in general, formulated under the assumption of a correct macrovelocity model. However, seismic subsurface images are very sensitive to the accuracy of the macrovelocity model. This paper investigates how the output of Kirchhoff inversion/migration changes for perturbations of a given 3-D laterally inhomogeneous macrovelocity model. The displacement of a reflector image point from a perturbation of the given velocity model is determined in a first-order approximation by the corresponding traveltime and slowness perturbations as well as the matrix of the Beylkin determinant. The required traveltime derivatives can be calculated with ray perturbation theory. Using this result, a new, computationally efficient Kirchhoff inversion/migration technique is developed to predict in parallel a series of subsurface images for perturbations of a given macrovelocity model during a single inversion/migration process applied to the unmigrated seismic data. These images are constructed by superposition of the seismic data at predicted image point locations which lie on surfaces that expand from the initial image point as a function of the velocity perturbation. Because of the analogy to Huygens wavefronts in wave propagation, the technique is called Kirchhoff image propagation. A 2-D implementation of Kirchhoff image propagation requires about 1.2 times the computation time of a single migration to calculate a set of propagated images. The propagated images provide good approximations to remigrated images and are applied to migration velocity analysis.


2017 ◽  
Vol 19 (4) ◽  
pp. 493-506 ◽  
Author(s):  
Filippo Pecci ◽  
Edo Abraham ◽  
Ivan Stoianov

This paper presents a novel analysis of the accuracy of quadratic approximations of the Hazen–Williams (HW) head loss formula, which enables the control of constraint violations in optimisation problems for water supply networks. The two smooth polynomial approximations considered here minimise the absolute and relative errors, respectively, with respect to the original non-smooth HW head loss function over a range of flows. Since quadratic approximations are used to formulate head loss constraints for different optimisation problems, we are interested in quantifying and controlling their absolute errors, which affect the degree of constraint violation of feasible candidate solutions. We derive new exact analytical formulae for the absolute errors as a function of the approximation domain, pipe roughness and relative error tolerance. We investigate the efficacy of the proposed quadratic approximations in mathematical optimisation problems for advanced pressure control in an operational water supply network. We propose a strategy for choosing the approximation domain of each pipe such that the optimisation results are sufficiently close to the exact hydraulically feasible solution space. Simulations with multiple parameters show that the approximation errors are consistent with our analytical predictions.
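
A numerical counterpart of the idea is to fit a quadratic to the HW head-loss curve over a flow range and inspect the resulting absolute error. The sketch below uses a least-squares fit rather than the paper's analytical error formulae; the SI form of the HW formula is a common one, and the pipe parameters and flow range are illustrative.

```python
import numpy as np

def hw_head_loss(q, length=1000.0, c=100.0, diameter=0.3):
    """One common SI form of the Hazen-Williams friction head loss:
    h = 10.67 * L * q^1.852 / (C^1.852 * D^4.871), with q in m^3/s."""
    return 10.67 * length * np.abs(q) ** 1.852 / (c ** 1.852 * diameter ** 4.871)

# Fit a quadratic h ~ a*q^2 + b*q + c0 over a flow range (the approximation domain)
q = np.linspace(0.01, 0.1, 500)                # flows in m^3/s
h = hw_head_loss(q)
a, b, c0 = np.polyfit(q, h, 2)                 # least-squares quadratic approximation
max_abs_err = np.max(np.abs(np.polyval([a, b, c0], q) - h))
print(f"max absolute error over the domain: {max_abs_err:.4f} m")
```

Shrinking or shifting the flow range changes the maximum absolute error, which is the trade-off the paper's choice of approximation domain controls.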

