Clear-Water Abutment Scour Prediction for Simple and Complex Channels

Author(s):  
J.R. Richardson ◽  
Robert Trivino

One hundred sixty-one laboratory clear-water abutment scour data sets and one field data set were regressed using a Box-Tidwell power transformation procedure. The predictive equation was verified using additional laboratory data for compound channels. A momentum ratio term was used to account for flow redistribution, differences in overbank geometries, and scale sizes. The regression identified the most applicable and dominant independent variables affecting the magnitude of clear-water abutment scour. The resulting equation has a significantly lower standard error of estimate than previously published equations. This formulation is more robust and predicts abutment scour depth more accurately at both laboratory and prototype scales. With additional refinement, improved abutment scour predictive equations can be realized by using alternative regression procedures and by including the momentum ratio to buffer the effect of abutment length at large prototype scales.
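
A minimal sketch of the kind of power-transformation regression involved, using direct nonlinear least squares as a simple stand-in for the full Box-Tidwell iteration. The predictor name (abutment length L), the synthetic data, and the fitted exponent are illustrative assumptions; the paper's actual variable set and coefficients are not reproduced here.

```python
# Power-transformation regression sketch: fit y = b0 + b1 * x**alpha.
# Synthetic stand-in data; not the paper's data or equation.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
L = rng.uniform(0.1, 2.0, 161)                 # hypothetical abutment length (m)
ys = 0.5 * L**0.4 + rng.normal(0, 0.02, 161)   # synthetic scour depth (m)

def power_model(x, b0, b1, alpha):
    """Linear model in a power-transformed predictor."""
    return b0 + b1 * x**alpha

params, _ = curve_fit(power_model, L, ys, p0=[0.0, 1.0, 1.0])
b0, b1, alpha = params
residuals = ys - power_model(L, *params)
see = np.sqrt(np.sum(residuals**2) / (len(ys) - 3))  # standard error of estimate
print(f"y = {b0:.3f} + {b1:.3f} * L^{alpha:.3f}, SEE = {see:.4f}")
```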

Minerals ◽  
2020 ◽  
Vol 11 (1) ◽  
pp. 33
Author(s):  
Valérie Laperche ◽  
Bruno Lemière

Portable X-ray fluorescence spectroscopy is now widely used in almost any field of geoscience. Handheld XRF analysers are easy to use, and results are available in almost real time anywhere. However, the results do not always match laboratory analyses, and this may deter users. Rather than analytical issues, the bias often results from sample preparation differences. Instrument setup and analysis conditions need to be fully understood to avoid reporting erroneous results. The technique’s limitations must be kept in mind. We describe a number of issues and potential pitfalls observed from our experience and described in the literature. This includes the analytical mode and parameters; protective films; sample geometry and density, especially for light elements; analytical interferences between elements; physical effects of the matrix and sample condition, and more. Nevertheless, portable X-ray fluorescence spectroscopy (pXRF) results gathered with sufficient care by experienced users are both precise and reliable, if not fully accurate, and they can constitute robust data sets. Rather than being a substitute for laboratory analyses, pXRF measurements are a valuable complement to those. pXRF improves the quality and relevance of laboratory data sets.


2018 ◽  
Vol 11 (11) ◽  
pp. 6203-6230 ◽  
Author(s):  
Simon Ruske ◽  
David O. Topping ◽  
Virginia E. Foot ◽  
Andrew P. Morse ◽  
Martin W. Gallagher

Abstract. Primary biological aerosols, including bacteria, fungal spores and pollen, have important implications for public health and the environment. Such particles may contain different concentrations of chemical fluorophores and will respond differently in the presence of ultraviolet light, potentially allowing different types of biological aerosol to be discriminated. The development of ultraviolet light-induced fluorescence (UV-LIF) instruments such as the Wideband Integrated Bioaerosol Sensor (WIBS) has allowed size, morphology and fluorescence measurements to be collected in real time. However, without studying instrument responses in the laboratory, it is unclear to what extent different types of particles can be discriminated. Collecting laboratory data is vital to validate any approach used to analyse the data and to ensure that the available data are used as effectively as possible. In this paper a variety of methodologies are tested on a range of particles collected in the laboratory. Hierarchical agglomerative clustering (HAC) has previously been applied to UV-LIF data in a number of studies and is tested alongside other algorithms that could be used to solve the classification problem: density-based spatial clustering of applications with noise (DBSCAN), k-means and gradient boosting. Whilst HAC was able to effectively discriminate between reference narrow-size-distribution PSL particles, yielding a classification error of only 1.8 %, similar results were not obtained when testing on laboratory-generated aerosol, where the classification error was found to be between 11.5 % and 24.2 %. Furthermore, this approach carries large uncertainty in terms of the data preparation and the cluster index used, and we were unable to attain consistent results across the different sets of laboratory-generated aerosol tested. The lowest classification errors were obtained using gradient boosting, where the misclassification rate was between 4.38 % and 5.42 %. The largest contribution to the error, in the case of the higher misclassification rate, came from the pollen samples, where 28.5 % of the samples were incorrectly classified as fungal spores. The technique was robust to changes in data preparation provided a fluorescence threshold was applied to the data. In the event that laboratory training data are unavailable, DBSCAN was found to be a potential alternative to HAC. For one of the data sets, where 22.9 % of the data were left unclassified, we were able to produce three distinct clusters, obtaining a classification error of only 1.42 % on the classified data. These results could not be replicated for the other data set, where 26.8 % of the data were not classified and a classification error of 13.8 % was obtained. This method, like HAC, also appeared to be heavily dependent on data preparation, requiring a different selection of parameters depending on the preparation used. Further analysis will also be required to confirm our selection of the parameters when using this method on ambient data. There is a clear need for the collection of additional laboratory-generated aerosol to improve interpretation of current databases and to aid the analysis of data collected from the ambient environment. New instruments with greater resolution are likely to improve on current discrimination between pollen, bacteria and fungal spores, and even between different species; however, the need for extensive laboratory data sets will grow as a result.
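
As a rough illustration of the four method families compared above, the following scikit-learn sketch runs HAC, DBSCAN, k-means and gradient boosting on synthetic four-feature data standing in for WIBS size and fluorescence measurements. The preprocessing, parameter values and cluster counts are assumptions, not the paper's configuration.

```python
# Schematic comparison of the clustering/classification options discussed above.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import AgglomerativeClustering, DBSCAN, KMeans
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = make_blobs(n_samples=600, centers=3, n_features=4, random_state=1)
X = StandardScaler().fit_transform(X)   # scaling stands in for data preparation

# Unsupervised options (no laboratory training labels needed):
hac = AgglomerativeClustering(n_clusters=3).fit_predict(X)
dbs = DBSCAN(eps=0.5, min_samples=10).fit_predict(X)   # -1 = unclassified noise
km = KMeans(n_clusters=3, n_init=10).fit_predict(X)

# Supervised option (requires labelled laboratory data):
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=1)
gbc = GradientBoostingClassifier().fit(Xtr, ytr)
print("gradient boosting misclassification rate:", 1 - gbc.score(Xte, yte))
print("DBSCAN unclassified fraction:", np.mean(dbs == -1))
```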


Author(s):  
Mohamed E. Mead ◽  
Gauss M. Cordeiro ◽  
Ahmed Z. Afify ◽  
Hazem Al Mofleh

Mahdavi and Kundu (2017) introduced a family for generating univariate distributions called the alpha power transformation. As a special case, they studied the properties of the alpha power transformed exponential distribution. We provide some mathematical properties of this distribution and define a four-parameter lifetime model called the alpha power exponentiated Weibull distribution. It generalizes some well-known lifetime models such as the exponentiated exponential, exponentiated Rayleigh, exponentiated Weibull and Weibull distributions. The importance of the new distribution comes from its ability to model monotone and non-monotone failure rate functions, which are quite common in reliability studies. We derive some basic properties of the proposed distribution, including quantile and generating functions, moments and order statistics. The maximum likelihood method is used to estimate the model parameters. A simulation study investigates the performance of the estimates. We illustrate the importance of the proposed distribution over the McDonald Weibull, beta Weibull, modified Weibull, transmuted Weibull and exponentiated Weibull distributions by means of two real data sets.
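
For reference, the alpha power transformation of a baseline cdf F(x), as given by Mahdavi and Kundu (2017), is shown below together with one common parameterization of the exponentiated Weibull cdf that would serve as the baseline here; the paper's own parameterization may differ.

```latex
% Alpha power transformation of a baseline cdf F(x); alpha = 1 recovers F.
F_{\mathrm{APT}}(x) =
\begin{cases}
  \dfrac{\alpha^{F(x)} - 1}{\alpha - 1}, & \alpha > 0,\ \alpha \neq 1,\\[6pt]
  F(x), & \alpha = 1,
\end{cases}
\qquad
F(x) = \left[1 - e^{-(\lambda x)^{\beta}}\right]^{\theta}, \quad x > 0.
```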


Geophysics ◽  
1986 ◽  
Vol 51 (3) ◽  
pp. 780-787 ◽  
Author(s):  
Kai Hsu ◽  
Arthur B. Baggeroer

Modern digital sonic tools can record full waveforms using an array of receivers. The recorded waveforms are extremely complicated because wave components overlap in time. Conventional beamforming approaches, such as semblance processing, while robust, sometimes do not resolve the interfering wave components propagating at similar speeds, such as multiple compressional arrivals due to the formation alteration surrounding the borehole. Here the maximum‐likelihood method (MLM), a high‐resolution array processing algorithm, is modified and applied to process borehole array sonic data. Extensive modifications of the original MLM algorithm were necessary because of the transient character of the sonic data and its effect upon the spectral covariance matrix. We applied MLM to several array sonic data sets, including laboratory data, synthetic waveforms, and field data taken by a Schlumberger array sonic tool. MLM’s slowness resolution is demonstrated in resolving a secondary compressional arrival from the primary compressional arrival in an altered formation, and the formation compressional arrival in the presence of a stronger casing arrival in an unbonded cased hole. In comparison with the semblance processing results, the MLM results clearly show a better slowness resolution. However, in the case of a weak formation arrival, the semblance processing tends to enhance and resolve the weak arrival by the semblance normalization procedure, while the MLM, designed to estimate the signal strength, does not. The heavy computational requirement (mainly, many matrix inversions) in the MLM makes it much slower than semblance processing, which may prohibit implementation of the MLM algorithm in a real‐time environment.
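
A minimal numpy sketch of the frequency-domain MLM (Capon) slowness estimator that the abstract builds on, P(s) = 1 / (e^H R^{-1} e) for a plane-wave steering vector e. The array geometry, frequency and the synthetic two-arrival covariance are illustrative assumptions; the paper's covariance conditioning for transient sonic data is not reproduced.

```python
# MLM/Capon slowness scan over a synthetic two-arrival covariance matrix.
import numpy as np

n_rx, d, f = 8, 0.1524, 8000.0           # receivers, spacing (m), frequency (Hz)
z = np.arange(n_rx) * d                  # receiver offsets (m)

def steering(slowness):
    """Plane-wave steering vector for a given slowness (s/m)."""
    return np.exp(-2j * np.pi * f * slowness * z)

# Two interfering arrivals at similar slownesses plus a small noise floor.
R = (np.outer(steering(190e-6), steering(190e-6).conj())
     + 0.5 * np.outer(steering(230e-6), steering(230e-6).conj())
     + 0.01 * np.eye(n_rx))
Rinv = np.linalg.inv(R)

slo = np.linspace(100e-6, 400e-6, 300)   # slowness scan (s/m)
p_mlm = np.array([1.0 / np.real(steering(s).conj() @ Rinv @ steering(s))
                  for s in slo])
print("peak slowness (us/ft):", slo[p_mlm.argmax()] * 1e6 * 0.3048)  # ~58
```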


2014 ◽  
Vol 9 (3) ◽  
pp. 331-343 ◽  
Author(s):  
N. Ahmad ◽  
T. Mohamed ◽  
F. H. Ali ◽  
B. Yusuf

Laboratory data on local scour depth at wide piers are presented. Clear-water scour tests were performed for various pier widths (0.06, 0.076, 0.102, 0.14 and 0.165 m), two pier shapes (circular and rectangular) and two uniform cohesionless bed sediments (d50 = 0.23 mm and d50 = 0.80 mm). The new data are used to demonstrate the effects of pier width, pier shape and sediment size on scour depth. The influence of equilibrium time (te) on the scouring process is also discussed. Equilibrium scour depths were found to decrease with increasing values of b/d50. The temporal development of local scour depth toward equilibrium is demonstrated with the new laboratory data for a flow intensity of V/Vc = 0.95. The results on the scour mechanism show a significant relationship between the normalized volumes of scoured and deposited material and the pier width, b. The experimental data obtained in this study, together with data available in the literature for wide piers, are used to evaluate the predictions of existing methods.


2013 ◽  
Vol 67 (5) ◽  
pp. 1121-1128 ◽  
Author(s):  
Mohammad Najafzadeh ◽  
Gholam-Abbas Barani ◽  
Masoud Reza Hessami Kermani

In the present study, the Group Method of Data Handling (GMDH) network has been utilized to predict abutment scour depth for both clear-water and live-bed conditions. The GMDH network was developed using a back-propagation (BP) algorithm. Input parameters considered to be effective variables for abutment scour depth included sediment size, bridge abutment geometry, and the properties of the approaching flow. Training and testing of the GMDH network were carried out using dimensionless parameters collected from the literature. The testing results were compared with those obtained using the Support Vector Machines (SVM) model and traditional equations. The GMDH network predicted abutment scour depth with lower error (root mean square error, RMSE = 0.29; mean absolute percentage error, MAPE = 0.99) and higher accuracy (R = 0.98) than the SVM model and the traditional equations.
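
A toy sketch of the GMDH building block: each candidate neuron fits a quadratic polynomial in a pair of inputs by least squares, and the best-performing neurons are kept for the next layer. The data and variable names are synthetic placeholders, not the dimensionless parameters used in the paper.

```python
# One GMDH layer: exhaustive pairwise quadratic-polynomial fits.
import numpy as np

def quad_features(xi, xj):
    """Design matrix for y = a0 + a1*xi + a2*xj + a3*xi*xj + a4*xi^2 + a5*xj^2."""
    return np.column_stack([np.ones_like(xi), xi, xj, xi * xj, xi**2, xj**2])

rng = np.random.default_rng(2)
X = rng.uniform(0, 1, (200, 4))                  # 4 candidate inputs
y = 0.8 * X[:, 0] * X[:, 1] + 0.2 * X[:, 2]**2   # synthetic target

best = None
for i in range(X.shape[1]):
    for j in range(i + 1, X.shape[1]):
        A = quad_features(X[:, i], X[:, j])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        rmse = np.sqrt(np.mean((A @ coef - y) ** 2))
        if best is None or rmse < best[0]:
            best = (rmse, i, j)
print("best input pair:", best[1:], "RMSE:", round(best[0], 4))
```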


Hereditas ◽  
2019 ◽  
Vol 156 (1) ◽  
Author(s):  
T. H. Noel Ellis ◽  
Julie M. I. Hofer ◽  
Martin T. Swain ◽  
Peter J. van Dijk

Abstract A controversy arose over Mendel's pea crossing experiments after the statistician R.A. Fisher proposed how these may have been performed and criticised Mendel's interpretation of his data. Here we re-examine Mendel's experiments and investigate Fisher's statistical criticisms of bias. We describe pea varieties available in Mendel's time and show that these could readily provide all the material Mendel needed for his experiments; the characters he chose to follow were clearly described in catalogues at the time. The combination of character states available in these varieties, together with Eichling's report of crosses Mendel performed, suggests that two of his F3 progeny test experiments may have involved the same F2 population, and therefore that these data should not be treated as independent in statistical analyses of Mendel's data. A comprehensive re-examination of Mendel's segregation ratios does not support previous suggestions that they differ remarkably from expectation. The χ² values for his segregation ratios sum to a value close to the expectation, and there is no deficiency of extreme segregation ratios. Overall the χ values for Mendel's segregation ratios deviate slightly from the standard normal distribution; this is probably because of the variance associated with phenotypic rather than genotypic ratios, and because Mendel excluded some data sets with small numbers of progeny, where he noted the ratios "deviate not insignificantly" from expectation.
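
A worked example of the χ² check applied to a single segregation ratio, using Mendel's well-known round vs. wrinkled F2 seed counts (5474:1850) against the 3:1 expectation; the paper's analysis sums such χ² values across all of Mendel's experiments.

```python
# Chi-squared goodness-of-fit test of a 3:1 Mendelian segregation ratio.
from scipy.stats import chisquare

observed = [5474, 1850]                  # round, wrinkled F2 seeds
total = sum(observed)
expected = [0.75 * total, 0.25 * total]  # 3:1 expectation -> 5493, 1831

stat, p = chisquare(observed, f_exp=expected)
print(f"chi-squared = {stat:.3f}, p = {p:.3f}")   # ~0.263, p ~ 0.61
```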


1987 ◽  
Vol 31 (7) ◽  
pp. 811-814
Author(s):  
Valerie J. Gawron ◽  
David J. Travale ◽  
Colin Drury ◽  
Sara Czaja

A major problem facing system designers today is predicting human performance in: 1) systems that have not yet been built, 2) situations that have not yet been experienced, and 3) situations for which there are only anecdotal reports. To address this problem, the Human Performance Expert System (Human) was designed. The system contains a large database of equations derived from human performance research reported in the open literature. Human accesses these data to predict task performance times, task completion probabilities, and error rates. A problem was encountered when multiple independent data sets were relevant to one task. For example, suppose a designer is interested in the effects of luminance and font size on the number of reading errors. Two data sets exist in the literature: one examining the effects of luminance, the other, font size. The data in the two sets were collected at different locations, with different subjects, and at different times in history. How can the two data sets be combined to address the designer's problem? Four combining algorithms were developed and then tested in two steps. In step one, two reaction-time experiments were conducted: one to evaluate the effect of the number of alternatives on reaction time; the second, the effects of signals per minute and the number of displays being monitored. The four algorithms were used on the data from these two experiments to predict reaction time in the situation where all three independent variables are manipulated simultaneously. In step two of the test procedure, a third experiment was conducted. Subjects who had not participated in either Experiment One or Two performed a reaction-time task under the combined effects of all three independent variables. The predictions made in step one were compared to the actual empirical data collected in step two. The results of these comparisons are presented.
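
The abstract does not specify the four combining algorithms, so the sketch below shows only one plausible rule of this kind, purely as an illustration: treat each study's single-factor effect as a multiplicative factor on a shared baseline reaction time, assuming the factors act independently. All numbers are hypothetical placeholders.

```python
# One hypothetical combining rule: multiplicative, baseline-normalized effects.
baseline_rt = 0.30   # s, hypothetical reaction time under reference conditions

# Hypothetical single-factor results, as ratios to each study's own baseline.
effect_alternatives = 0.45 / 0.30   # e.g. 4 response alternatives vs. 1 (study 1)
effect_load = 0.36 / 0.30           # e.g. 2 displays monitored vs. 1 (study 2)

# Combined prediction under the independence assumption.
predicted_rt = baseline_rt * effect_alternatives * effect_load
print(f"predicted RT under combined conditions: {predicted_rt:.3f} s")
```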


2021 ◽  
Author(s):  
Kun Wang ◽  
Christopher Johnson ◽  
Kane Bennett ◽  
Paul Johnson

Abstract Data-driven machine learning for predicting instantaneous and future fault slip in laboratory experiments has recently progressed markedly thanks to large training data sets. In Earth, however, earthquake interevent times range from tens to hundreds of years, and geophysical data typically exist for only a portion of an earthquake cycle. Sparse data present a serious challenge to training machine learning models. Here we describe a transfer learning approach that uses numerical simulations to train a convolutional encoder-decoder to predict fault-slip behavior in laboratory experiments. The model learns a mapping between acoustic emission histories and fault slip from numerical simulations, and generalizes to produce accurate results on laboratory data. Notably, slip predictions improve markedly when the simulation-trained model's latent space is additionally trained on a portion of a single laboratory earthquake cycle. These transfer learning results demonstrate the potential of models trained on numerical simulations and fine-tuned with small geophysical data sets for application to faults in Earth.
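
A minimal PyTorch sketch of the transfer-learning pattern described above: pretrain a 1-D convolutional encoder-decoder on plentiful simulated acoustic-emission sequences, then freeze the encoder and fine-tune the decoder on scarce laboratory data. Shapes, layer sizes and training details are illustrative assumptions, not the authors' architecture.

```python
# Simulation-pretrained encoder-decoder, fine-tuned on small laboratory data.
import torch
import torch.nn as nn

class EncoderDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(            # acoustic-emission history in
            nn.Conv1d(1, 16, 5, padding=2), nn.ReLU(),
            nn.Conv1d(16, 32, 5, padding=2), nn.ReLU(),
        )
        self.decoder = nn.Sequential(            # fault-slip estimate out
            nn.Conv1d(32, 16, 5, padding=2), nn.ReLU(),
            nn.Conv1d(16, 1, 5, padding=2),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = EncoderDecoder()

# Stage 1: pretrain on simulated sequences (random placeholders here).
sim_x, sim_y = torch.randn(64, 1, 256), torch.randn(64, 1, 256)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(10):
    opt.zero_grad()
    nn.functional.mse_loss(model(sim_x), sim_y).backward()
    opt.step()

# Stage 2: freeze the encoder; fine-tune the decoder on a small laboratory
# set (e.g. a portion of a single earthquake cycle).
for p in model.encoder.parameters():
    p.requires_grad = False
lab_x, lab_y = torch.randn(8, 1, 256), torch.randn(8, 1, 256)
opt = torch.optim.Adam(model.decoder.parameters(), lr=1e-4)
for _ in range(10):
    opt.zero_grad()
    nn.functional.mse_loss(model(lab_x), lab_y).backward()
    opt.step()
```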

