Alum Production from Some Nigerian Kaolinite Deposits

Author(s):  
L.C. Edomwonyi-Otu ◽  
B.O. Aderemi ◽  
O. Edomwonyi-Otu ◽  
A. Simo ◽  
M. Maaza

The development of sustainable processing technologies for the vast mineral resources available in Nigeria, and their varied applications, is a major pursuit of the Federal Ministry of Science and Technology. In this work, alum was produced from three different kaolin deposits in Nigeria, namely Kankara Brown, Bauchi and Kankara White, by acid dealumination of the metakaolin obtained by calcination of the beneficiated kaolinites, and the yields were measured to ascertain the repeatability of the process. The reproducibility studies carried out on samples from each deposit showed mean yields of 80%, 92% and 87% with standard deviations of 2.50%, 1.063% and 1.296% for Kankara Brown, Bauchi and Kankara White respectively. The values from the three deposits fall within 3 standard deviations of the mean, in accordance with the 68-95-99.7 (three-sigma) rule. The alum quality also compares well with commercial alums available in the market. BET analysis of the alumina obtained by calcination of the alum (Kankara White) gave a surface area of 192.2441 m²/g, comparable to commercial alumina. These results establish the strong potential for commercial production of alum, and of alumina, using kaolinite clay from these deposits as starting materials.

1974 ◽  
Vol 18 (2) ◽  
pp. 116-116
Author(s):  
Helmut T. Zwahlen

Twelve subjects (20–37 years old) were tested in the laboratory, and eleven of these were also tested in a car in the field, first under a no-alcohol condition and then under an alcohol condition (approximately 0.10% BAC). In the laboratory the subjects' simple and choice reaction times for two uncertainty modes were measured and their information processing rates (3 bits uncertainty) were determined. In the field the subjects' skill in driving through a gap with 20 inches total clearance at 20 MPH was measured, as well as their static visual perceptual capabilities and risk acceptance decisions for a 46-feet viewing distance, using psychophysical experimental methods. Based upon the driving skill measure (standard deviation of centerline deviations in the gap), the mean of the psychometric visual gap perception function, and the mean of the psychometric gap risk acceptance function, the “Safety Distance” and the “Driver Safety Index” (DSI) were obtained. Based upon a statistical analysis of the data we may conclude, first, that the effects of alcohol (approximately 0.10% BAC) vary widely from one subject to another (from slightly improved performance to highly impaired performance) and that the changes in the group averages of the means and standard deviations of the psychometric visual perception and risk acceptance functions, the driving skill distributions, the “Safety Distances,” and the DSI's for the subjects (although all changes in the group averages are in the expected direction) are statistically not significant (α = .05). Second, the group average of the means of the choice reaction times for the subjects increased by 5% under the alcohol condition (statistically significant, α = .05), but, more importantly, the group average of the standard deviations of the choice reaction times for the subjects increased by 23% (statistically significant, α = .05). The group average of the information processing rates for the subjects decreased by 3% (statistically not significant, α = .05) under the alcohol condition. A system model in which the system demands on the driver are represented in terms of choice reaction times is used to demonstrate that the increase in performance variability (expressed by the standard deviation of choice reaction times) under the influence of alcohol provides a much better explanation for the higher accident involvement than the historically most frequently cited, rather small, increase in average performance (expressed by the mean of choice reaction times).
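To make the closing argument concrete, here is a minimal sketch (not the paper's system model) of why a 23% increase in the standard deviation of choice reaction times can matter far more than a 5% increase in their mean: if reaction times are roughly normally distributed, the probability of exceeding a fixed system demand grows much faster with the standard deviation than with the mean. The baseline mean, standard deviation, and demand threshold are illustrative assumptions, not values from the study.

```python
# Minimal sketch: tail probability P(RT > demand) for normally distributed
# choice reaction times. Baseline values and the demand threshold are
# assumed for illustration; only the +5% / +23% changes come from the text.
from math import erf, sqrt

def p_exceed(threshold, mean, sd):
    """P(RT > threshold) for RT ~ Normal(mean, sd)."""
    z = (threshold - mean) / sd
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))

base_mean, base_sd = 0.60, 0.10  # seconds (assumed baseline)
demand = 0.90                    # seconds allowed by the system (assumed)

print(f"baseline:      {p_exceed(demand, base_mean, base_sd):.4%}")
print(f"+5% mean only: {p_exceed(demand, base_mean * 1.05, base_sd):.4%}")
print(f"+23% SD only:  {p_exceed(demand, base_mean, base_sd * 1.23):.4%}")
```

Under these assumed numbers, the 23% rise in variability raises the exceedance probability substantially more than the 5% rise in the mean, mirroring the paper's conclusion.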


2007 ◽  
Vol 100 (1) ◽  
pp. 208-210 ◽  
Author(s):  
G. Steven Rhiel

This study presents a proof that the coefficient of variation (CVhigh-low) calculated from the highest and lowest values in a set of data is applicable to specific skewed distributions with varying means and standard deviations. Earlier, Rhiel provided values for dn, the standardized mean range, and an, an adjustment for bias in the range estimator of μ. These values are used in estimating the coefficient of variation from the range for skewed distributions. The dn and an values were originally specified for specific skewed distributions with a fixed mean and standard deviation. The present proof shows that the dn and an values remain applicable to those skewed distributions when the mean and standard deviation take on differing values. This gives the researcher confidence in using this statistic for skewed distributions regardless of the mean and standard deviation.
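The shape of such a range-based computation is sketched below; the tabulated dn and an constants, and the exact placement of the bias adjustment, are defined in Rhiel's earlier paper, so the constants and the bias-adjusted midrange used here are illustrative assumptions only.

```python
# Hypothetical sketch of a high-low CV estimate. d_n and a_n are
# distribution- and sample-size-specific constants tabulated by Rhiel;
# the bias-adjusted midrange below is an assumed form for illustration,
# not the published estimator.
def cv_high_low(data, d_n, a_n):
    lo, hi = min(data), max(data)
    sigma_hat = (hi - lo) / d_n     # range estimate of the standard deviation
    mu_hat = a_n * (hi + lo) / 2.0  # bias-adjusted midrange (assumed form)
    return sigma_hat / mu_hat
```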


Author(s):  
Jordan Anaya

GRIMMER (Granularity-Related Inconsistency of Means Mapped to Error Repeats) builds upon the GRIM test and allows for testing whether reported measures of variability are mathematically possible. GRIMMER relies upon the statistical phenomenon that variances display a simple repetitive pattern when the data is discrete, i.e. granular. This observation allows for the generation of an algorithm that can quickly identify whether a reported statistic of any size or precision is consistent with the stated sample size and granularity. My implementation of the test is available at PrePubMed (http://www.prepubmed.org/grimmer) and currently allows for testing variances, standard deviations, and standard errors for integer data. It is possible to extend the test to other measures of variability such as deviation from the mean, or apply the test to non-integer data such as data reported to halves or tenths. The ability of the test to identify inconsistent statistics relies upon four factors: (1) the sample size; (2) the granularity of the data; (3) the precision (number of decimals) of the reported statistic; and (4) the size of the standard deviation or standard error (but not the variance). The test is most powerful when the sample size is small, the granularity is large, the statistic is reported to a large number of decimal places, and the standard deviation or standard error is small (variance is immune to size considerations). This test has important implications for any field that routinely reports statistics for granular data to at least two decimal places because it can help identify errors in publications, and should be used by journals during their initial screen of new submissions. The errors detected can be the result of anything from something as innocent as a typo or rounding error to large statistical mistakes or unfortunately even fraud. In this report I describe the mathematical foundations of the GRIMMER test and the algorithm I use to implement it.
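The core consistency check can be sketched as follows (a simplification, not the PrePubMed implementation): for integer data, the sum S = Σx and the sum of squares Q = Σx² are both integers, and s² = (Q − S²/n)/(n − 1), so one can enumerate the integer (S, Q) pairs compatible with the rounded mean and standard deviation. The parity filter uses the fact that x² ≡ x (mod 2) for any integer x, so Q and S must have the same parity; the full algorithm applies further feasibility checks.

```python
# Minimal sketch of a GRIMMER-style check for integer data: is any integer
# dataset of size n consistent with the reported (rounded) mean and SD?
import math

def grimmer_consistent(mean_rep, sd_rep, n, decimals=2):
    half = 0.5 * 10 ** (-decimals)
    # GRIM step: integer sums S whose mean S/n rounds to mean_rep
    s_lo = math.ceil((mean_rep - half) * n)
    s_hi = math.floor((mean_rep + half) * n)
    # variance interval implied by the rounded SD
    var_lo = max(sd_rep - half, 0.0) ** 2
    var_hi = (sd_rep + half) ** 2
    for S in range(s_lo, s_hi + 1):
        # s^2 = (Q - S^2/n)/(n - 1)  =>  Q = (n - 1) * s^2 + S^2/n
        q_lo = math.ceil((n - 1) * var_lo + S * S / n)
        q_hi = math.floor((n - 1) * var_hi + S * S / n)
        for Q in range(q_lo, q_hi + 1):
            if (Q - S) % 2 == 0:  # x^2 ≡ x (mod 2) forces matching parity
                return True
    return False
```

For example, with n = 10 and a reported mean of 3.50, a reported standard deviation of 1.18 is attainable (e.g., by the data 2, 2, 2, 3, 4, 4, 4, 4, 5, 5), while 1.20 is flagged as impossible: no integer sum of squares falls in the interval implied by the rounding bounds.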


1959 ◽  
Vol 5 (2) ◽  
pp. 119-126 ◽  
Author(s):  
Walton H Marsh ◽  
Benjamin Fingerhut ◽  
Elaine Kirsch

Abstract The alkaline phosphatase method of Kind and King was adapted to an automated recording colorimeter. The precision of the automated method (1 standard deviation as per cent of the mean value) was ±1.7 per cent and for the manual method ±3.6 per cent. The color produced was proportional to the enzyme concentration by both methods, and recoveries of added phenol were satisfactory. In more than 150 serum specimens surveyed for enzyme activity, over 95 per cent of the results (2 standard deviations) of the 2 methods in the range 3.4-129 agree to within ±2.8 King-Armstrong units/100 ml.


Blood ◽  
2008 ◽  
Vol 112 (11) ◽  
pp. 3969-3969
Author(s):  
Wasil Jastaniah ◽  
Mohammed Aseeri

Abstract Standardizing body surface area (BSA) determination is essential for avoiding variation in chemotherapy dosage calculations. In this study we compared variation in BSA calculated from weight and height by the Mosteller formula with BSA determined from weight alone using a recently adapted table at the Princess Norah Oncology Center (PNOC). Methods: Cross-sectional study of pediatric oncology patients presenting to the pediatric oncology clinic at PNOC over a one-week period. Results: One hundred consecutive pediatric oncology patients presented to the clinic. The mean BSA calculated by the Mosteller formula was 0.83 m² (standard deviation = 0.24) and the mean BSA determined by the table (based on weight alone) was 0.82 m² (standard deviation = 0.25). The mean variation in dosing between the two methods was 1.64% (standard deviation = 3.4). Only 13 out of 100 patients (13%) had equal dosing using both methods, and 21 out of 100 patients (21%) had a dosing variation greater than 5%. When comparing both methods using a paired t-test, the difference was statistically significant (t(99) = 3.99, P < 0.001). Conclusion: Significant differences in BSA-based chemotherapy dosing exist in our center. The Mosteller method should remain the standard until prospective studies determine the significance of this dosing variability for toxicity and survival outcomes.
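For reference, the Mosteller formula computes body surface area from height and weight as BSA (m²) = √(height [cm] × weight [kg] / 3600). A minimal sketch follows; the example values are illustrative, not patient data from the study.

```python
# Body surface area by the Mosteller formula: BSA = sqrt(h_cm * w_kg / 3600).
from math import sqrt

def bsa_mosteller(height_cm, weight_kg):
    """Body surface area in m^2 (Mosteller, 1987)."""
    return sqrt(height_cm * weight_kg / 3600.0)

# Illustrative pediatric values: a 110 cm, 18 kg child -> about 0.74 m^2.
print(f"{bsa_mosteller(110, 18):.2f} m^2")
```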


1963 ◽  
Vol 204 (1) ◽  
pp. 51-59
Author(s):  
Archie R. Tunturi

The standard deviations of the spontaneous electrical activity (SEA) of the suprasylvian gyrus (SSG) ranged between 57 and 131 µV, and for the middle ectosylvian (MES) gyrus between 88 and 175 µV. Correlation coefficients, r, served to distinguish three regions of the SSG. The rostral region showed low correlation with the middle, high correlation with the caudal, and low to negative correlation with the MES. The middle showed moderate correlation with the MES, and the caudal showed zero to negative correlation with the MES. Within the SSG, correlation was low, and within the MES high, for spacings of 2 mm. Cocaine applied to both areas sharpened the boundaries at the sulci, reduced standard deviations, did not affect the correlation between the caudal SSG and the MES area, and increased r between all locations in the MES but not in the SSG. Cocaine on the SSG had no effect on the mean and standard deviation of the evoked potential in the MES, but decreased r of the SEA significantly.


2017 ◽  
Vol 74 (4) ◽  
pp. 989-1010 ◽  
Author(s):  
Björn Maronga ◽  
Joachim Reuder

Abstract Surface-layer-resolving large-eddy simulations (LESs) of free-convective to near-neutral boundary layers are used to study Monin–Obukhov similarity theory (MOST) functions. The LES dataset, previously used for the analysis of MOST relationships for structure parameters, is extended to the mean vertical gradients and standard deviations of potential temperature, specific humidity, and wind. Local-free-convection (LFC) similarity is also studied. The LES data suggest that the MOST functions for mean gradients are universal and unique. The data for the mean gradient of the horizontal wind display significant scatter, while the gradients of temperature and humidity vary considerably less. The LES results suggest that this scatter is mostly related to a transition from MOST to LFC scaling when approaching free-convective conditions and that it is associated with a change of the slope of the similarity functions toward the value expected from LFC scaling. Overall, the data show slightly but consistently steeper slopes of the similarity functions than suggested in the literature. The MOST functions for standard deviations appear to be unique and universal when the entrainment from the free atmosphere into the boundary layer is sufficiently small. If entrainment becomes significant, however, we find that the standard deviation of humidity no longer follows MOST. Under free-convective conditions, the similarity functions should reduce to universal constants (LFC scaling). This is supported by the LES data, which show little scatter but display a systematic height dependence of these constants. As for MOST, the LFC similarity constant for the standard deviation of specific humidity becomes nonuniversal when the entrainment of dry air reaches significant levels.


1979 ◽  
Vol 49 (1) ◽  
pp. 297-298
Author(s):  
Nicholas John Carriero

In two previous studies, the standard score transform used was based on a standard deviation derived from the mean of the standard deviations of the raw data in the six classifications employed in the studies, rather than on one based on the entire set of raw scores, as is the more common practice. The data were reanalyzed using the latter basis, and the major findings were confirmed. Some of the minor findings, however, changed; these changes are pointed out and discussed where they occurred.
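A minimal sketch of the two standardization bases being contrasted, with illustrative data (the numbers are assumptions; the actual studies used six classifications):

```python
# Two bases for the z-score transform: an SD averaged across classification
# groups (the earlier studies' basis) vs. the SD of the pooled raw scores
# (the conventional basis). Data here are illustrative only.
import statistics

groups = [[12, 15, 14], [22, 25, 21], [8, 11, 9]]  # assumed classifications
pooled = [v for g in groups for v in g]

sd_group_mean = statistics.mean(statistics.stdev(g) for g in groups)
sd_overall = statistics.stdev(pooled)

mean_pooled = statistics.mean(pooled)
z_old = [(v - mean_pooled) / sd_group_mean for v in pooled]  # earlier basis
z_new = [(v - mean_pooled) / sd_overall for v in pooled]     # common practice
```

Because the overall SD also absorbs between-group spread, it is typically larger than the mean of the within-group SDs, so scores look less extreme under the conventional basis and borderline findings can flip, consistent with minor findings changing on reanalysis.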


1990 ◽  
Vol 73 (5) ◽  
pp. 801-805 ◽  
Author(s):  
Frank E Mcdonough ◽  
Fred H Steinke ◽  
Ghulam Sarwar ◽  
Bjorn O Eggum ◽  
Ricardo Bressani ◽  
...  

Abstract Eight laboratories participated in a collaborative study to estimate the precision of a standardized rat assay for determining true protein digestibility in selected animal, fish, and cereal products. Each of 7 test protein sources (casein, tuna fish, macaroni/cheese, pea protein concentrate, rolled oats, pinto beans, and nonfat dried milk) was fed as the sole source of protein at a 10% protein level in mixed diets. Each diet was fed to 2 replicate groups of 4 rats each for a 4-day acclimation period and a 5-day balance period. Mean digestibilities ranged from 98.6% for casein to 72.6% for pinto beans. Repeatability standard deviations ranged from 0.5 to 2.0%; the mean relative standard deviation for repeatability was 0.9% (range 0.5-2.8%). Reproducibility standard deviations ranged from 1.2 to 3.2%, and the mean relative standard deviation for reproducibility was 2.4% (range 1.3-4.4%). The method has been approved interim official first action for determining true protein digestibility in foods and ingredients.
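As a rough sketch of how such precision measures are derived from a collaborative study (the layout and numbers below are illustrative, not the study's data), repeatability pools the within-laboratory variance, while reproducibility adds a between-laboratory component, following the usual one-way ANOVA decomposition:

```python
# Illustrative repeatability (s_r) and reproducibility (s_R) computation
# for a balanced collaborative study: each lab reports n_rep replicates.
import statistics

labs = {                       # lab -> replicate digestibility results (%)
    "lab1": [98.2, 98.9],
    "lab2": [97.5, 98.1],
    "lab3": [99.0, 98.4],
}
n_rep = 2

# repeatability: pooled within-lab variance
s_r = statistics.mean(statistics.variance(r) for r in labs.values()) ** 0.5

# reproducibility: within-lab plus between-lab variance components
lab_means = [statistics.mean(r) for r in labs.values()]
s_L_sq = max(statistics.variance(lab_means) - s_r ** 2 / n_rep, 0.0)
s_R = (s_L_sq + s_r ** 2) ** 0.5

grand_mean = statistics.mean(lab_means)
print(f"RSD_r = {100 * s_r / grand_mean:.2f}%   RSD_R = {100 * s_R / grand_mean:.2f}%")
```

Relative standard deviations (RSDs), as reported above, are these SDs expressed as a percentage of the mean digestibility.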


2018 ◽  
Vol 4 ◽  
pp. 38
Author(s):  
Sébastien Lahaye

Nuclear data evaluation files in the ENDF6 format provide mean values and associated uncertainties for physical quantities relevant to nuclear physics. Uncertainties are denoted Δ in the format description and are commonly understood as standard deviations; they can be supplemented by covariance matrices. The evaluations give no indication of the probability density function to be used when sampling. Three constraints must be observed: the mean value, the standard deviation, and the positivity of the physical quantity. The MENDEL code generally uses positively truncated Gaussian distributions for small relative standard deviations and a lognormal distribution for larger uncertainty levels (>50%). Indeed, truncating a Gaussian modifies its mean and standard deviation. In this paper, we make explicit the error in the mean value and the standard deviation introduced by different types of distributions. We also employ the principle of maximum entropy as a criterion for choosing among the truncated Gaussian, the fitted Gaussian, and the lognormal distribution. Remarkably, the difference in entropy between the candidate distributions is a function of the relative standard deviation only. The results obtained therefore provide general guidance for the choice among these distributions.
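A minimal sketch of the truncation effect described above (illustrative, not MENDEL's sampler): truncating a Gaussian at zero shifts the realized mean and standard deviation away from the evaluated values, while a lognormal matched to the same two moments preserves them. The 60% relative standard deviation is an assumed value in the >50% regime mentioned in the text.

```python
# Compare a positively truncated Gaussian with a moment-matched lognormal
# for a target mean of 1.0 and a 60% relative standard deviation (assumed).
import math, random, statistics

random.seed(1)
mu, rel_sd = 1.0, 0.60
sigma = mu * rel_sd
N = 200_000

# positively truncated Gaussian: reject non-positive draws
trunc = []
while len(trunc) < N:
    x = random.gauss(mu, sigma)
    if x > 0.0:
        trunc.append(x)

# lognormal matched to the target mean and SD:
# shape^2 = ln(1 + CV^2), scale = ln(mu) - shape^2 / 2
s2 = math.log(1.0 + rel_sd ** 2)
m = math.log(mu) - 0.5 * s2
logn = [random.lognormvariate(m, math.sqrt(s2)) for _ in range(N)]

for name, xs in (("truncated Gaussian", trunc), ("lognormal", logn)):
    print(f"{name:18s} mean={statistics.fmean(xs):.3f} sd={statistics.stdev(xs):.3f}")
```

With these assumed numbers, the truncated Gaussian's realized mean comes out near 1.06 and its standard deviation near 0.54 instead of the targets of 1.0 and 0.6, while the lognormal reproduces both: the kind of bias the paper quantifies.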

