Model for Predicting and Analyzing the %Fe Removed as a Deleterious Element from Al-Si Alloys During Fe Removal Processing with Mn

2016
Vol 2 (6)
pp. 125
Author(s):
F. A. Anene
N. E. Nwankwo

A model equation for predicting and analysing the %Fe removed from a eutectic Al-Si alloy during Fe removal processing with Mn under controlled conditions has been derived and validated. The derived model, %Fe removed = 1.30700 + 0.095882(%Mn) − 0.068512(%Mn)² + 9.77000×10⁻³(%Mn)³, predicts the %Fe removed from the Al-Si alloy as a cubic function of the Mn content, with a standard deviation of 0.010. The analysed results show that the %Fe removal equation with Mn addition has been validly derived. The model R-squared of 0.9814 is found to be in agreement with the adjusted R-squared of 0.9628, the difference being less than 0.2. The derived model equation gives a reasonable forecast of the %Fe removed, very close to the values obtained from the experiments. The close agreement between the model and experimental values is attributed to the low standard error of the model coefficients. The processing parameters are process temperature, alloy content and holding time.
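As a quick illustration of how such a fitted polynomial is used, here is a minimal sketch (not from the paper) that evaluates the reconstructed cubic model at a few Mn contents; the function name and the example Mn values are illustrative assumptions.

```python
# Sketch: evaluating the cubic %Fe-removal model quoted above.
# The coefficients are those reported in the abstract; the function
# name and the example Mn contents are illustrative assumptions.

def fe_removed_pct(mn_pct: float) -> float:
    """Predicted %Fe removed as a cubic function of Mn content (%)."""
    return (1.30700
            + 0.095882 * mn_pct
            - 0.068512 * mn_pct ** 2
            + 9.77000e-03 * mn_pct ** 3)

if __name__ == "__main__":
    for mn in (0.5, 1.0, 1.5, 2.0):  # hypothetical Mn additions
        print(f"%Mn = {mn:.1f} -> predicted %Fe removed = {fe_removed_pct(mn):.4f}")
```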

Radiocarbon
1966
Vol 8
pp. 340-347
Author(s):
W. J. Callow
M. J. Baker
Geraldine I. Hassall

The following list comprises measurements made since those reported in NPL III and is complete to the end of November 1965. Ages are relative to A.D. 1950 and are calculated using a half-life of 5568 yr. The measurements, corrected for fractionation (quoted δC13 values are relative to the P.D.B. standard), are referred to 0.950 times the activity of the NBS oxalic acid as the contemporary reference standard. The quoted uncertainty is one standard deviation derived from a proper combination of the parameter variances as described in detail in NPL III. These variances are those of the standard and background measurements over a rolling twenty-week period, of the sample δC14 and δC13 measurements, and of the de Vries effect (assumed to add an additional uncertainty equivalent to a standard deviation of 80 yr). Any uncertainty in the half-life has been excluded so that relative C14 ages may be correctly compared. Absolute age assessments, however, should be made using the accepted best value for the half-life, with the appropriate uncertainty then included. If the net sample count rate is less than 4 times the standard error of the difference between the sample and background count rates, a lower limit to the age is reported, corresponding to a net sample count rate of 4 times the standard error of this difference.
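For readers who want to see the arithmetic behind such a list, below is a minimal sketch (not NPL's actual procedure) of the conventional age calculation with the 5568 yr half-life and the 4-standard-error lower-limit rule described above; the variable names and example count rates are illustrative assumptions.

```python
# Sketch of the conventional radiocarbon-age formula and the 4-sigma
# lower-limit rule described in the abstract. All names and example
# numbers are illustrative assumptions, not NPL data.
import math

HALF_LIFE = 5568.0                      # Libby half-life, yr
MEAN_LIFE = HALF_LIFE / math.log(2.0)   # ~8033 yr

def radiocarbon_age(net_sample_rate: float, reference_rate: float) -> float:
    """Age before A.D. 1950 from net count rates (fractionation-corrected).
    reference_rate is 0.950 x the NBS oxalic acid activity."""
    return -MEAN_LIFE * math.log(net_sample_rate / reference_rate)

def report_age(sample_rate, background_rate, se_diff, reference_rate):
    """Apply the 4-sigma rule: if the net rate is below 4 standard errors
    of the sample-minus-background difference, quote only a lower limit."""
    net = sample_rate - background_rate
    if net < 4.0 * se_diff:
        return f"> {radiocarbon_age(4.0 * se_diff, reference_rate):.0f} yr (lower limit)"
    return f"{radiocarbon_age(net, reference_rate):.0f} yr"

print(report_age(sample_rate=10.2, background_rate=9.8,
                 se_diff=0.15, reference_rate=12.0))
```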


Radiocarbon
1964
Vol 6
pp. 25-30
Author(s):
W. J. Callow
M. J. Baker
Daphne H. Pritchard

The following list comprises measurements made since those reported in NPL I and is complete to the end of November 1963. Ages are relative to A.D. 1950 and are calculated using a half-life of 5568 yr. The measurements have been corrected for fractionation and referred to 0.950 times the activity of the NBS oxalic acid as the contemporary reference standard. The quoted uncertainty is one standard deviation derived from a proper combination of the parameter variances, viz. those of the standard and background measurements over a rolling twenty-week period, of the sample measurements from at least three independent fillings, of the δC13 measurements, and of the de Vries effect (assumed to add an additional uncertainty equivalent to a standard deviation of 80 yr). Any uncertainty in the half-life has been excluded so that relative C14 ages may be correctly compared. Absolute age assessments, however, should be made using the accepted best value for the half-life, with the appropriate uncertainty then included. If the net sample activity is less than 4 times the standard error of the difference between the sample and background activities, a lower limit to the age is reported, equivalent to a sample activity of 4 times the standard error of this difference. The description of each sample is based on information provided by the person submitting the sample to the Laboratory. The work reported forms part of the research programme of the Laboratory and is published by permission of the Director.


2000
Vol 6 (3)
pp. 364-364
Author(s):
NANCY R. TEMKIN
ROBERT K. HEATON
IGOR GRANT
SUREYYA S. DIKMEN

Hinton-Bayre (2000) raises a point that may occur to many readers who are familiar with the Reliable Change Index (RCI). In our previous paper comparing four models for detecting significant change in neuropsychological performance (Temkin et al., 1999), we used a formula for calculating Sdiff, the measure of variability for the test–retest difference, that differs from the one Hinton-Bayre has seen employed in other studies of the RCI. In fact, there are two ways of calculating Sdiff: a direct method and an approximate method. As stated by Jacobson and Truax (1991, p. 14), the direct method is to compute "the standard error of the difference between the two test scores," or equivalently √(s₁² + s₂² − 2s₁s₂rₓₓ′), where sᵢ is the standard deviation at time i and rₓₓ′ is the test–retest correlation or reliability coefficient. Jacobson and Truax also provide a formula for approximating Sdiff when one does not have access to retest data on the population of interest but does have a test–retest reliability coefficient and an estimate of the cross-sectional standard deviation, i.e., the standard deviation at a single point in time. This approximation assumes that the standard deviations at Time 1 and Time 2 are equal, which may be close to true in many cases. Since we had the longitudinal data to directly calculate the standard error of the difference between scores at Time 1 and Time 2, we used the direct method. Which method is preferable? When the needed data are available, it is the one we used.
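The distinction is easy to make concrete. Below is a minimal sketch (with made-up numbers, not Temkin et al.'s data) contrasting the direct and approximate Sdiff formulas and applying the resulting RCI.

```python
# Sketch contrasting the two S_diff formulas discussed above; the
# example numbers are made up for illustration.
import math

def s_diff_direct(s1: float, s2: float, r: float) -> float:
    """Direct method: SDs at both occasions plus the test-retest correlation."""
    return math.sqrt(s1**2 + s2**2 - 2.0 * s1 * s2 * r)

def s_diff_approx(s: float, r: float) -> float:
    """Jacobson-Truax approximation, assuming equal SDs at both occasions:
    S_diff = sqrt(2 * SE^2), with SE = s * sqrt(1 - r)."""
    se = s * math.sqrt(1.0 - r)
    return math.sqrt(2.0 * se**2)

s1, s2, r = 10.0, 12.0, 0.80               # hypothetical values
print(s_diff_direct(s1, s2, r))             # ~7.21
print(s_diff_approx(s1, r))                 # ~6.32 (equal-SD assumption)

# Reliable Change Index for an observed retest change of 9 points:
print(9.0 / s_diff_direct(s1, s2, r))       # ~1.25, below the 1.96 cutoff
```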


2021
Vol 12 (03)
pp. 39-44
Author(s):
Anik Maryani
Fahmy Fachrezzy
Ramdan Pelana

This study aims to determine the relative effectiveness of mix-impact aerobic exercise and the SKJ 2000 version (core exercise) in improving physical fitness in female students. The research was conducted at SMEA YASMA Sudirman Cijantung over 8 weeks with 24 meetings. The method used was an experimental method with a pre- and post-test design. The sampling technique was random sampling: from a total of 40 grade 1 students, 30 samples were taken. Data were collected with a physical fitness test, the Indonesian Physical Fitness Test (TKJI). Hypotheses were tested with the t-test at a significance level (α) of 0.05. The results showed that in the mix-impact aerobic exercise group the difference between the mean values of the initial test (x) and the final test (y) was −6.47; the standard deviation of the difference was 1.2; the standard error of the mean difference was 0.32; and the obtained t-value was −20.2. For the SKJ 2000 version exercise, the difference between the mean values of the initial test (x) and the final test (y) was −5; the standard deviation of the difference was 1.1; the standard error of the mean difference was 0.29; and the obtained t-value was −17.24. Comparing the final test of the mix-impact aerobic exercise group (x) with the final test of the SKJ 2000 aerobic exercise group (y) gave a mean of x = 19.33; a mean of y = 17; a standard deviation of x = 1.48; a standard deviation of y = 2.31; a standard error of x = 0.4; a standard error of y = 0.62; and a standard error of the mean difference between x and y = 0.74. The hypothesis test gave an observed t = 3.15; at 28 degrees of freedom and a significance level (α) of 0.05, the table value is t = 2.048. The study concludes that mix-impact aerobic exercise is more effective in improving physical fitness than the SKJ 2000 version of fitness gymnastics.
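The between-group comparison can be reconstructed directly from the reported summary statistics. The sketch below uses the abstract's numbers; the function name is an illustrative assumption.

```python
# Sketch reconstructing the between-group comparison from the summary
# statistics reported above; numbers come from the abstract, the
# function name is illustrative.
import math

def t_from_summary(mean_x: float, se_x: float,
                   mean_y: float, se_y: float):
    """Independent-groups t from group means and standard errors:
    SE_diff = sqrt(SE_x^2 + SE_y^2), t = (mean_x - mean_y) / SE_diff."""
    se_diff = math.sqrt(se_x**2 + se_y**2)
    return (mean_x - mean_y) / se_diff, se_diff

t, se_diff = t_from_summary(mean_x=19.33, se_x=0.40, mean_y=17.0, se_y=0.62)
print(f"SE of mean difference = {se_diff:.2f}")  # ~0.74, as reported
print(f"observed t = {t:.2f}")  # ~3.16 (abstract: 3.15) vs table t = 2.048 at df = 28
```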


2008
Vol 5 (10)
Author(s):
Abdur Rehman Khaleeq

The main purpose of this study was to investigate the views of students regarding motivational techniques used by the heads of secondary schools in Punjab. A further objective was to identify the students' opinions about the performance of their teachers. The study was descriptive in nature, and the population comprised 748,124 students at the secondary level in Punjab. Fifteen of the 34 districts in the province of Punjab were randomly selected, and cluster sampling was used to choose 300 secondary schools for the sample. Ten students from each school were included, and the sample was further divided equally into male and female as well as urban and rural. Questionnaires were the research instruments for data collection. The data were tabulated, analyzed, and interpreted using appropriate statistical tools: the mean, standard deviation, standard error of the difference between means, and two-tailed t-test. On the basis of the analysis, it was concluded that a majority of students fairly agreed that their teachers appreciated the students' performances openly, criticized them constructively, maintained discipline in class, chided the students for their mistakes, gave feedback on academic matters, solved their problems, and trusted the students.


2020
Vol 42 (4)
pp. 409-410
Author(s):
Chittaranjan Andrade

Many authors are unsure of whether to present the mean along with the standard deviation (SD) or along with the standard error of the mean (SEM). The SD is a descriptive statistic that estimates the scatter of values around the sample mean; hence, the SD describes the sample. In contrast, the SEM is an estimate of how close the sample mean is to the population mean; it is an intermediate term in the calculation of the 95% confidence interval around the mean and (where applicable) of statistical significance; the SEM does not describe the sample. Therefore, the mean should always be accompanied by the SD when describing the sample. There are many reasons why the SEM continues to be reported, and it is argued that none of these is justifiable. In fact, presentation of SEMs may mislead readers into believing that the sample data are more precise than they actually are. Given that the standard error is not presented for other parameters, such as the difference between means or the difference between proportions, it is suggested that presentation of SEM values can be done away with altogether.
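A minimal sketch of the distinction, with made-up sample values: the SD describes the sample, while the SEM (SD/√n) quantifies the precision of the mean and feeds the 95% confidence interval.

```python
# Sketch of the SD / SEM distinction described above; the sample values
# are made up for illustration.
import math
import statistics

sample = [4.2, 5.1, 4.8, 5.6, 4.9, 5.3, 4.4, 5.0]  # hypothetical data

mean = statistics.mean(sample)
sd = statistics.stdev(sample)       # describes the scatter in the sample
sem = sd / math.sqrt(len(sample))   # precision of the mean as an estimate

# The SEM's legitimate role: an intermediate term in the 95% CI.
ci_low, ci_high = mean - 1.96 * sem, mean + 1.96 * sem

print(f"mean = {mean:.2f}, SD = {sd:.2f}  <- report this pair")
print(f"SEM = {sem:.2f}  <- smaller, but does not describe the sample")
print(f"approx. 95% CI for the mean: ({ci_low:.2f}, {ci_high:.2f})")
```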


Micromachines
2021
Vol 12 (5)
pp. 581
Author(s):
Kai Li
Zhenyu Zhao
Houming Zhou
Hao Zhou
Jie Yin
...

As a surface finishing technique based on rapid remelting and re-solidification, laser polishing can effectively eliminate surface asperities so that the part approaches its intended feature size. Nevertheless, the polished surface quality is significantly sensitive to the processing parameters, especially with respect to melt hydrodynamics. In this paper, a transient two-dimensional model was developed to demonstrate the molten flow behavior for different surface morphologies of the Ti6Al4V alloy. The model illustrates the complex evolution of the melt hydrodynamics, involving heat conduction, thermal convection, thermal radiation, melting, and solidification during laser polishing. Results show that a uniform distribution of surface peaks and valleys improves molten flow stability and yields a better smoothing effect. The high cooling rate of the molten pool shortens the melt lifetime, which prevents the peaks from being removed by capillary and thermocapillary forces; this reveals the mechanism of secondary roughness formation on the polished surface. Moreover, the double-spiral-nest Marangoni convection extrudes the melt outwards, resulting in the formation of an expansion near the starting position and depressions at the edges of the polished surface. It is further found that the difference between the simulated and experimental depression depths is only about 2 μm; correspondingly, the errors are approximately 8.3%, 14.3%, and 13.3% for Models 1, 2, and 3, respectively. These results illustrate that the predicted surface profiles agree reasonably well with the experimentally measured surface height data.


1. It is widely felt that any method of rejecting observations with large deviations from the mean is open to some suspicion. Suppose that by some criterion, such as Peirce's and Chauvenet's, we decide to reject observations with deviations greater than 4σ, where σ is the standard error, computed from the standard deviation by the usual rule; then we reject an observation deviating by 4·5σ, and thereby alter the mean by about 4·5σ/n, where n is the number of observations, and at the same time we reduce the computed standard error. This may lead to the rejection of another observation deviating from the original mean by less than 4σ, and if the process is repeated the mean may be shifted so much as to lead to doubt as to whether it is really sufficiently representative of the observations. In many cases, where we suspect that some abnormal cause has affected a fraction of the observations, there is a legitimate doubt as to whether it has affected a particular observation. Suppose that we have 50 observations. Then there is an even chance, according to the normal law, of a deviation exceeding 2·33σ. But a deviation of 3σ or more is not impossible, and if we make a mistake in rejecting it the mean of the remainder is not the most probable value. On the other hand, an observation deviating by only 2σ may be affected by an abnormal cause of error, and then we should err in retaining it, even though no existing rule will instruct us to reject such an observation. It seems clear that the probability that a given observation has been affected by an abnormal cause of error is a continuous function of the deviation; it is never certain or impossible that it has been so affected, and a process that completely rejects certain observations, while retaining with full weight others with comparable deviations, possibly in the opposite direction, is unsatisfactory in principle.
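The instability described here is easy to reproduce. The following sketch (illustrative, with simulated data) implements the iterative rejection rule and shows how each rejection shifts the mean and shrinks the computed scatter, potentially condemning further observations.

```python
# Sketch of the iterative rejection rule criticized above: discard any
# observation deviating from the current mean by more than 4*sigma,
# recompute, and repeat. The simulated data (50 normal points plus one
# contaminated value) are purely illustrative.
import random
import statistics

def iterative_rejection(data, k=4.0):
    """Repeatedly reject points beyond k*sigma of the current mean."""
    kept = list(data)
    while True:
        mean = statistics.mean(kept)
        sigma = statistics.stdev(kept)
        survivors = [x for x in kept if abs(x - mean) <= k * sigma]
        if len(survivors) == len(kept):
            return kept, mean, sigma
        # Each rejection shifts the mean and shrinks sigma, which can
        # condemn observations the original criterion retained.
        kept = survivors

random.seed(1)
data = [random.gauss(0.0, 1.0) for _ in range(50)] + [6.0]  # one outlier
kept, mean, sigma = iterative_rejection(data)
print(f"rejected {len(data) - len(kept)} of {len(data)}; "
      f"final mean = {mean:.3f}, sigma = {sigma:.3f}")
```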


2014
Vol 953-954
pp. 1035-1039
Author(s):
Li Qun Wang
Zhong Bo Yi
Zhong Xiang Wei

Aiming to improve the utilization of pulverized coal, high-temperature heat pipe technology was introduced into lignite carbonization. A 10 kW semi-industrial pulverized coal carbonization test rig was designed, and Fugu lignite was used as the raw material to investigate the operating characteristics of the device and its carbonization characteristics. The experimental results show that the high-temperature heat pipes supply heat steadily and meet the temperature requirement of low-temperature carbonization. With the extension of the holding time, the fixed carbon content of the semi-coke increases while the volatile matter decreases; however, above a holding time of 60 minutes the effect of carbonization is no longer obvious, and the best carbonization time is 30–60 minutes. The length of the holding time has little effect on gas composition: the contents of H2 and CH4 are higher than those of the other gases, with (H2 + CH4) accounting for 70% of the total, and the heating value remains at 18.76–19.22 MJ/m³. This is a medium-to-high-value gas that could serve industrial and civilian use.

