Use of the F test for determining the degree of enzyme-kinetic and ligand-binding data. A Monte Carlo simulation study

1983 ◽  
Vol 211 (1) ◽  
pp. 23-34 ◽  
Author(s):  
F J Burguillo ◽  
A J Wright ◽  
W G Bardsley

1. Initial-rate data were simulated for 13 representative enzyme mechanisms with the use of several distributions of rate constants in order to locate conditions leading to v([S]) curves in physiological ranges of substrate concentration. 2. In all, 420 sets of such v([S]) curves were generated with the use of several choices of substrate concentration range (two, three or four orders of magnitude), number of experimental points (10, 15 or 20), error on v (5-10%) and standard deviation on v (5-9%) in order to simulate experimental results in a number of possible ways. 3. Curve-fitting was carried out to rational functions of degree 1:1, 2:2, …, 5:5 until there was no statistically significant decrease in the sum of weighted squared residuals as judged by the F test at the 95% and 99% confidence levels. 4. It was checked whether the non-linear regression program had located a good minimum in the sum of squares by also fitting the data with the correct values of the parameters as starting estimates. 5. A similar procedure was adopted with 110 sets of binding data simulated for 11 models, and the F test was used to see whether fractional-saturation data generated by a binding polynomial of order n could be adequately fitted by one of order m, m < n. 6. From the 530 simulations the F test was successful in fixing the correct degree with a probability of 0.62 at the 95% confidence level, but this fell with increase in degree as follows: 1:1 (0.98), 2:2 (0.71), 3:3 (0.43) and 4:4 (0.34), the first numbers being the degree of the rate equation and those in parentheses referring to the 95% confidence level. 7. It made little difference whether the 95% or the 99% confidence level was consulted, as there were very few borderline cases. 8. The chance of detecting deviations from Michaelis-Menten kinetics, i.e. terms of at least second order in a rate equation of degree n:n, n > 1, was estimated to be about 0.8. 9. The probability of the F test leading to a spurious result because of error in the data was found to be about 0.04. 10. The probability with which 4:4 mechanisms can lead to v([S]) plots with no, one, two or three turning points was computed, and it was established that there is a small but finite chance that the increase in degree that occurs in some mechanisms when ES ⇌ EP interconversions are explicitly allowed for can be detected by the F test.
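
The degree-selection procedure can be sketched in code: fit rational functions of increasing degree and accept the higher degree only if the F test reports a significant drop in the residual sum of squares. This is a minimal illustration on synthetic Michaelis-Menten data; the function names, starting estimates and error magnitudes are our own choices, not the authors' program:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import f as f_dist

def rational_11(S, a1, b1):
    # Degree 1:1 rational function (Michaelis-Menten form)
    return a1 * S / (1 + b1 * S)

def rational_22(S, a1, a2, b1, b2):
    # Degree 2:2 rational function (can show non-hyperbolic behaviour)
    return (a1 * S + a2 * S**2) / (1 + b1 * S + b2 * S**2)

def f_test_degree(S, v, alpha=0.05):
    """Decide between degree 1:1 and 2:2 by the F test on the drop
    in the residual sum of squares. Returns (accept_22, F, F_crit)."""
    p1, _ = curve_fit(rational_11, S, v, p0=[1.0, 1.0],
                      bounds=(0, np.inf), maxfev=10000)
    p2, _ = curve_fit(rational_22, S, v, p0=[1.0, 0.1, 1.0, 0.1],
                      bounds=(0, np.inf), maxfev=10000)
    ss1 = np.sum((v - rational_11(S, *p1)) ** 2)
    ss2 = np.sum((v - rational_22(S, *p2)) ** 2)
    n, k1, k2 = len(S), 2, 4
    F = ((ss1 - ss2) / (k2 - k1)) / (ss2 / (n - k2))
    F_crit = f_dist.ppf(1 - alpha, k2 - k1, n - k2)
    return F > F_crit, F, F_crit

# Synthetic data of true degree 1:1: 15 points over 3 orders of
# magnitude of [S], with 5% relative error on v
rng = np.random.default_rng(1)
S = np.logspace(-1, 2, 15)
v = 2.0 * S / (1.0 + 0.5 * S) * (1 + 0.05 * rng.standard_normal(S.size))
accept_22, F, F_crit = f_test_degree(S, v)
```

Since the 2:2 family contains the 1:1 family as a special case, the fit cannot meaningfully worsen; the F test asks whether the improvement exceeds what error alone would produce.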

2017 ◽  
Vol 5 (2) ◽  
pp. 121
Author(s):  
Mazen Doumani ◽  
Adnan Habib ◽  
Abrar Alhababi ◽  
Ahmad Bashnakli ◽  
Enass Shamsy ◽  
...  

Self-confidence assessment in newly graduated students is very important for evaluating undergraduate endodontic courses. Objective: The aim of this study was to obtain information from internship dentists at Alfarabi Dental College about their confidence levels during root canal treatment procedures. Methods: Anonymous survey forms were sent to 150 internship dentists at Alfarabi Dental College. They were asked to indicate their self-confidence level on a Likert scale ranging from 1 to 5. Results: Removal of broken instruments was identified as a procedure not experienced by 25.2% of the dentists. 44.6% of dentists felt confident about taking radiographs during root canal treatment, while 1.9% reported having very little confidence during retreatment. Irrigation was a procedure about which they felt very confident (59.2%). Conclusion: Non-practiced endodontic procedures were clearly related to levels of self-confidence among internship dentists; this suggests that further studies in dental schools should be performed to determine the weak points or gaps in undergraduate endodontic courses.


2020 ◽  
Author(s):  
Cristina Lopes ◽  
João Velez

For years, diatom-based biostratigraphy has been setting bio-events based on a qualitative approach. This means that the biostratigraphy would set an age based on the presence or absence of a certain species. However, how many specimens are needed to consider a certain datum as certain? One, ten, a hundred? Moreover, each biostratigrapher sets their own limits: one might consider a single specimen enough, and another might require ten. Therefore, the scale most often used is absent, rare, frequent, common, dominant or abundant, with an explanation of what these definitions mean. This is very common in, for example, IODP expeditions.

However, what would happen to these biostratigraphic levels if one were to apply, for example, a 95% confidence level? Moreover, what would happen to an age model if this concept were applied to all the biostratigraphic microfossils?

Here we will show the differences in the Expedition 346 age model with and without confidence levels applied to diatoms. The differences can be significant: even the existence of a hiatus can be reconsidered when confidence limits are applied, turning a possible hiatus into a very slow sedimentation rate, with serious implications for the initial paleoceanographic interpretations.
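
A standard way to quantify "how many specimens make an absence certain" is the binomial argument: if a species makes up a fraction p of the assemblage, the probability of not seeing it in n randomly examined specimens is (1 - p)^n. A minimal sketch of this calculation (our own illustration of the concept, not the authors' method):

```python
import math

def specimens_for_absence(p, confidence=0.95):
    """Minimum number of specimens to examine so that, if a species
    present at relative abundance p is never seen, its true absence
    can be asserted at the given confidence level:
    P(miss in n draws) = (1 - p)**n <= 1 - confidence."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - p))

# A species making up 1% of the assemblage
n = specimens_for_absence(0.01)   # 299 specimens
```

At the 95% confidence level, declaring a 1% species truly absent requires about 299 barren specimens; changing the threshold changes the counts dramatically, which is exactly the kind of effect that can turn a qualitative datum into a different age model.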


Author(s):  
X. Jin ◽  
P. Woytowitz ◽  
T. Tan

The reliability performance of Semiconductor Manufacturing Equipment (SME) is very important for both equipment manufacturers and customers. However, the response variables are random in nature and can change significantly due to many factors. In order to track equipment reliability performance with a given confidence, this paper proposes an efficient methodology for calculating the number of samples needed to measure the reliability performance of SME tools. The paper presents a frequency-based statistical methodology for calculating the number of sampled tools required to evaluate SME reliability field performance at given confidence levels and error margins. One example case is investigated to demonstrate the method. We demonstrate that the multi-week accumulated average reliability metric of multiple tools does not equal the average of the tools' individual multi-week accumulated average reliability metrics. We show how the number of required sampled tools increases as reliability performance improves, and quantify the larger number of sampled tools required when a tighter margin of error or a higher confidence level is needed.
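
The core sample-size calculation for estimating a mean reliability metric within a margin of error E at a given confidence level is n ≥ (z·σ/E)². A minimal sketch (the MTBF numbers are hypothetical, not from the paper):

```python
import math
from scipy.stats import norm

def sample_size_for_mean(std_dev, margin, confidence=0.95):
    """Number of sampled tools needed so that the mean of a
    reliability metric is estimated within +/- margin at the
    given confidence level: n >= (z * sigma / E)**2."""
    z = norm.ppf(0.5 + confidence / 2)   # two-sided critical value
    return math.ceil((z * std_dev / margin) ** 2)

# e.g. a weekly MTBF metric with sigma = 40 h, margin of error 10 h
n_95 = sample_size_for_mean(40, 10, 0.95)   # 62 tools
n_99 = sample_size_for_mean(40, 10, 0.99)   # 107 tools
```

Tightening the margin or raising the confidence level both drive the required number of sampled tools up, which mirrors the paper's conclusion.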


2017 ◽  
Vol 2017 ◽  
pp. 1-9
Author(s):  
Xiao-Lei Wang ◽  
Da-Gang Lu

The mean seismic probability risk model has been widely used in seismic design and safety evaluation of critical infrastructures. In this paper, a confidence-level analysis is conducted and error equations are derived for the mean seismic probability risk model. It is found that the confidence levels and error values of the mean model vary between sites, and that for most sites the confidence levels are low and the error values are large. The confidence levels of ASCE/SEI 43-05 design parameters are also analyzed, and the error equation of the achieved performance probabilities based on ASCE/SEI 43-05 is obtained. The confidence levels of design results obtained using the ASCE/SEI 43-05 criteria are found to be below 95%, the high-confidence uniform risk cannot be achieved with these criteria, and for some sites the error values between the risk model with the target confidence level and the mean risk model are large. It is suggested that a seismic risk model with high confidence levels, rather than the mean seismic probability risk model, should be used in the future.


2009 ◽  
Vol 54 (183) ◽  
pp. 119-138 ◽  
Author(s):  
Milica Obadovic ◽  
Mirjana Obadovic

This paper presents a market risk evaluation for a portfolio of shares continuously traded on the Belgrade Stock Exchange, obtained by applying the Value-at-Risk model (the analytical method). It describes how the analytical method is applied and compares the results obtained at different confidence levels. The method was verified on the basis of the failure rate, which showed the confidence level at which the method is acceptable under the given conditions.
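
The analytical (variance-covariance) method estimates VaR from the mean and standard deviation of returns under a normality assumption. A simplified single-series sketch on simulated returns (not the authors' Belgrade Stock Exchange data):

```python
import numpy as np
from scipy.stats import norm

def parametric_var(returns, value, confidence=0.95):
    """One-day Value-at-Risk by the analytical (variance-covariance)
    method: VaR = (z * sigma - mu) * portfolio value."""
    mu, sigma = np.mean(returns), np.std(returns, ddof=1)
    z = norm.ppf(confidence)
    return (z * sigma - mu) * value

# Hypothetical daily portfolio returns over one trading year
rng = np.random.default_rng(42)
r = rng.normal(0.0005, 0.02, 250)
var_95 = parametric_var(r, 1_000_000, 0.95)
var_99 = parametric_var(r, 1_000_000, 0.99)  # higher confidence -> larger VaR
```

Backtesting by failure rate then counts how often realized losses exceed the VaR estimate; at the 95% level roughly 5% exceedances are expected.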


1987 ◽  
Vol 40 (3) ◽  
pp. 423 ◽  
Author(s):  
RW Clay

An examination is made of published data on cosmic ray anisotropy at energies above about 10^15 eV. Both amplitude and phase results are examined in an attempt to assess the confidence that can be placed in the observations as a whole. It is found that whilst many individual published results may suggest quite high confidence levels for a real measured anisotropy, the data taken as a whole are less convincing. Some internal consistency in the phase results suggests that a real effect may have been measured but, again, not at a high confidence level.


2020 ◽  
Vol 9 (1) ◽  
Author(s):  
Yahya - Yahya ◽  
Zenitha - Maulida ◽  
Yusra - Yusra ◽  
Lidya - Makmur

The aim of this study is to investigate the influence of price and service quality on the customer satisfaction of Batik Air Banda Aceh. Data from 96 respondents were collected through a questionnaire. The hypotheses were tested using multiple linear regression, the F-test and the t-test, to assess the simultaneous and partial influence of the independent variables on the dependent variable at a 95% confidence level (α = 0.05). The results of the t-test (partial test) show that price and service quality influence the customer satisfaction level of Batik Air in Banda Aceh. Implications and suggestions are also discussed in this study.
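
The testing scheme described, multiple linear regression with an F test for the simultaneous influence and t-tests for the partial influence at α = 0.05, can be sketched as follows (synthetic data with hypothetical coefficients, not the study's survey responses):

```python
import numpy as np
from scipy import stats

def ols_tests(X, y):
    """OLS regression with the overall F test (simultaneous influence)
    and per-coefficient t-tests (partial influence).
    X: (n, k) predictor matrix without the intercept column."""
    n, k = X.shape
    Xd = np.column_stack([np.ones(n), X])          # add intercept
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    sse = resid @ resid
    sst = np.sum((y - y.mean()) ** 2)
    df_model, df_resid = k, n - k - 1
    F = ((sst - sse) / df_model) / (sse / df_resid)
    se = np.sqrt(sse / df_resid * np.diag(np.linalg.inv(Xd.T @ Xd)))
    t = beta / se
    p_t = 2 * stats.t.sf(np.abs(t), df_resid)      # partial tests
    p_F = stats.f.sf(F, df_model, df_resid)        # simultaneous test
    return beta, F, p_F, t, p_t

# Hypothetical data: satisfaction ~ price perception + service quality
rng = np.random.default_rng(0)
n = 96
X = rng.normal(3.5, 0.6, (n, 2))
y = 0.5 + 0.4 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.3, n)
beta, F, p_F, t, p_t = ols_tests(X, y)
```

With p_F and the individual p_t values below 0.05, both the simultaneous and the partial influences would be declared significant at the 95% confidence level.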


Author(s):  
Dejin Tang ◽  
Xiaoming Zhou ◽  
Jie Jiang ◽  
Caiping Li

Because of the characteristics of LIDAR systems, raw point clouds represent both terrain and non-terrain surfaces. In order to generate a DTM, this paper introduces an improved filtering method based on segment-based algorithms. The method generates segments by clustering points based on surface fitting, and uses topological and geometric properties for classification. The process involves three major steps. First, the whole dataset is split into several small overlapping tiles. For each tile, accurate segments are found by removing wall and vegetation points, and the segments from all tiles are assigned unique segment numbers. In the following step, topological descriptions of the segment distribution pattern and of height jumps between adjacent segments are identified in each tile. Based on this topology and geometry, a segment-based filtering algorithm classifies the segments in each tile. Then, based on the spatial location of each segment within a tile, one of two confidence levels is assigned to the classified segments; a segment receives the low confidence level when geometric or topological information is lost at tile boundaries. A combination algorithm therefore detects the corresponding parts of incomplete segments across multiple tiles, and another classification algorithm is applied to these segments, whose results then have the high confidence level. After that, all segments in a tile have classification results with a high confidence level. The final DTM adds all the terrain segments while avoiding duplicate points. At the end of the paper, an experiment shows the filtering result and compares it with other classical filtering methods; the analysis shows that the method has an advantage in DTM precision. However, because of the complicated algorithms, processing is somewhat slower, which is a direction for future research.


2022 ◽  
Author(s):  
THEODORE MODIS

Look-up tables and graphs are provided for determining the uncertainties during logistic fits on the three parameters M, α and t0 describing an S-curve of the form S(t) = M/(1 + exp(-α(t - t0))). The uncertainties and the associated confidence levels are given as a function of the uncertainty on the data points and the length of the historical period. Correlations between these variables are also examined; they make "what-if" games possible even before doing the fit. The study is based on some 35,000 S-curve fits on simulated data covering a variety of conditions and carried out via a χ² minimization technique. A rule-of-thumb general result is that, given at least half of the S-curve range and a precision of better than 10% on each historical point, the uncertainty on M will be less than 20% with 90% confidence level.
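
A single such fit can be sketched as a weighted least-squares (χ²) minimization; the data, error level and starting estimates below are illustrative, not the study's 35,000 simulated cases:

```python
import numpy as np
from scipy.optimize import curve_fit

def s_curve(t, M, alpha, t0):
    # Logistic S-curve S(t) = M / (1 + exp(-alpha * (t - t0)))
    return M / (1 + np.exp(-alpha * (t - t0)))

# Simulated history covering roughly half of the S-curve range
# (t0 = 10 is the midpoint), with ~5% error per point
rng = np.random.default_rng(3)
t = np.arange(0, 11, dtype=float)
true = s_curve(t, 100.0, 0.8, 10.0)
y = true * (1 + 0.05 * rng.standard_normal(t.size))
sigma = 0.05 * true                      # known point errors -> chi^2 fit

popt, pcov = curve_fit(s_curve, t, y, p0=[80.0, 1.0, 8.0],
                       sigma=sigma, absolute_sigma=True, maxfev=20000)
M_hat, M_err = popt[0], float(np.sqrt(pcov[0, 0]))
```

The relative uncertainty M_err / M_hat, tabulated over many such runs for different error levels and history lengths, is the kind of quantity the look-up tables summarize.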


2019 ◽  
Vol 14 (1) ◽  
pp. 2-21 ◽  
Author(s):  
Lindsey Sikora ◽  
Karine Fournier ◽  
Jamie Rebner

Abstract Objective – Academic librarians consistently offer individualized help to students and researchers, yet few studies have empirically examined the impact of individualized research consultations (IRCs). For many librarians, IRCs are an integral part of their teaching repertoire. However, without any evidence of an IRC's effectiveness or value, one might ask whether it is worth investing so much time and effort. Our study explored the impact of IRCs on students' search techniques and self-perceived confidence levels. We attempted to answer the following questions: 1) Do IRCs improve students' information searching techniques, including the proper use of keywords and/or subject headings, the accurate use of Boolean operators, and the appropriate selection of specialized resources/databases? 2) Do IRCs influence students' confidence level in performing effective search strategies? Methods – Our study used a mixed-methods approach. Our participants were students from the Faculties of Health Sciences and Medicine at the University of Ottawa, completing an undergraduate or graduate degree and undertaking a research or thesis project. Participants were invited to complete two questionnaires, one before and one after meeting with a librarian. The questionnaires consisted of open-ended and multiple-choice questions, which assessed students' search techniques, their self-perceived search-technique proficiency, and their confidence level. A rubric was used to score students' open-ended answers, and self-reflective questions were coded and analyzed for content using the software QSR NVivo. Results – Twenty-nine completed pre- and posttests were gathered from February to September 2016. After coding the answers using the rubric, two paired-samples t-tests were conducted. The first t-test showed that the improvement in students' ability to use appropriate keywords was approaching statistical significance.
The second t-test showed a statistically significant increase in students' ability to use appropriate search strings from the pretest to the posttest. We performed a final paired-samples t-test to measure students' confidence level before and after the appointment, and found a statistically significant increase in confidence. Conclusion – Of the three paired t-tests performed, two showed a statistically significant difference from the pretest to the posttest, with the third approaching statistical significance. The analysis of our qualitative results also supports the statement that IRCs have a real, positive impact on students' search techniques and confidence levels. Future research may explore specific techniques to improve search strategies across various disciplines, tips to improve confidence levels, and the viewpoint of librarians.
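
The pre/post comparison is a paired-samples t-test; a minimal sketch with hypothetical rubric scores (not the study's data):

```python
from scipy import stats

# Hypothetical rubric scores for the same students before and
# after an individualized research consultation
pre  = [4, 5, 3, 6, 5, 4, 2, 5, 6, 3, 4, 5]
post = [6, 7, 5, 7, 6, 6, 4, 6, 8, 5, 5, 7]

# Paired test: each student's posttest is compared with their pretest
t_stat, p_value = stats.ttest_rel(post, pre)
significant = p_value < 0.05   # 95% confidence level
```

A positive t statistic with p < 0.05 corresponds to the statistically significant pre-to-post increase the study reports.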

