How to certify reference materials: by voting or by exact weights

2021 ◽  
Vol 04 (1) ◽  
pp. 20-26
Author(s):  
A. B. Kopiltsova ◽  
B. P. Tarasov ◽  
U. A. Kopiltsov ◽  
...  

The problems of using reference materials (RMs) for the moisture content of oil and petroleum products are discussed for different certification programs: by the accuracy of the preparation procedure (RM-PP), by the results of interlaboratory studies (RM-INTERLAB), and by reference methods (RM-RM). Comparable NIST samples (SRM 2271 and 272) were used for comparison. The problem with using RM-INTERLAB to control the accuracy of standard methods is the absence of a true value; this task is not even posed. "Accuracy control" has a different emphasis in this situation: RM-INTERLAB allows a "vote by majority" to be taken across the general population of laboratories, cutting off the others. Their main application is therefore qualification tasks, and this approach is fundamentally incorrect for analyzers. In the case of RM-PP and RM-RM, the main problem is the difference between the composition of real samples and the ideal matrices of the RMs. Their main application is controlling the accuracy of analyzers in the absence of interfering influences. "Cheap" RM production technologies do not allow comprehensive control of real oil samples. The growing complexity of oil technologies and the use of heavy oils in refining could drive progress in RMs.

2010 ◽  
Vol 8 (3) ◽  
pp. 594-601 ◽  
Author(s):  
Henryk Matusiewicz ◽  
Ewa Stanisz

Abstract Sample preparation methods for non-separation cold vapor atomic absorption spectrometry (CVAAS) sequential inorganic mercury speciation in biological certified reference materials (CRMs) were investigated. The methylmercury concentration was calculated as the difference between total and inorganic mercury. A microwave-assisted decomposition method and three ultrasonic extraction procedures, based on acid leaching with HCl and HCOOH and on solubilization with TMAH, were employed as sample preparation methods. The replacement of a sample decomposition procedure by extraction prior to analysis by CVAAS, as well as the speciation aspect of the analysis, is discussed. The limits of detection in the sample were determined as 50 and 10 ng L⁻¹ for inorganic and total mercury, which correspond to absolute detection limits of 40 and 8 ng g⁻¹ for inorganic and total mercury, respectively. The results agreed with the certified values for total and inorganic mercury in the reference materials investigated (t-test at the 95% confidence level). From the analysis of the CRMs, it was evident that the difference between the total and inorganic mercury concentrations agrees with the methylmercury concentration. The relative standard deviation was better than 11% for most of the samples.
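In symbols (notation ours, not the paper's), the difference calculation reads as below; the uncertainty-propagation expression is a standard assumption added for illustration and is not taken from the paper.

```latex
% Methylmercury by difference, as described in the abstract; the combined
% standard uncertainty follows standard error propagation (our assumption).
c_{\mathrm{MeHg}} = c_{\mathrm{Hg,total}} - c_{\mathrm{Hg,inorg}},
\qquad
u(c_{\mathrm{MeHg}}) = \sqrt{u^{2}(c_{\mathrm{Hg,total}}) + u^{2}(c_{\mathrm{Hg,inorg}})}
```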


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Roger P. A’Hern

Abstract Background: Accuracy can be improved by taking multiple synchronous samples from each subject in a study to estimate the endpoint of interest if sample values are not highly correlated. If feasible, it is useful to assess the value of this cluster approach when planning studies. Multiple assessments may be the only method to increase power to an acceptable level if the number of subjects is limited. Methods: The main aim is to estimate the difference in outcome between groups of subjects by taking one or more synchronous primary outcome samples or measurements. A summary statistic from multiple samples per subject will typically have a lower sampling error. The number of subjects can be balanced against the number of synchronous samples to minimize the sampling error, subject to design constraints. This approach can include estimating the optimum number of samples given the cost per subject and the cost per sample. Results: The accuracy improvement achieved by taking multiple samples depends on the intra-class correlation (ICC). The lower the ICC, the greater the benefit that can accrue. If the ICC is high, then a second sample will provide little additional information about the subject's true value. If the ICC is very low, adding a sample can be equivalent to adding an extra subject. Benefits of multiple samples include the ability to reduce the number of subjects in a study and to increase both the power and the available alpha. If, for example, the ICC is 35%, adding a second measurement can be equivalent to adding 48% more subjects to a single-measurement study. Conclusion: A study's design can sometimes be improved by taking multiple synchronous samples. It is useful to evaluate this strategy as an extension of a single-sample design. An Excel workbook is provided to allow researchers to explore the most appropriate number of samples to take in a given setting.
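A minimal sketch (not the paper's Excel workbook) of the relative-efficiency calculation implied above, under the usual compound-symmetry assumption that all within-subject samples share the same ICC:

```python
# Minimal sketch: relative efficiency of taking m synchronous samples per subject
# under a compound-symmetry (single-ICC) assumption.
def relative_efficiency(m: int, icc: float) -> float:
    """Variance reduction for the per-subject mean of m samples, relative to one
    sample; equivalently, the factor of extra subjects a single-sample design
    would need to match it."""
    return m / (1.0 + (m - 1) * icc)

# Example quoted in the abstract: ICC = 0.35, two measurements per subject.
print(round(relative_efficiency(2, 0.35), 2))  # 1.48 -> roughly 48% more subjects
```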


2021 ◽  
Author(s):  
Stephan van der Westhuizen ◽  
Gerard Heuvelink ◽  
David Hofmeyr

Digital soil mapping (DSM) may be defined as the use of a statistical model to quantify the relationship between a soil property observed at various geographic locations and a collection of environmental covariates, and then using this relationship to predict the soil property at locations where it was not measured. It is also important to quantify the uncertainty associated with the predictions in these soil maps. An important source of uncertainty in DSM is measurement error, defined as the difference between a measured value and the true value of a soil property.

The use of machine learning (ML) models such as random forests (RF) has become a popular trend in DSM, because ML models tend to be capable of accommodating highly non-linear relationships between the soil property and the covariates. However, it is not clear how to incorporate measurement error into ML models. In this presentation we discuss how to incorporate measurement error into some popular ML models, starting with incorporating weights into the objective function of ML models that implicitly assume a Gaussian error. We discuss the effect that these modifications have on prediction accuracy, with reference to simulation studies.
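A hypothetical sketch of the weighting idea described above (not the authors' implementation): for ML models fitted by minimizing a squared-error, implicitly Gaussian objective, observations can be down-weighted in proportion to their measurement-error variance, for example via scikit-learn's sample_weight argument. All variable names and error variances below are simulated assumptions.

```python
# Hypothetical sketch: observations measured with larger error variance
# contribute less to a squared-error (implicitly Gaussian) fit.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 5))                          # simulated environmental covariates
soil_true = X @ np.array([1.0, -0.5, 0.3, 0.0, 0.2])   # "true" soil property (assumed)
sd_meas = rng.uniform(0.1, 1.0, size=200)              # known measurement-error std. dev.
y = soil_true + rng.normal(scale=sd_meas)              # error-contaminated measurements

weights = 1.0 / sd_meas**2                             # inverse-variance weights
rf = RandomForestRegressor(n_estimators=300, random_state=0)
rf.fit(X, y, sample_weight=weights)                    # weighted squared-error fitting
print(rf.predict(X[:3]))                               # predictions at three locations
```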


2021 ◽  
Vol 15 (11) ◽  
pp. 3050-3053
Author(s):  
Maida Saadat ◽  
Muhammad Mawaz Anjum ◽  
Faiza Farooq ◽  
Rehan Aslam Gill ◽  
Abeer Yasin ◽  
...  

Aim: To determine the diagnostic accuracy and epidemiology of placenta accreta spectrum (PAS) in patients with placenta previa. Methods: PubMed, Google Scholar, ClinicalTrials.gov and MEDLINE were searched between January 1992 and December 2020 for studies on placenta previa complicated by PAS diagnosed in a defined obstetric population. The review was carried out using standard methods and protocols, with study quality assessed against the Newcastle-Ottawa scale and differences resolved by consensus. The main outcome is the overall diagnostic accuracy of ultrasonographic findings; the prevalence of placenta accreta in patients with placenta previa and its incidence across different countries are also described. Results: About 300 articles were evaluated in this review, and about 15 prospective and 14 retrospective case studies of placenta previa complicated by PAS were included for assessment. According to the meta-analysis, significant (p<0.001) heterogeneity was found between the studies that evaluated PAS prevalence and incidence in the placenta previa cohort. The median prevalence of placenta previa with PAS was 0.113% (IQR 0.048–0.17), whereas the incidence of PAS among women with placenta previa was 11.3%. Conclusions: The high level of diversity in the diagnostic and qualitative data shows that strong emphasis should be placed on implementing standard methods and protocols for the assessment and diagnosis of pregnancy complications such as placenta previa, its type, and PAS. Keywords: Sonography, placenta previa, placenta accreta spectrum


Author(s):  
Wenwen Cheng ◽  
J. O. Spengler ◽  
Robert D. Brown

Current methods for estimating the heat vulnerability of young athletes use a heat index (HI) or the wet bulb globe temperature (WBGT), neither of which fully includes the environmental or physiological characteristics that can affect a person's heat budget, particularly where activity occurs on a synthetic surface. This study analyzed and compared the standard methods, HI and WBGT, with a novel and more comprehensive method termed COMFA-Kid (CK), which is based on an energy-budget model explicitly designed for youth. The COMFA model was presented at the same time to demonstrate the difference between a child and an adult during activity. Micrometeorological measurements were taken at a synthetic-surfaced football field during mid-day in hot environmental conditions. The standard methods (HI and WBGT) indicated that conditions on the field were relatively safe for youth to engage in activities related to football practice or games, whereas the CK method indicated that conditions were dangerously hot and could lead to exertional heat illness. Estimates using the CK method also indicated that coaches and staff standing on the sidelines, and parents sitting in the stands, would not only be safe from heat but would be thermally comfortable. The difference in thermal comfort experienced by coaches and staff off the field, versus that experienced by young players on the field, could affect decision-making regarding the duration and intensity of practices and time in the game. The CK method, which is easy to use and can be modified for specific conditions, would provide more accurate estimates of heat safety on outdoor synthetic surfaces in particular and in sports with a high prevalence of heat illness such as football, and should be considered as a complementary or alternative preventive measure against heat.
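For orientation only, a generic steady-state human energy-budget balance of the kind COMFA-type models evaluate is sketched below; the symbols and sign conventions are illustrative and are not the paper's exact COMFA-Kid formulation.

```latex
% Illustrative energy-budget balance (not the paper's exact equations):
% B near zero indicates thermal comfort; a large positive B indicates a dangerous heat load.
B = M + R_{\mathrm{abs}} - C_{\mathrm{conv}} - E_{\mathrm{evap}} - L_{\mathrm{emit}}
```

Here M is metabolic heat production, R_abs the absorbed radiation, and the last three terms are convective, evaporative, and emitted longwave losses.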


1975 ◽  
Vol 58 (4) ◽  
pp. 689-691
Author(s):  
Harold N Macfarland

Abstract Two kinds of problems associated with developing standardized procedures for the safety evaluation of new compounds are identified. The first is the question of the desirability of using such standard methods. It is concluded that a basic set of procedures is to be recommended, but that this should be supplemented with special tests as may be indicated. The second problem is connected with the technical difficulties of any given type of assay and is normally dealt with in terms of the state of the art at the time. Assays by the inhalation route tend to be custom designed and do not follow standard protocols. One cause of this situation is the propensity of individual investigators to design de novo the equipment used to expose animals to airborne substances. A second is that confusion exists because investigators do not always appreciate that the concentration-time product is not the same as the true dose received by the exposed subjects, which may lead to anomalies when dose-response relationships are being characterized. It is suggested that interlaboratory studies be undertaken to ascertain the variability that might be expected in independent assays of inhalation toxicity.


2019 ◽  
Vol 124 ◽  
pp. 01017
Author(s):  
O. S. Sirotkin ◽  
A. M. Pavlova ◽  
R. O. Sirotkin ◽  
A. E. Buntin

Within the framework of the unified model of chemical bonding and the authors' methods for quantitatively assessing the components of mixed chemical interaction between elements in compounds, a new approach was developed to assess the structural and energy characteristics of substances and fuels. It consists in establishing a correlation based on the difference between the chemical bond components of the reactants and those of the end products. Changes in the chemical bond components affect such characteristics of chemical reactions as the heat of formation of the reaction products, their redox properties, whether the reaction is endo- or exothermic, and the heat of fuel combustion reactions. This approach is an additional reserve for improving methods of assessing the energy characteristics of fuels and increasing the efficiency of energy production technologies.


1990 ◽  
Vol 36 (2) ◽  
pp. 366-369 ◽  
Author(s):  
S M Marcovina ◽  
J L Adolphson ◽  
M Parlavecchia ◽  
J J Albers

Abstract A common accuracy-based standardization program is indispensable for establishing reference intervals for the clinical use of apolipoproteins. The development and distribution of reference materials and quality-control materials that do not exhibit matrix effects between methods are essential to the standardization process. We examined the suitability of lyophilized material as a common reference material for the measurement of apolipoproteins A-I and B. We determined values for apolipoproteins A-I and B in frozen and lyophilized serum pools, using different immunochemical approaches. We found little or no difference in apolipoprotein A-I values between frozen and lyophilized pools as determined by the different methods. In contrast, values for apolipoprotein B in lyophilized samples were consistently lower than those obtained for frozen samples. After adjusting for the effect of dilution due to reconstitution, the difference in the apolipoprotein B values for lyophilized as compared with frozen samples ranged from -26% to 4%, depending upon the assay method. Evidently, serum pools in lyophilized form are not a suitable matrix for reference materials for apolipoprotein B measurements but can be used for apolipoprotein A-I measurements.


Author(s):  
Shuichi Fukuda

Our traditional design has been producer-centric. But to respond to frequent and extensive changes and increasing diversification, we have to change our design to be user-centric. This is not a straightforward extension, and just listening to the voice of the customer is not enough. Value is defined as value = performance/cost, but in current design performance has been interpreted solely as the functions of a final product, and all other factors such as manufacturing are considered as cost. This framework was effective until recently because there was an asymmetry of information between the producer and the customer. As the producer had the greater amount of information, they only had to produce the product they thought best, and it genuinely satisfied the customer who needed a product. The 20th century was the age of products. But as we approached the 21st century, we entered the information society, and sometimes the customer knows more than the producer. Thus, such a one-way flow of development to fill the information (water-level) gap does not work any more, because the gap is quickly disappearing. In traditional design this difference was evaluated as value, and it meant profit for the producer. Therefore, a new approach to creating value is called for. One solution is for the producer and the customer to raise the water level together, so that the increase in level serves as profit for the producer and as true value for the customer. In order to achieve this goal, we have to identify what the true value for the customer is, and we have to step outside our traditional notion of value as the functions of a final product. What is the true value for the customer? It is customers' satisfaction. Then how can we satisfy our customers? This paper points out that if we note that our customers are very active and creative, we can provide satisfaction by involving them in the whole process of product development. Our customers can then enjoy not only the product experience but also the process experience, which satisfies their needs for self-actualization and challenge, i.e., their highest human needs.


Author(s):  
Yanjun Zhang ◽  
Tingting Xia ◽  
Mian Li

Abstract Various types of uncertainties, such as parameter uncertainty, model uncertainty, and metamodeling uncertainty, may lead to low robustness. Parameter uncertainty can be either epistemic or aleatory in physical systems, and these have been widely represented by intervals and probability distributions, respectively. Model uncertainty is formally defined as the difference between the true value of the real-world process and the output of the simulation code at the same value of the inputs. Additionally, metamodeling uncertainty is introduced through the use of metamodels. To reduce the effects of uncertainties, robust optimization (RO) algorithms have been developed to obtain solutions that are not only optimal but also less sensitive to uncertainties. Based on how parameter uncertainty is modeled, there are two categories of RO approaches: interval-based and probability-based. In real-world engineering problems, both interval and probabilistic parameter uncertainties are likely to exist simultaneously in a single problem. However, few works have considered mixed interval and probabilistic parameter uncertainties together with other types of uncertainties. In this work, a general RO framework is proposed to deal with mixed interval and probabilistic parameter uncertainties, model uncertainty, and metamodeling uncertainty simultaneously in design optimization problems, using intervals-of-statistics approaches. The consideration of multiple types of uncertainties improves the robustness of optimal designs and reduces the risk of inappropriate decision-making, low robustness, and low reliability in engineering design. Two test examples are utilized to demonstrate the applicability and effectiveness of the proposed RO approach.
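As a hedged sketch (notation ours, not the paper's), the model-uncertainty definition quoted above, together with the extra error term introduced when a metamodel replaces the simulation code, can be written as:

```latex
% Model uncertainty (discrepancy) as defined in the abstract; the metamodel line
% adds a generic metamodeling-error term and is an assumption, not the paper's exact form.
\delta(\mathbf{x}) = y_{\mathrm{true}}(\mathbf{x}) - y_{\mathrm{sim}}(\mathbf{x}),
\qquad
y_{\mathrm{sim}}(\mathbf{x}) \approx \hat{y}_{\mathrm{meta}}(\mathbf{x}) + \varepsilon_{\mathrm{meta}}(\mathbf{x})
```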

