MATERIALS INDEX
Single data points in graphs, and entries in tables on pp. 373–376, are not included here.

2011 ◽  
Vol 54 (2) ◽  
pp. 75-84 ◽  
Author(s):  
Sandeep Kalelkar ◽  
Jay Postlewaite

Cleanroom wipers have long played an indispensable role in managing contamination in controlled environments. From wiping residues off hard surfaces to applying cleaning solutions, wipers perform a variety of tasks that help maintain the cleanliness levels required in a given cleanroom environment. This makes the selection of cleanroom wipers a critical decision for any controlled environment. One common way to distinguish between cleanroom wipers of similar structural design is to compare test results across a variety of criteria, following recommended practices from organizations such as the IEST. However, these results are typically listed as single data points for a given test and are meant to indicate either "typical values" or, in some instances, target specifications. This approach is inherently limited and ineffective in assessing the true cleanliness of a given wiper product. In this study, we review the test methods used to evaluate cleanroom wipers and present a new and improved approach by which users can evaluate their cleanliness. We provide a framework by which the consistency of the cleanliness of cleanroom wipers can be assessed in a statistically relevant manner. Finally, we demonstrate the value of using the consistency of test results, rather than a singular test result, as the true measure of wiper quality.
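The kind of consistency comparison the abstract argues for can be sketched in a few lines; the brands, particle counts, and units below are hypothetical, invented purely for illustration and not taken from the study. Two wiper products can share the same "typical value" (here, the same mean particle count) while differing sharply in lot-to-lot consistency, which the coefficient of variation (CV) exposes:

```python
import statistics

def consistency_report(lot_results):
    """Summarize repeated test results for one wiper product (e.g., particle
    counts measured on successive lots) as mean, standard deviation, and
    coefficient of variation (CV, in percent)."""
    mean = statistics.mean(lot_results)
    sd = statistics.stdev(lot_results)
    return {"mean": mean, "sd": sd, "cv_pct": 100.0 * sd / mean}

# Hypothetical particle counts for two products with the same mean:
brand_a = [120, 118, 122, 119, 121]   # tight lot-to-lot spread
brand_b = [60, 200, 90, 170, 80]      # same mean, erratic lots

print(consistency_report(brand_a))  # small CV: consistent product
print(consistency_report(brand_b))  # large CV: inconsistent product
```

Ranking products by CV over many lots, rather than by a single reported value, is the kind of statistically grounded comparison the study advocates.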


2021 ◽  
Vol 5 (1) ◽  
Author(s):  
Andrew Shaffer ◽  
Elizabeth Rahn ◽  
Kenneth Saag ◽  
Amy Mudano ◽  
Angelo Gaffo

Abstract
Background: Previous studies have noted significant variation in serum urate (sUA) levels, and it is unknown how this influences the accuracy of hyperuricemia classification based on single data points. Despite this known variability, hyperuricemic patients are often used as a control group in gout studies. Our objective was to determine the accuracy of hyperuricemia classifications based on single data points versus multiple data points, given the degree of variability observed with serial measurements of sUA.
Methods: Data were analyzed from a cross-over clinical trial of urate-lowering therapy in young adults without a gout diagnosis. In the control phase, the sUA levels used for this analysis were collected at 2–4 week intervals. The mean coefficient of variation for sUA was determined, as were rates of conversion between normouricemia (sUA ≤ 6.8 mg/dL) and hyperuricemia (sUA > 6.8 mg/dL).
Results: Mean study participant (n = 85) age was 27.8 ± 7.0 years, with 39% female participants and 41% African-American participants. The mean sUA coefficient of variation was 8.5% ± 4.9% (range 1% to 23%). There was no significant difference in variation between men and women, or between participants who were initially normouricemic and those who were initially hyperuricemic. Among those initially normouricemic (n = 72), 21% converted to hyperuricemia during at least one subsequent measurement. The subgroup with initial sUA < 6.0 mg/dL (n = 54) was much less likely to have future values in the range of hyperuricemia than the group with screening sUA values between 6.0 and 6.8 mg/dL (n = 18) (7% vs 39%, p = 0.0037). Of the participants initially hyperuricemic (n = 13), 46% were later normouricemic during at least one measurement.
Conclusion: Single sUA measurements were unreliable for hyperuricemia classification due to spontaneous variation. If a single measurement must be used for classification, participants with an sUA < 6.0 mg/dL were less likely to show future hyperuricemic measurements, so this could be considered a safer threshold for ruling out intermittent hyperuricemia from a single measurement point.
Trial registration: Data from parent study ClinicalTrials.gov Identifier: NCT02038179.
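The effect reported here, that single measurements misclassify and that values below 6.0 mg/dL are a safer threshold, can be reproduced qualitatively in a toy simulation. This sketch is not the study's analysis: it simply assumes normally distributed measurement-to-measurement variation at roughly the study's mean CV of 8.5% and counts how often a single draw lands on the wrong side of the 6.8 mg/dL threshold:

```python
import random

def misclassification_rate(true_mean, cv_pct=8.5, threshold=6.8,
                           n_draws=10_000, seed=1):
    """Fraction of single measurements classified differently from the
    person's long-run mean, assuming Gaussian variation with the given CV."""
    rng = random.Random(seed)
    sd = true_mean * cv_pct / 100.0
    truly_high = true_mean > threshold
    wrong = sum((rng.gauss(true_mean, sd) > threshold) != truly_high
                for _ in range(n_draws))
    return wrong / n_draws

# A person averaging 6.5 mg/dL is often flagged hyperuricemic on a single
# draw; a person averaging 5.5 mg/dL almost never is.
print(misclassification_rate(6.5))
print(misclassification_rate(5.5))
```

Under these assumptions the misclassification risk is concentrated just below the threshold, which mirrors the study's finding that sUA < 6.0 mg/dL rarely converts to hyperuricemia on later draws.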


2020 ◽  
Author(s):  
Andrew Shaffer ◽  
Elizabeth Rahn ◽  
Kenneth Saag ◽  
Amy Mudano ◽  
Angelo Gaffo

Abstract
Background: Previous studies have noted significant variation in serum urate (sUA) levels, and it is unknown how this influences the accuracy of hyperuricemia classification based on single data points. Despite this known variability, hyperuricemic patients are often used as a control group in gout studies. Our objective was to determine the accuracy of hyperuricemia classifications based on single data points versus multiple data points, given the degree of variability observed with serial measurements of sUA.
Methods: Data were analyzed from a cross-over clinical trial of urate-lowering therapy in young adults without a gout diagnosis. In the control phase, the sUA levels used for this analysis were collected at 2–4 week intervals. The mean coefficient of variation for sUA was determined, as were rates of conversion between normouricemia (sUA ≤ 6.8 mg/dL) and hyperuricemia (sUA > 6.8 mg/dL).
Results: Mean study participant (n = 85) age was 27.8 ± 7.0 years, with 39% female participants and 41% African-American participants. The mean sUA coefficient of variation was 8.5% ± 4.9% (range 1% to 23%). There was no significant difference in variation between men and women, or between participants who were initially normouricemic and those who were initially hyperuricemic. Among those initially normouricemic (n = 72), 15% converted to hyperuricemia during at least one subsequent measurement. The subgroup with initial sUA < 6.0 mg/dL (n = 54) was much less likely to have future values in the range of hyperuricemia than the group with screening sUA values between 6.0 and 6.8 mg/dL (n = 18) (7% vs 39%, p = 0.0037). Of the participants initially hyperuricemic (n = 13), 46% were later normouricemic during at least one measurement.
Conclusion: Single sUA measurements were unreliable for hyperuricemia classification due to spontaneous variation. Those with an sUA < 6.0 mg/dL were less likely to show future hyperuricemic measurements, so this could be considered a safer threshold for ruling out intermittent hyperuricemia from a single measurement point.
Trial registration: Data from parent study ClinicalTrials.gov Identifier: NCT02038179


Author(s):  
Michal Kaut

Abstract
In this paper, we present and compare several methods for generating scenarios for stochastic-programming models by direct selection from historical data. The methods range from standard sampling and k-means, through iterative sampling-based selection methods, to a new moment-based optimization approach. We compare the methods on a simple portfolio-optimization model and show how to use them in situations where we select whole sequences from the data instead of single data points.
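A minimal sketch of one of the approaches named above, selecting scenarios directly from historical data via k-means, might look as follows; the return series and function name are invented, and a real scenario generator would of course work in several dimensions. Each cluster contributes the historical observation closest to its centroid as a scenario, with probability equal to the cluster's share of the data:

```python
import random

def select_scenarios(data, k, iters=50, seed=0):
    """1-D k-means over historical observations, then direct selection:
    each cluster is represented by the actual data point closest to its
    centroid, with probability proportional to the cluster size."""
    rng = random.Random(seed)
    centers = rng.sample(data, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x in data:
            nearest = min(range(k), key=lambda j: abs(x - centers[j]))
            clusters[nearest].append(x)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    scenarios = []
    for c, ctr in zip(clusters, centers):
        if not c:
            continue
        rep = min(c, key=lambda x: abs(x - ctr))      # a real historical point
        scenarios.append((rep, len(c) / len(data)))   # (value, probability)
    return scenarios

returns_history = [0.01, 0.02, 0.015, -0.03, -0.025, 0.05, 0.045, 0.012]
print(select_scenarios(returns_history, k=3))
```

Selecting whole sequences instead of single data points, as the abstract describes, amounts to running the same procedure on vectors of consecutive observations under a suitable distance.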


1995 ◽  
Vol 61 (1) ◽  
pp. 95-101 ◽  
Author(s):  
P. C. N. Groenewald ◽  
A. V. Ferreira ◽  
H. J. van der Merwe ◽  
S. C. Slippers

Abstract
The milk production of 63 5-year-old Merino ewes was measured over a 16-week period after lambing. The purpose was to find a suitable mathematical model to represent the lactation curve of Merino sheep and to estimate the parameters of the model for an individual ewe from a single data point in early lactation. Three models were considered: the three-parameter Wood model, y_n = n^b exp(a + cn); the four-parameter Morant model, y_n = exp(a + bn + cn^2 + d/n); and the six-parameter Grossman model, y_n = a1 b1 [1 - tanh^2(b1(n - c1))] + a2 b2 [1 - tanh^2(b2(n - c2))]. The Grossman model was found to be inappropriate for the available data, while there seems to be little difference in the suitability of the other two models. The Wood and Morant models both seem adequate to represent the lactation curve. A pattern in the estimated residuals suggests possible autocorrelation in the errors, but this is inconclusive due to the limited number of data points per animal. The correlation between the estimated parameters of the model and the daily yield measured during the 1st week of lactation enabled us to use linear regression to estimate the lactation curve of an individual animal based on the 1st week's yield. Confidence and prediction intervals for the yield during the rest of the lactation period may then also be constructed. This makes it possible to extend incomplete milk records for use in genetic evaluation, formulation of rations and economic evaluations.
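To see why regression on early-lactation data can pin down the whole curve, note that the Wood model is log-linear: taking logs of y_n = n^b exp(a + cn) gives ln y = a + b ln n + c n, which is linear in (a, b, c). The sketch below uses invented parameter values, not the study's estimates, and recovers the parameters exactly from three weekly yields by solving the resulting linear system:

```python
import math

def wood(n, a, b, c):
    """Wood lactation curve: y_n = n**b * exp(a + c*n)."""
    return n ** b * math.exp(a + c * n)

def fit_wood_exact(points):
    """Solve for (a, b, c) from three (week, yield) pairs: on the log scale
    the model is linear, ln y = a + b*ln n + c*n, so three points give a
    3x3 linear system, solved here by Gaussian elimination with pivoting."""
    rows = [[1.0, math.log(n), float(n), math.log(y)] for n, y in points]
    for i in range(3):                      # forward elimination
        p = max(range(i, 3), key=lambda r: abs(rows[r][i]))
        rows[i], rows[p] = rows[p], rows[i]
        for r in range(i + 1, 3):
            f = rows[r][i] / rows[i][i]
            rows[r] = [x - f * y for x, y in zip(rows[r], rows[i])]
    sol = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):                     # back substitution
        sol[i] = (rows[i][3] - sum(rows[i][j] * sol[j]
                                   for j in range(i + 1, 3))) / rows[i][i]
    return tuple(sol)

# Invented parameters for illustration; recover them from three points.
a, b, c = 0.2, 0.3, -0.05
pts = [(n, wood(n, a, b, c)) for n in (1, 4, 10)]
print(fit_wood_exact(pts))  # approximately (0.2, 0.3, -0.05)
```

With noisy data and more than three points, the same log-linear structure makes ordinary least squares applicable, which is what lets a 1st-week yield plus regression coefficients predict the rest of the lactation curve.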


Mathematics ◽  
2021 ◽  
Vol 9 (19) ◽  
pp. 2390
Author(s):  
Peihuang Huang ◽  
Pei Yao ◽  
Zhendong Hao ◽  
Huihong Peng ◽  
Longkun Guo

Witnessing the tremendous development of machine learning technology, emerging applications impose the challenge of using domain knowledge to improve the accuracy of clustering, since clustering, despite its advantage of fast processing, can suffer from a compromised accuracy rate. In this paper, we model domain knowledge (i.e., background knowledge or side information) arising in some applications as must-link and cannot-link sets, so that it can be combined with k-means for better accuracy. We first propose an algorithm for constrained k-means that considers only must-links. The key idea is to treat a set of data points constrained by must-links as a single data point with a weight equal to the sum of the weights of the constrained points. Then, for clustering the data points with cannot-links, we employ minimum-weight matching to assign the data points to the existing clusters. Finally, we carried out numerical simulations to evaluate the proposed algorithms on the UCI datasets, demonstrating that our method outperforms previous algorithms for constrained k-means, as well as traditional k-means, in clustering accuracy, albeit with a slightly longer practical runtime.
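The must-link step described above can be sketched as follows. This is an illustrative reconstruction, not the paper's code: the function names are invented, and placing each merged pseudo-point at the group's weighted centroid is our assumption (the abstract specifies only that its weight equals the sum of the members' weights). After merging, any weighted k-means can run unchanged on the reduced point set:

```python
def merge_must_links(points, weights, must_links):
    """Collapse every must-link group into a single weighted pseudo-point.
    `points` is a list of coordinate tuples, `weights` their weights, and
    `must_links` a list of index pairs that must share a cluster."""
    parent = list(range(len(points)))

    def find(i):                      # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i, j in must_links:           # union the linked indices
        parent[find(i)] = find(j)

    groups = {}
    for idx in range(len(points)):
        groups.setdefault(find(idx), []).append(idx)

    merged = []
    for members in groups.values():
        w = sum(weights[m] for m in members)
        centroid = tuple(sum(points[m][d] * weights[m] for m in members) / w
                         for d in range(len(points[0])))
        merged.append((centroid, w))  # pseudo-point carries the summed weight
    return merged

pts = [(0.0, 0.0), (2.0, 0.0), (10.0, 10.0)]
print(merge_must_links(pts, [1.0, 1.0, 1.0], [(0, 1)]))
```

Because transitively linked points end up in the same union-find group, chains of must-links collapse into one pseudo-point, which is exactly what guarantees they can never be split across clusters.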


Author(s):  
Zenji Horita ◽  
Ryuzo Nishimachi ◽  
Takeshi Sano ◽  
Minoru Nemoto

Absorption correction is often required in quantitative x-ray microanalysis of thin specimens using the analytical electron microscope. For such correction, it is convenient to use the extrapolation method [1], because the thickness, density and mass absorption coefficient are not needed in the method: the characteristic x-ray intensities measured for the analysis are the only requirement for the absorption correction. However, to achieve extrapolation, it is imperative to obtain more than two data points at different thicknesses of identical composition. Thus, the method encounters difficulty in analyzing a region comparable to the beam size, or a specimen with uniform thickness. The purpose of this study is to modify the method so that extrapolation becomes feasible under such limited conditions. The applicability of the new form is examined using a standard sample, and it is then applied to the quantification of phases in a Ni-Al-W ternary alloy. The earlier equation for the extrapolation method was formulated based on the facts that the magnitude of x-ray absorption increases with increasing thickness, and that the intensity of a characteristic x-ray exhibiting negligible absorption in the specimen can be used as a measure of thickness.
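The original extrapolation scheme the abstract refers to can be sketched numerically; the intensity values below are invented and the helper name is hypothetical. A measured intensity ratio is regressed against the intensity of a weakly absorbed characteristic line (which serves as the thickness measure), and the intercept of the fitted line at zero intensity, i.e., zero thickness, gives the absorption-free ratio:

```python
def extrapolate_to_zero_thickness(thickness_proxy, ratios):
    """Least-squares line through (proxy intensity, intensity ratio) pairs,
    returning the intercept: the ratio extrapolated to zero thickness,
    where absorption vanishes."""
    n = len(ratios)
    mx = sum(thickness_proxy) / n
    my = sum(ratios) / n
    sxx = sum((x - mx) ** 2 for x in thickness_proxy)
    sxy = sum((x - mx) * (y - my)
              for x, y in zip(thickness_proxy, ratios))
    slope = sxy / sxx
    return my - slope * mx  # value of the fitted line at proxy = 0

# Invented measurements: the ratio drops as the specimen gets thicker.
proxy = [100.0, 200.0, 300.0, 400.0]   # counts of a weakly absorbed line
ratio = [0.95, 0.90, 0.85, 0.80]       # measured intensity ratios
print(extrapolate_to_zero_thickness(proxy, ratio))
```

This makes the constraint discussed in the abstract concrete: the fit needs several points spanning a range of thicknesses, which is precisely what a beam-sized region or a uniformly thick specimen cannot provide.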


2015 ◽  
Vol 14 (4) ◽  
pp. 165-181 ◽  
Author(s):  
Sarah Dudenhöffer ◽  
Christian Dormann

Abstract. The purpose of this study was to replicate the dimensions of the customer-related social stressors (CSS) concept across service jobs, to investigate their consequences for service providers' well-being, and to examine emotional dissonance as a mediator. Data from 20 studies covering different service jobs (N = 4,199) were integrated into a single data set and meta-analyzed. Confirmatory factor analyses and exploratory principal component analysis confirmed four CSS scales: disproportionate expectations, verbal aggression, ambiguous expectations, and disliked customers. These CSS scales were associated with burnout and job satisfaction. Most of the effects were partially mediated by emotional dissonance. Further analyses revealed that differences among jobs exist with regard to the factor solution. However, associations between CSS and outcomes are largely invariant across service jobs.


1997 ◽  
Vol 78 (02) ◽  
pp. 855-858 ◽  
Author(s):  
Armando Tripodi ◽  
Veena Chantarangkul ◽  
Marigrazia Clerici ◽  
Barbara Negri ◽  
Pier Mannuccio Mannucci

Summary
A key issue for the reliable use of new devices for the laboratory control of oral anticoagulant therapy with the INR is their conformity to the calibration model. In the past, their adequacy has mostly been assessed empirically, without reference to the calibration model and the use of International Reference Preparations (IRP) for thromboplastin. In this study we reviewed the requirements to be fulfilled and applied them to the calibration of a new near-patient testing device (TAS, Cardiovascular Diagnostics) which uses thromboplastin-containing test cards for determination of the INR. On each of 10 working days, citrated whole blood and plasma samples were obtained from 2 healthy subjects and 6 patients on oral anticoagulants. PT testing on whole blood and plasma was done with the TAS, with parallel testing of plasma by the manual technique with the IRP CRM 149S. Conformity to the calibration model was judged satisfactory if the following requirements were met: (i) there was a linear relationship between paired log-PTs (TAS vs CRM 149S); (ii) the regression line drawn through the patients' data points passed through those of the normals; (iii) the precision of the calibration, expressed as the CV of the slope, was <3%. A good linear relationship was observed for the calibration plots for plasma and whole blood (r = 0.98). Regression lines drawn through the patients' data points passed through those of the normals. The CVs of the slope were in both cases 2.2%, and the ISIs were 0.965 and 1.000 for whole blood and plasma, respectively. In conclusion, our study shows that near-patient testing devices can be considered reliable tools to measure the INR in patients on oral anticoagulants, and it provides guidelines for their evaluation.
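The precision requirement above translates directly into a short calculation; the paired prothrombin times below are invented, and fitting by ordinary least squares is a simplification (the formal calibration model uses orthogonal regression on the log-PTs). The sketch fits a line through paired log-PTs and checks whether the CV of the slope is below the 3% acceptance limit:

```python
import math

def calibration_precision(ref_pts, test_pts, max_cv_pct=3.0):
    """Fit a least-squares line through paired log prothrombin times
    (test device vs reference method) and report (slope, CV of slope,
    meets-precision-requirement). Simplified sketch: ordinary rather
    than orthogonal regression."""
    xs = [math.log(p) for p in ref_pts]
    ys = [math.log(p) for p in test_pts]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    rss = sum((y - intercept - slope * x) ** 2 for x, y in zip(xs, ys))
    se_slope = math.sqrt(rss / (n - 2) / sxx)     # standard error of slope
    cv_pct = 100.0 * se_slope / slope
    return slope, cv_pct, cv_pct < max_cv_pct

# Invented paired prothrombin times (seconds): reference vs device.
ref = [12.0, 15.0, 20.0, 28.0, 36.0, 48.0]
dev = [12.1, 15.2, 19.8, 28.5, 35.6, 48.9]
print(calibration_precision(ref, dev))
```

A slope close to 1 with a small CV, as in this invented example, is the pattern the study reports for the TAS device (slope CVs of 2.2%, within the <3% requirement).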

