Long-term retrospective control procedures for monitoring analytical performance of laboratory instruments.

1984 ◽  
Vol 30 (1) ◽  
pp. 145-149 ◽  
Author(s):  
M L Gozzo ◽  
G Barbaresi ◽  
G Giocoli ◽  
B Zappacosta ◽  
C Zuppi

Abstract We propose a statistical procedure for long-term quality control of laboratory instruments, comprising daily, day-to-day, and monthly evaluations. The procedure is based on a unique and unequivocal interpretation of five control-serum results through calculation of a Reliability Index and further manipulation of this unitless parameter. The method, which we have tested during the past two years, allows analytical performance to be monitored and compared with the results of interlaboratory surveys. The monthly analytical variability, expressed as "total error," is an indicator of the clinical usefulness of analytical results.
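The abstract does not spell out how the Reliability Index or the "total error" is computed, so the sketch below only illustrates the common convention for a monthly total-error estimate, TE = |bias%| + 1.96 · CV%, applied to one control serum; the function name and the data are invented for the example.

```python
# Minimal sketch of a monthly total-error estimate for one control serum,
# using the common convention TE = |bias%| + 1.96 * CV%. This is NOT the
# paper's Reliability Index, whose formula the abstract does not give.
import statistics

def monthly_total_error(results, target):
    """Percent total error from a month of control-serum results."""
    mean = statistics.mean(results)
    bias_pct = 100 * (mean - target) / target        # systematic component
    cv_pct = 100 * statistics.stdev(results) / mean  # random component
    return abs(bias_pct) + 1.96 * cv_pct

# Example: daily glucose control values against a 5.50 mmol/L target
print(f"TE = {monthly_total_error([5.4, 5.6, 5.5, 5.3, 5.7, 5.5], 5.50):.1f}%")
```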

2006 ◽  
Vol 96 (11) ◽  
pp. 584-589 ◽  
Author(s):  
Frits Haverkate ◽  
Cornelis Kluft ◽  
Piet Meijer

Summary To achieve reliable analytical quality for both monitoring and diagnostic testing, laboratories need to fulfil the widely accepted analytical performance goals based on the biological variation of the analytes being tested. Not only the short-term analytical performance, which is regularly assessed by internal quality control procedures, is important, but also the long-term analytical performance. To assess the long-term analytical performance, data obtained from an external quality assessment programme can be used. In this study we used the evaluation model designed by the ECAT Foundation for the assessment of long-term analytical performance, including imprecision, bias and total analytical error. The model was applied to the data from 136 different laboratories for the assay of antithrombin (activity), protein C (activity and antigen) and protein S (activity, total and free antigen). The imprecision (median; range), reflected by the long-term analytical coefficient of variation (LCVA), was lowest for antithrombin (7.6%; 2.6 – 43.8%) and highest for protein S activity (17.2%; 4.3 – 88.6%). The same pattern was observed for bias and total error (antithrombin: 3.8%; 0.3 – 17.1% and 9.1%; 3.4 – 34.3%, respectively; protein S activity: 12.8%; 3.1 – 34.8% and 24.5%; 9.9 – 87.0%, respectively). For the majority of the laboratories (70 – 85%), imprecision contributes considerably more to the total error than bias; however, the effect of bias on the analytical quality is not negligible. Assays for antithrombin, protein C and protein S are mainly used for diagnostic testing. About 70 – 100% of the laboratories can fulfil the desirable performance goal for imprecision; the desirable performance goal for bias was reached by 50 – 95% of the laboratories. In all cases the highest number of laboratories fulfilling the performance goals was obtained for the protein C variables. To improve the analytical quality in assays of antithrombin, protein C and protein S, it is highly recommended that imprecision (non-systematic failures) be reduced first; however, the effect of bias (systematic failures) on the analytical quality should not be neglected. A useful tool for determining the imprecision (LCVA) and bias is the long-term analytical performance evaluation model as used by the ECAT Foundation.
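The performance goals referred to above are conventionally derived from biological variation (within-subject CVI and between-subject CVG). As a hedged illustration, the sketch below encodes the widely used "desirable" goals attributed to Fraser; the CV values in the example are placeholders, not this study's data.

```python
# Hedged sketch of the widely used biological-variation performance goals:
#   desirable imprecision: CVa <= 0.5 * CVI
#   desirable bias:        |B| <= 0.25 * sqrt(CVI^2 + CVG^2)
#   desirable total error: TE  <= 1.65 * (0.5 * CVI) + 0.25 * sqrt(CVI^2 + CVG^2)
# CVI/CVG below are illustrative placeholders, not values from this study.
import math

def desirable_goals(cvi, cvg):
    imprecision = 0.5 * cvi
    bias = 0.25 * math.sqrt(cvi ** 2 + cvg ** 2)
    return imprecision, bias, 1.65 * imprecision + bias

cv_goal, bias_goal, te_goal = desirable_goals(5.2, 21.0)  # assumed CVs, %
print(f"goals: CVa <= {cv_goal:.1f}%, bias <= {bias_goal:.1f}%, TE <= {te_goal:.1f}%")
```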


2013 ◽  
Vol 30 (3) ◽  
pp. 609-625 ◽  
Author(s):  
Giuseppe M. R. Manzella ◽  
Marco Gambetta

Abstract Near-real-time quality control procedures for temperature profiles collected from ships of opportunity were implemented during the 1980s in oceans across the world and from the 1990s in the Mediterranean. In this sea, the procedures were originally based on seven steps (detection of the end of profile, gross range check, position control, elimination of spikes, Gaussian smoothing and resampling at 1-m intervals, general malfunction control, and comparison with climatology), complemented by initial and final visual checks. The quality of data derived from a comparison with historical data (namely, climatology) depends on the availability of a huge amount of data that can statistically represent the mean characteristics of the seawater. A significant amount of data has since been collected, and the existing temperature database in the Mediterranean can now provide more information on temporal and spatial variability at monthly time scales and mesoscales, so an improved procedure for data quality control has now been adopted. New "best" estimates of monthly temperature profiles are calculated by using a maximum likelihood method. It has been found that more than one "best estimate" temperature can be defined in particular areas and at particular depths, as a consequence of climate variability. Additional near-real-time control procedures have been included in order to provide information on the long-term variability associated with the data. This information is included in metafiles to be used for reanalysis and for studies on long-term variability and change.
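As a concrete illustration of two of the seven steps named above (the gross range check and spike elimination), the sketch below uses invented thresholds; the operational Mediterranean values are not given in the abstract.

```python
# Illustrative sketch of two near-real-time checks for a temperature profile.
# Thresholds are assumptions for the example, not the operational values.
def gross_range_check(temps, lo=-2.0, hi=35.0):
    """Flag each temperature as plausible seawater (deg C) or not."""
    return [lo <= t <= hi for t in temps]

def despike(temps, max_jump=2.0):
    """Replace single-point spikes with the mean of their neighbours."""
    out = list(temps)
    for i in range(1, len(temps) - 1):
        if (abs(temps[i] - temps[i - 1]) > max_jump and
                abs(temps[i] - temps[i + 1]) > max_jump):
            out[i] = 0.5 * (temps[i - 1] + temps[i + 1])
    return out

profile = [19.8, 19.7, 26.4, 19.5, 18.9, 17.2]  # spike at index 2
print(despike(profile))  # -> [19.8, 19.7, 19.6, 19.5, 18.9, 17.2]
```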


1987 ◽  
Vol 33 (12) ◽  
pp. 2267-2271 ◽  

Abstract A method for measuring glycated hemoglobin (Hb A1c) and an accompanying method of specimen transport to a central laboratory were developed for the multicenter Diabetes Control and Complications Trial (DCCT). In the DCCT, results for Hb A1c are used to assess chronic glycemic control for data collection and patient management. During the feasibility phase of the trial, central (CHL) and backup laboratories using automated, "high-performance" ion-exchange liquid-chromatographic methods were established. Whole-blood samples were stored (4 °C) at each of the 21 clinical centers for up to 72 h before air-express shipment to the CHL. Quality-control procedures included daily analyses of three calibration specimens. A pooled hemolysate was assayed frequently over time as a long-term quality control (LTQC). After 18 months, within- and between-run CVs were less than 6%. Mean values for split duplicate samples assayed in a masked fashion at the CHL were nearly identical. LTQC results indicated no significant assay drift over time. More than 6000 samples were assayed (mean interval between obtaining the blood sample and completing the assay: less than six days). Hb A1c evidently can be precisely and reliably measured in the context of a long-term, multicenter trial such as the DCCT.
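The abstract reports that LTQC results showed no significant assay drift; one simple way to test this, sketched below with made-up numbers, is to regress LTQC results on run index and inspect the slope.

```python
# Hedged sketch: screening a long-term quality-control (LTQC) pool for assay
# drift via an ordinary least-squares slope over run number. Data invented.
import statistics

def drift_slope(values):
    """OLS slope of LTQC results vs. run index (units per run)."""
    n = len(values)
    x_bar, y_bar = (n - 1) / 2, statistics.mean(values)
    sxy = sum((x - x_bar) * (y - y_bar) for x, y in enumerate(values))
    sxx = sum((x - x_bar) ** 2 for x in range(n))
    return sxy / sxx

ltqc = [8.02, 7.98, 8.05, 8.01, 7.97, 8.03, 8.00]  # Hb A1c, %
print(f"drift: {drift_slope(ltqc):+.4f} %-units per run")
```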


1997 ◽  
Vol 43 (11) ◽  
pp. 2164-2168 ◽  
Author(s):  
Patricia C Fallest-Strobl ◽  
Elin Olafsdottir ◽  
Donald A Wiebe ◽  
James O Westgard

Abstract The National Cholesterol Education Program (NCEP) performance specifications for methods that measure triglycerides, HDL-cholesterol, and LDL-cholesterol have been evaluated by deriving operating specifications from the NCEP analytical total error requirements and the clinical requirements for interpretation of the tests. We determined the maximum imprecision and inaccuracy that would be allowable to control routine methods with commonly used single and multirule quality-control procedures having 2 and 4 control measurements per run, and then compared these estimates with the NCEP guidelines. The NCEP imprecision specifications meet the operating imprecision necessary to assure meeting the NCEP clinical quality requirements for triglycerides and HDL-cholesterol but not for LDL-cholesterol. More importantly, the NCEP imprecision specifications are not adequate to assure meeting the NCEP analytical total error requirements for any of these three tests. Our findings indicate that the NCEP recommendations fail to adequately consider the quality-control requirements necessary to detect medically important systematic errors.
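The quality-planning logic behind this evaluation can be made concrete with Westgard's critical systematic error, the shift (in SD multiples) that a QC procedure must be able to detect. The sketch below uses the published NCEP total-error goal for triglycerides (15%) with assumed bias and CV values.

```python
# Sketch of the critical systematic error from quality-planning theory:
#   Delta-SE_crit = (TEa - |bias|) / CV - 1.65   (5% allowable defect rate)
# TEa = 15% is the NCEP triglyceride goal; bias and CV below are assumed.
def critical_systematic_error(tea_pct, bias_pct, cv_pct):
    return (tea_pct - abs(bias_pct)) / cv_pct - 1.65

shift = critical_systematic_error(tea_pct=15.0, bias_pct=2.0, cv_pct=4.0)
print(f"the QC procedure must detect a shift of {shift:.2f} SDs")  # -> 1.60
```

The smaller this critical shift, the harder it is for simple control rules with few control measurements per run to detect it reliably, which is the crux of the authors' criticism.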


2021 ◽  
Author(s):  
Orestis Faklaris ◽  
Leslie Bancel-Vallee ◽  
Aurelien Dauphin ◽  
Baptiste Monterroso ◽  
Perrine Frere ◽  
...  

Reliable, reproducible and comparable results are what biology requires from microscopy. To achieve that level of confidence, monitoring the stability of microscope performance over time with standardized quality-testing routines is essential for mining quantitative data. Three levels of microscope quality control procedures should be considered: i) use of accessible and affordable tools and samples; ii) execution of easy and fast, preferably automated, acquisition protocols; iii) analysis of the data in the most automated way possible, with metrics adequate for long-term monitoring. In this paper, we test the acquisition protocols on the most commonly used microscopy techniques (wide-field, spinning-disk and confocal microscopy) with simple quality control tools. Seven protocols define metrics for measuring the lateral and axial resolution (point-spread function) of the system, field flatness, chromatic aberrations and co-registration, illumination power and its stability, stage drift and positioning repeatability, and finally the temporal and spatial noise sources of camera detectors. We designed an ImageJ/Fiji Java plugin named MetroloJ_QC to incorporate the identified metrics and automate the data processing for the analysis. After processing and comparing the data of microscopes from more than ten imaging facilities, we test the robustness of the metrics and the protocols by determining experimental limit values. Our results give a first extensive characterization of the quality control procedures of a light microscope, with automated data processing and experimental limit values that core-facility staff and researchers can use to monitor microscope performance over time.
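To make two of the metrics concrete, the sketch below (not the MetroloJ_QC implementation) computes a simple field-flatness ratio and the Abbe diffraction limit as a reference against which a measured PSF FWHM can be judged; all numbers are invented.

```python
# Illustrative sketch of two quality metrics; not the MetroloJ_QC code.
def field_flatness(region_means):
    """Percent uniformity: lowest regional mean intensity over highest."""
    return 100.0 * min(region_means) / max(region_means)

def abbe_lateral_limit(wavelength_nm, numerical_aperture):
    """Diffraction-limited lateral resolution, d = lambda / (2 * NA)."""
    return wavelength_nm / (2.0 * numerical_aperture)

# Example: centre + four corners of a flat-field image; 520 nm light, NA 1.4
print(f"flatness: {field_flatness([9800, 9100, 8900, 9300, 9050]):.1f} %")
print(f"Abbe limit: {abbe_lateral_limit(520, 1.4):.0f} nm")  # ~186 nm
```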


2002 ◽  
Vol 48 (7) ◽  
pp. 1011-1015 ◽  
Author(s):  
Piet Meijer ◽  
Moniek PM de Maat ◽  
Cornelis Kluft ◽  
Frits Haverkate ◽  
Hans C van Houwelingen

Abstract Background: It is important for a laboratory to know the stability of performance of laboratory tests over time. The aim of this study was to adapt from the field of clinical chemistry a method to assess the long-term analytical performance of hemostasis field methods. Methods: The linear regression model was used to compare the laboratory results with the consensus mean value of a survey. This model was applied to plasma antithrombin activity using the data for 82 laboratories, collected between 1996 and 1999 in the European Concerted Action on Thrombosis (ECAT) external quality assessment program. The long-term total, random, and systematic errors were calculated. The variables introduced to define the long-term performance in this model were the long-term analytical CV (LCVa) and the analytical critical difference (ACD), which indicates the minimum difference necessary between two samples measured on a long-term time-scale to consider them statistically significantly different. Results: The systematic error (bias) ranged from 4.5 to 103 units/L. The random error ranged from 24.4 to 242 units/L. For the majority of the laboratories, random error was the main component (>75%) of the total error. The LCVa, after adjustment for the contribution of the bias, ranged from 2.8% to 48%. The ACD ranged from 78 to 1290 units/L with a median value of 190 units/L. No statistically significant differences were observed for either LCVa or ACD between the two different measurement principles for antithrombin activity based on the inhibition of either thrombin or factor Xa. Conclusions: This linear regression model is useful for assessing the total error, random error, and bias for hemostasis field methods. The LCVa and ACD for measurement on a long-term time-scale appear to be useful for assessing the long-term analytical performance.
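A hedged sketch of the regression approach described above follows; the published model's details may differ. Lab results are regressed on survey consensus means, the residual scatter estimates the random error, and the critical difference takes the usual form ACD = 1.96 · √2 · SD.

```python
# Hedged sketch: long-term performance from EQA history via linear
# regression of lab results on consensus means. Data are invented.
import math
import statistics

def long_term_model(consensus, lab):
    x_bar, y_bar = statistics.mean(consensus), statistics.mean(lab)
    sxx = sum((x - x_bar) ** 2 for x in consensus)
    slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(consensus, lab)) / sxx
    intercept = y_bar - slope * x_bar
    residuals = [y - (intercept + slope * x) for x, y in zip(consensus, lab)]
    sd = statistics.stdev(residuals)              # long-term random error
    return sd, 1.96 * math.sqrt(2) * sd           # SD and critical difference

sd, acd = long_term_model(
    [950, 980, 1000, 1010, 1030, 990, 970, 1005],   # consensus, units/L
    [940, 990, 995, 1020, 1025, 1000, 955, 1010])   # one laboratory
print(f"residual SD {sd:.0f} units/L, ACD {acd:.0f} units/L")
```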


1964 ◽  
Vol 3 (03/04) ◽  
pp. 117-127 ◽  
Author(s):  
G. Wagner

Summary Quality control of medical data and judgements plays an essential part in the efforts to put medicine on a sounder basis. In the past, control procedures were neglected because of the laborious work often connected with them; now, however, modern mechanical and electronic equipment opens up the possibility of checking thoroughly with a justifiable expenditure of time and effort. Thorough checking of the data is especially necessary if statistical analyses are to be carried out, because even extremely sophisticated calculations will in reality be of little value if they rest on uncertain foundations. Scientific quality control makes use of different methods. Formal errors (e.g., slips of the pen, mistakes in coding, etc.) can often be detected by comparing the data with confidence limits or by testing for incompatibility; such controls can very well be performed by computers. Frequency distributions can occasionally be checked for plausibility; for this sort of control analysis, probability paper is best suited. The most effective way to avoid or find errors is repeated or multiple examination. Statistical models can be used to estimate the effects of defined errors on the results of an investigation. Possibilities for quality control should already be envisaged in the planning stage of an investigation.
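The two machine-checkable controls described here (comparison with confidence limits and incompatibility testing) map directly onto modern data validation; the sketch below invents its limits and rules purely for illustration.

```python
# Illustrative sketch of automated record checks: range (confidence-limit)
# checks and an incompatibility test. Limits and rules are invented.
def check_record(rec):
    errors = []
    if not 0 <= rec["age"] <= 110:
        errors.append("age outside plausible limits")
    if not 30 <= rec["weight_kg"] <= 250:
        errors.append("weight outside plausible limits")
    if rec["sex"] == "M" and rec.get("pregnant"):   # incompatible fields
        errors.append("incompatible: male and pregnant")
    return errors

print(check_record({"age": 203, "weight_kg": 70, "sex": "M", "pregnant": True}))
# -> ['age outside plausible limits', 'incompatible: male and pregnant']
```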


Author(s):  
P M G Broughton ◽  
Roger Holder ◽  
Deborah Ashby

A study has been made of the variations in monthly mean values of 10 serum constituents in subjects participating in two partly concurrent long-term epidemiological surveys. Closely similar patterns of variation were found in men in both surveys, and in men and women in one survey. During the 6 years of the study, four types of variation in the monthly mean concentrations were identified, in varying combinations: (i) abrupt changes of less than 2%, not detected by quality control procedures; (ii) a gradual drift in mean value; (iii) haphazard variations in mean values; and (iv) seasonal variations in bilirubin and urea, identical in men and women. The implications of these findings for the design of long-term epidemiological surveys, and the criteria for designating variations as seasonal, are discussed.
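One way to separate the variation types listed above, sketched below on synthetic data (not the study's), is to fit the monthly means with a linear drift plus an annual sinusoid: the fitted slope captures drift (type ii) and the sinusoid's amplitude captures seasonality (type iv).

```python
# Hedged sketch: decompose monthly means into drift + annual seasonality
# by least squares. Data are synthetic, not from the surveys above.
import math
import numpy as np

months = np.arange(72)                       # 6 years of monthly means
rng = np.random.default_rng(0)
means = 10 + 0.02 * months + 0.6 * np.cos(2 * np.pi * months / 12) \
        + rng.normal(0, 0.1, months.size)

# Design matrix: intercept, linear drift, annual cosine and sine terms
X = np.column_stack([np.ones(months.size), months,
                     np.cos(2 * np.pi * months / 12),
                     np.sin(2 * np.pi * months / 12)])
coef, *_ = np.linalg.lstsq(X, means, rcond=None)
print(f"drift {coef[1]:+.3f} per month, "
      f"seasonal amplitude {math.hypot(coef[2], coef[3]):.2f}")
```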


Author(s):  
Robert Klinck ◽  
Ben Bradshaw ◽  
Ruby Sandy ◽  
Silas Nabinacaboo ◽  
Mannie Mameanskum ◽  
...  

The Naskapi Nation of Kawawachikamach is an Aboriginal community located in northern Quebec near the Labrador border. Given the region's rich iron deposits, the Naskapi Nation has considerable experience with major mineral development, first from the 1950s to the 1980s and again in the past decade as companies implement plans for further extraction. This has raised concerns about a range of environmental and socio-economic impacts that may be caused by renewed development. These concerns have led to an interest among the Naskapi in developing a means to track community well-being over time using indicators of their own design. Exemplifying community-engaged research, this paper describes the initial development of such a tool in fall 2012: the creation of a baseline of community well-being against which mining-induced change can be identified. Its development owes much to the remarkable and sustained contribution of many key members of the Naskapi Nation. If ongoing surveying is completed on the basis of the chosen indicators, the Nation will be better positioned to recognize shifts in its well-being and to communicate these shifts to its partners. In addition, long-term monitoring will allow the Naskapi Nation to contribute to a more universal understanding of the impacts of mining on Indigenous peoples.


Author(s):  
Lindsey C Bohl

This paper examines a few of the numerous factors that may have led to increased youth turnout in the 2008 election. First, theories of voter behavior and turnout are related to courting the youth vote. Several variables perceived to affect youth turnout, such as party polarization, perceived candidate difference, voter registration, effective campaigning and mobilization, and use of the Internet, are examined. Over the past 40 years, presidential elections have failed to engage the majority of young citizens (ages 18-29) to the point that they became inclined to participate. This trend began to reverse with the 2000 election, and youth turnout reached its peak in 2008. While both short- and long-term factors played a significant role in recent elections, the high turnout among youth voters in 2008 can be largely attributed to the Obama candidacy and campaign, which mobilized young citizens in unprecedented ways.

