Effect of analytical run length on quality-control (QC) performance and the QC planning process

1997 ◽  
Vol 43 (11) ◽  
pp. 2149-2154 ◽  
Author(s):  
Curtis A Parvin ◽  
Ann M Gronowski

Abstract The performance measure traditionally used in the quality-control (QC) planning process is the probability of rejecting an analytical run when an out-of-control error condition exists. A shortcoming of this performance measure is that it doesn’t allow comparison of QC strategies that define analytical runs differently. Accommodating different analytical run definitions is straightforward if QC performance is measured in terms of the average number of patient samples to error detection, or the average number of patient samples containing an analytical error that exceeds total allowable error. Using these performance measures to investigate the impact of different analytical run definitions on QC performance demonstrates that during routine QC monitoring, the length of the interval between QC tests can have a major influence on the expected number of unacceptable results produced during the existence of an out-of-control error condition.
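As a rough illustration of the first of these measures, the average number of patient samples to error detection can be estimated by Monte Carlo simulation. The sketch below is hypothetical: the function name, the fixed interval between QC events, and the single per-event detection probability are assumptions for illustration, not the article's model.

```python
import random

def expected_patients_to_detection(p_detect, qc_interval, trials=20000):
    """Monte Carlo sketch: average number of patient samples reported
    between the onset of a persistent out-of-control condition and its
    detection at a scheduled QC event.

    p_detect    -- probability that a single QC event flags the error
    qc_interval -- patient samples tested between successive QC events
    """
    total = 0
    for _ in range(trials):
        patients = 0
        while True:
            patients += qc_interval          # a full interval is reported before QC runs
            if random.random() < p_detect:   # does this QC event detect the error?
                break
        total += patients
    return total / trials
```

Because detection follows a geometric distribution, the estimate converges to `qc_interval / p_detect`, which makes the abstract's point concrete: halving the interval between QC tests halves the expected number of patient results exposed to an undetected error.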

1997 ◽  
Vol 43 (4) ◽  
pp. 602-607 ◽  
Author(s):  
Curtis A Parvin

Abstract Numerous outcome measures can be used to characterize and compare the performance of alternative quality-control (QC) strategies. The performance measure traditionally used in the QC planning process is the probability of rejecting an analytical run when a critical out-of-control error condition exists. Another performance measure that naturally fits within the total allowable error paradigm is the probability that a reported test result contains an analytical error that exceeds the total allowable error specification. In general, the out-of-control error conditions associated with the greatest chance of reporting an unacceptable test result are unrelated to the traditionally defined “critical” error conditions. If the probability of reporting an unacceptable test result is used as the primary performance measure, worst-case QC performance can be determined irrespective of the magnitude of any out-of-control error condition that may exist, thus eliminating the need for the concept of a “critical” out-of-control error.
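The probability that a reported result contains an error exceeding the total allowable error specification can be sketched for a Gaussian measurement process. The helper below is an illustration under stated assumptions (a systematic shift `se` and imprecision `sd` expressed in the same units as `tea`), not the article's derivation.

```python
from math import erf, sqrt

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def p_unacceptable(tea, se, sd):
    """Probability that a single result's total error exceeds +/- tea
    when results are distributed N(se, sd^2):
    P(X < -tea) + P(X > tea)."""
    return phi((-tea - se) / sd) + (1.0 - phi((tea - se) / sd))
```

For example, with no systematic shift and `tea` set at 3 SDs, about 0.27% of results are unacceptable; a shift equal to `tea` itself pushes that to roughly 50%, illustrating why the worst-case probability need not occur at a traditionally defined "critical" error condition.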


2008 ◽  
Vol 54 (12) ◽  
pp. 2049-2054 ◽  
Author(s):  
Curtis A Parvin

Abstract Background: The traditional measure used to evaluate QC performance is the probability of rejecting an analytical run that contains a critical out-of-control error condition. The probability of rejecting an analytical run, however, is not affected by changes in QC-testing frequency. A different performance measure is necessary to assess the impact of the frequency of QC testing. Methods: I used a statistical model to define in-control and out-of-control processes, laboratory testing modes, and quality control strategies. Results: The expected increase in the number of unacceptable patient results reported during the presence of an undetected out-of-control error condition is a performance measure that is affected by changes in QC-testing frequency. I derived this measure for different out-of-control error conditions and laboratory testing modes and showed that a worst-case expected increase in the number of unacceptable patient results reported can be estimated. The laboratory thus has the ability to design QC strategies that limit the expected number of unacceptable patient results reported. Conclusions: To assess the impact of the frequency of QC testing on QC performance, it is necessary to move beyond thinking in terms of the probability of accepting or rejecting analytical runs. A performance measure based on the expected increase in the number of unacceptable patient results reported has the dual advantage of objectively assessing the impact of changes in QC-testing frequency and putting focus on the quality of reported patient results rather than the quality of laboratory batches.
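A back-of-the-envelope version of this performance measure can be written down under simplifying assumptions: a persistent error condition, a fixed number of patient samples between QC events, and a constant per-event detection probability. None of these come from the article's statistical model; the sketch only shows how the expected increase responds to QC-testing frequency.

```python
def expected_unacceptable_increase(qc_interval, p_detect, p_unacc_out, p_unacc_in):
    """Sketch of the expected increase in unacceptable patient results
    reported while an out-of-control condition goes undetected.

    qc_interval -- patient samples reported between successive QC events
    p_detect    -- probability a single QC event detects the error
    p_unacc_out -- P(unacceptable result) while out of control
    p_unacc_in  -- P(unacceptable result) while in control
    """
    expected_intervals = 1.0 / p_detect   # geometric mean number of intervals to detection
    return qc_interval * expected_intervals * (p_unacc_out - p_unacc_in)
```

Under these assumptions, doubling the QC-testing frequency (halving `qc_interval`) halves the expected increase, whereas the traditional run-rejection probability would be unchanged.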


Author(s):  
James O Westgard

The first essential in setting up internal quality control (IQC) of a test procedure in the clinical laboratory is to select the proper IQC procedure to implement, i.e. choosing the statistical criteria or control rules, and the number of control measurements, according to the quality required for the test and the observed performance of the method. Then the right IQC procedure must be properly implemented. This review focuses on strategies for planning and implementing IQC procedures in order to improve the quality of the IQC. A quantitative planning process is described that can be implemented with graphical tools such as power function or critical-error graphs and charts of operating specifications. Finally, a total QC strategy is formulated to minimize cost and maximize quality. A general strategy for IQC implementation is recommended that employs a three-stage design in which the first stage provides high error detection, the second stage low false rejection and the third stage prescribes the length of the analytical run, making use of an algorithm involving the average of normal patients' data.
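A power function of the kind plotted in these graphical planning tools can be approximated for a simple single-rule case. The sketch below assumes a Gaussian process and a 1_ks rule applied to n independent control measurements; it is an illustration of what a power function graph computes, not Westgard's charting method itself.

```python
from math import erf, sqrt

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def p_reject_1ks(shift_sd, n, k=3.0):
    """Probability that a 1_ks rule rejects the run: at least one of n
    control measurements falls outside +/- k SD of the target when the
    process mean has shifted by shift_sd SDs (stable imprecision)."""
    p_single = phi(-k - shift_sd) + (1.0 - phi(k - shift_sd))
    return 1.0 - (1.0 - p_single) ** n
```

Evaluating this over a range of `shift_sd` values traces a power curve: error detection rises with the size of the shift and with n, while the value at `shift_sd = 0` gives the false-rejection rate, the trade-off the three-stage design balances.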


2017 ◽  
Vol 63 (5) ◽  
pp. 1022-1030 ◽  
Author(s):  
Martín Yago

Abstract BACKGROUND QC planning based on risk management concepts can reduce the probability of harming patients due to an undetected out-of-control error condition. It does this by selecting appropriate QC procedures to decrease the number of erroneous results reported. The selection can be easily made by using published nomograms for simple QC rules when the out-of-control condition results in increased systematic error. However, increases in random error also occur frequently and are difficult to detect, which can result in erroneously reported patient results. METHODS A statistical model was used to construct charts for the 1ks and X/χ2 rules. The charts relate the increase in the number of unacceptable patient results reported due to an increase in random error with the capability of the measurement procedure. They thus allow for QC planning based on the risk of patient harm due to the reporting of erroneous results. RESULTS 1ks Rules are simple, all-around rules. Their ability to deal with increases in within-run imprecision is minimally affected by the possible presence of significant, stable, between-run imprecision. X/χ2 rules perform better when the number of controls analyzed during each QC event is increased to improve QC performance. CONCLUSIONS Using nomograms simplifies the selection of statistical QC procedures to limit the number of erroneous patient results reported due to an increase in analytical random error. The selection largely depends on the presence or absence of stable between-run imprecision.
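As a minimal illustration of the 1ks rule family discussed above, a QC event can flag a run when any single control observation deviates from its target by more than k standard deviations. The function name and signature are assumptions for this sketch, not the article's implementation.

```python
def rule_1ks(controls, target, sd, k=3.0):
    """1_ks control rule: reject the run if any single control
    observation lies more than k SDs from the target value.

    controls -- control measurements from one QC event
    target   -- assigned control mean
    sd       -- stable analytical standard deviation
    """
    return any(abs(x - target) > k * sd for x in controls)
```

With k = 3 this is the common 1_3s rule; increases in random error widen the spread of control observations and so raise the chance that one of them trips the limit.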


Author(s):  
Lokesh Kumar Sharma ◽  
Rashmi Rasi Datta ◽  
Neera Sharma

Abstract Objectives Stringent quality control is an essential requisite of diagnostic laboratories to deliver consistent results. Measures used to assess the performance of a clinical chemistry laboratory are internal quality control and the external quality assurance scheme (EQAS). However, the number of errors cannot be measured by the above but can be quantified by sigma metrics. The sigma scale varies from 0 to 6, with "6" being the ideal goal, and is calculated using total allowable error (TEa), bias, and precision. However, there is no proper consensus for setting a TEa goal, and the influence of this limiting factor on routine laboratory practice and sigma calculation has not been adequately determined. The study evaluates the impact of the choice of TEa value on sigma score derivation and also describes a detailed structured approach (followed by the study laboratory) to determine the potential causes of errors underlying a poor sigma score. Materials and Methods The study was conducted at the clinical biochemistry laboratory of a central government tertiary care hospital. Internal and external quality control data were evaluated for a period of 5 months, from October 2019 to February 2020. Three drugs (carbamazepine, phenytoin, and valproate) were evaluated on the sigma scale using two different TEa values to determine any significant difference. Statistical Analysis Bias was calculated using the following formula: Bias% = (laboratory EQAS result − peer group mean) × 100 / peer group mean. The sigma metric was calculated using the standard equation: Sigma value = (TEa − bias) / coefficient of variation (CV)%. Results Impressive sigma scores (>3 sigma) for two of the three drugs were obtained with a TEa value of 25, while with a TEa value of 15 the sigma scores were distinctly dissimilar and warranted root cause analysis and corrective action plans for both valproate and carbamazepine.
Conclusions The current study demonstrates that distinctly different sigma values can be obtained depending on the TEa value selected, even when the same bias and precision values are used in the sigma equation. Laboratories should therefore choose appropriate TEa goals and make judicious use of the sigma metric as a quality improvement tool.
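The two equations in the abstract translate directly into code. A minimal sketch (function names are illustrative; all quantities are in percent):

```python
def bias_percent(lab_eqas_result, peer_group_mean):
    """Bias% = (laboratory EQAS result - peer group mean) * 100 / peer group mean."""
    return (lab_eqas_result - peer_group_mean) * 100.0 / peer_group_mean

def sigma_metric(tea, bias, cv):
    """Sigma value = (TEa - bias) / CV%, all expressed in percent."""
    return (tea - abs(bias)) / cv
```

Running the same bias and CV through both TEa choices reproduces the study's central observation: with bias 5% and CV 4%, TEa = 25 yields sigma 5.0 (acceptable), while TEa = 15 yields sigma 2.5 (warranting corrective action).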


Author(s):  
Erin Polka ◽  
Ellen Childs ◽  
Alexa Friedman ◽  
Kathryn S. Tomsho ◽  
Birgit Claus Henn ◽  
...  

Sharing individualized results with health study participants, a practice we and others refer to as “report-back,” ensures participant access to exposure and health information and may promote health equity. However, the practice of report-back and the content shared is often limited by the time-intensive process of personalizing reports. Software tools that automate creation of individualized reports have been built for specific studies, but are largely not open-source or broadly modifiable. We created an open-source and generalizable tool, called the Macro for the Compilation of Report-backs (MCR), to automate compilation of health study reports. We piloted MCR in two environmental exposure studies in Massachusetts, USA, and interviewed research team members (n = 7) about the impact of MCR on the report-back process. Researchers using MCR created more detailed reports than during manual report-back, including more individualized numerical, text, and graphical results. Using MCR, researchers saved time producing draft and final reports. Researchers also reported feeling more creative in the design process and more confident in report-back quality control. While MCR does not expedite the entire report-back process, we hope that this open-source tool reduces the barriers to personalizing health study reports, promotes more equitable access to individualized data, and advances self-determination among participants.


2012 ◽  
Vol 23 (12) ◽  
pp. 1455-1460 ◽  
Author(s):  
Lisa Legault ◽  
Timour Al-Khindi ◽  
Michael Inzlicht

Self-affirmation produces large effects: Even a simple reminder of one’s core values reduces defensiveness against threatening information. But how, exactly, does self-affirmation work? We explored this question by examining the impact of self-affirmation on neurophysiological responses to threatening events. We hypothesized that because self-affirmation increases openness to threat and enhances approachability of unfavorable feedback, it should augment attention and emotional receptivity to performance errors. We further hypothesized that this augmentation could be assessed directly, at the level of the brain. We measured self-affirmed and nonaffirmed participants’ electrophysiological responses to making errors on a task. As we anticipated, self-affirmation elicited greater error responsiveness than did nonaffirmation, as indexed by the error-related negativity, a neural signal of error monitoring. Self-affirmed participants also performed better on the task than did nonaffirmed participants. We offer novel brain evidence that self-affirmation increases openness to threat and discuss the role of error detection in the link between self-affirmation and performance.


2022 ◽  
Vol 62 ◽  
pp. 270-285
Author(s):  
Tiago Coito ◽  
Miguel S.E. Martins ◽  
Bernardo Firme ◽  
João Figueiredo ◽  
Susana M. Vieira ◽  
...  

Author(s):  
Thomas Gerald O’Daniel

Abstract Background In certain patients there is an imbalance between the volume of the anterior neck and the mandibular confines that requires reductional sculpting and repositioning of the hyoid to optimize neck lifting procedures. Objectives A quantitative volumetric analysis of the management of supraplatysmal and subplatysmal structures of the neck, comparing surgical specimens, was performed to determine the impact of reduction on cervical contouring. Methods In 152 patients undergoing deep cervicoplasty, the frequency of each surgical maneuver and the amount of supraplatysmal and subplatysmal volume removed were measured in cubic centimeters using a volume displacement technique. Results The mean total volume removed from the supraplatysmal and subplatysmal planes during deep cervicoplasty was 22.3 cm3, with subplatysmal volume representing 73%. Subplatysmal volume was reduced in all 152 patients. Deep fat was reduced in 96% of patients with a mean volume of 7 cm3, submandibular glands in 76% with a mean volume of 6.5 cm3, anterior digastric muscles in 70% with a mean volume of 2 cm3, peri-hyoid fascia in 32% with a mean volume of <1 cm3, and mylohyoid reduction in 14% with a mean volume of <1 cm3. The anterior digastric muscles were plicated to reposition the hyoid in 34% of cases. Supraplatysmal fat reduction averaged 6.3 cm3 in 40% of patients. Conclusions The study provides a comprehensive analysis of the impact of volume modification of the central neck during deep cervicoplasty. This objective evaluation of neck volume may help guide clinicians in the surgical planning process and provide a foundation for optimizing cervicofacial rejuvenation techniques.


2021 ◽  
Author(s):  
Kuo Chen

Nowadays, Vietnamese students choose to study abroad in Asian countries, with Taiwan being one of the most appealing locations so far. The purpose of this research is to explain the planning process used by Vietnamese students to study abroad (the host country is Taiwan), as well as to suggest an appropriate model for students' decision-making once the desire to study abroad is established, in which the impact of career path on school selection is clarified and the importance of motivation to study abroad is emphasized. This research used a mixed-methods approach. In-depth interviews with 30 Vietnamese students studying in Taiwan are conducted using a qualitative methodology. The data gathered during those interviews is utilized to build questionnaires that will be sent to over 300 samples for quantitative study. The research findings demonstrate the primary elements influencing students' desire to study abroad, career planning, and decision-making in Taiwan, as well as the model of students' decision-making process. It is obvious that students' desire to study abroad has a direct effect on their career-planning factor, while this factor acts as a mediator between the aforementioned motivation and the students' decision-making factor.

