Specimen Labeling Errors: A Q-Probes Analysis of 147 Clinical Laboratories

2008 ◽  
Vol 132 (10) ◽  
pp. 1617-1622 ◽  
Author(s):  
Elizabeth A. Wagar ◽  
Ana K. Stankovic ◽  
Stephen Raab ◽  
Raouf E. Nakhleh ◽  
Molly K. Walsh

Abstract Context.—Accurate specimen identification is critical for quality patient care. Improperly identified specimens can result in delayed diagnosis, additional laboratory testing, treatment of the wrong patient for the wrong disease, and severe transfusion reactions. Specimen identification errors have been reported to occur at rates of 0.1% to 5%. Objective.—To determine the frequency of labeling errors in a multi-institutional survey. Design.—Labeling errors were categorized as: (1) mislabeled, (2) unlabeled, (3) partially labeled, (4) incompletely labeled, and (5) illegible label. Blood specimens for routine or stat chemistry, hematology, and coagulation testing were included. Labeling error rates were calculated for each participant and tested for associations with institutional demographic and practice variable information. Results.—More than 3.3 million specimen labels were reviewed by 147 laboratories. Labeling errors were identified at a rate of 0.92 per 1000 labels. Two variables were statistically associated with lower labeling error rates: (1) laboratories with current, ongoing quality monitors for specimen identification (P = .008) and (2) institutions with 24/7 phlebotomy services for inpatients (P = .02). Most institutions had written policies for specimen labeling at the bedside or in outpatient phlebotomy areas (96% and 98%, respectively). Allowance of relabeling of blood specimens by primary collecting personnel was reported by 42% of institutions. Conclusions.—Laboratories actively engaged in ongoing specimen labeling quality monitors had fewer specimen labeling errors. Also, 24/7 phlebotomy services were associated with lower specimen error rates. Establishing quality metrics for specimen labeling and deploying 24/7 phlebotomy operations may contribute to improving the accuracy of specimen labeling for the clinical laboratory.
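As a concrete illustration of the per-participant metric described in the Design, the short Python sketch below computes labeling error rates per 1000 labels for a few hypothetical laboratories and an aggregate rate; the counts are invented for illustration and are not data from the study.

```python
# Minimal sketch of the per-participant error-rate metric described above.
# All counts are hypothetical illustrations, not data from the Q-Probes study.

def errors_per_1000_labels(error_count: int, labels_reviewed: int) -> float:
    """Labeling error rate expressed per 1000 labels reviewed."""
    return 1000.0 * error_count / labels_reviewed

# One record per participating laboratory: (labels reviewed, labeling errors found)
labs = {
    "lab_A": (25_000, 20),
    "lab_B": (40_000, 41),
    "lab_C": (12_500, 9),
}

# Per-participant rates (each laboratory gets its own quality metric)
for name, (labels, errors) in labs.items():
    print(f"{name}: {errors_per_1000_labels(errors, labels):.2f} errors per 1000 labels")

# Aggregate rate across all participants (the study reports 0.92 per 1000 overall)
total_labels = sum(l for l, _ in labs.values())
total_errors = sum(e for _, e in labs.values())
print(f"aggregate: {errors_per_1000_labels(total_errors, total_labels):.2f} per 1000 labels")
```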

2010 ◽  
Vol 134 (8) ◽  
pp. 1108-1115 ◽  
Author(s):  
Erin Grimm ◽  
Richard C. Friedberg ◽  
David S. Wilkinson ◽  
James P. AuBuchon ◽  
Rhona J. Souers ◽  
...  

Abstract Context.—Although a rare occurrence, ABO incompatible transfusions can cause patient morbidity and mortality. Up to 20% of all mistransfusions are traced to patient misidentification and/or sample mislabeling errors that occur before a sample arrives in the laboratory. Laboratories play a significant role in preventing mistransfusion by identifying wrong blood in tube and rejecting mislabeled samples. Objectives.—To determine the rates of mislabeled samples and wrong blood in tube for samples submitted for ABO typing and to survey patient identification and sample labeling practices and sample acceptance policies for ABO typing samples across a variety of US institutions. Design.—One hundred twenty-two institutions prospectively reviewed inpatient and outpatient samples submitted for ABO typing for 30 days. Labeling error rates were calculated for each participant and tested for associations with institutional demographic and practice variable information. Wrong-blood-in-tube rates were calculated for the 30-day period and for a retrospective 12-month period. A concurrent survey collected institution-specific sample labeling requirements and institutional policies regarding the fate of mislabeled samples. Results.—For all institutions combined, the aggregate mislabeled sample rate was 1.12%. The annual and 30-day wrong-blood-in-tube aggregate rates were both 0.04%. Patient first name, last name, and unique identification number were required on the sample by more than 90% of participating institutions; however, other requirements varied more widely. Conclusions.—The rates of mislabeled samples and wrong blood in tube for US participants in this study were comparable to those reported for most European countries. The survey of patient identification and sample labeling practices and sample acceptance policies for ABO typing samples revealed both practice uniformity and variability as well as significant opportunity for improvement.
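The survey above reports which label elements institutions most often require; the following Python sketch illustrates, under stated assumptions, what a simple sample-acceptance check against such required elements could look like. Field names, the registration-record comparison, and the rejection policy are hypothetical.

```python
# Hypothetical sketch of a sample-acceptance check for ABO typing specimens,
# based on the commonly required label elements reported above (patient first
# name, last name, unique identifier). Field names and the rejection policy are
# illustrative assumptions, not any surveyed institution's actual rules.

REQUIRED_FIELDS = ("first_name", "last_name", "unique_id")

def accept_sample(label: dict, registration_record: dict) -> bool:
    """Reject the sample if a required element is missing from the label or
    disagrees with the patient's registration record."""
    for field in REQUIRED_FIELDS:
        label_value = (label.get(field) or "").strip().lower()
        if not label_value:
            return False   # incompletely labeled -> reject, request recollection
        if label_value != (registration_record.get(field) or "").strip().lower():
            return False   # mismatch -> possible mislabel, reject
    return True

label = {"first_name": "Jane", "last_name": "Doe", "unique_id": "MRN0012345"}
record = {"first_name": "Jane", "last_name": "Doe", "unique_id": "MRN0012345"}
print(accept_sample(label, record))   # True -> acceptable for ABO typing
```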


2018 ◽  
Vol 13 (2) ◽  
pp. 343-347 ◽  
Author(s):  
Jeffrey A. Kraut ◽  
Nicolaos E. Madias

A reliable determination of blood pH, PCO2, and [HCO3−] is necessary for assessing the acid-base status of a patient. However, most acid-base disorders are first recognized through abnormalities in serum total CO2 concentration ([TCO2]) in venous blood, a surrogate for [HCO3−]. In screening patients on the basis of serum [TCO2], we have been concerned about the wide limits of normal for serum [TCO2] reported by many clinical laboratories, with ranges spanning 10–13 mEq/L. Indeed, we have encountered patients with serum [TCO2] values within the lower or upper end of the normal range of the reporting laboratory who subsequently were shown to have a cardinal acid-base disorder. Here, we present a patient who had a serum [TCO2] within the lower end of the normal range of the clinical laboratory, which resulted in delayed diagnosis of a clinically important “hidden” acid-base disorder. To better define the appropriate limits of normal for serum [TCO2], we derived the expected normal range in peripheral venous blood in adults at sea level from carefully conducted acid-base studies. We then compared this range, 23 to 30 mEq/L, to that reported by 64 clinical laboratories, 2 large commercial clinical laboratories, and the major textbook of clinical chemistry. For the most part, the range in the laboratories we queried was substantially different from the one we derived and the one published in the textbook, with some laboratories reporting lower limits as low as 18–20 mEq/L and upper limits as high as 33–35 mEq/L. We conclude that the limits of serum [TCO2] values reported by clinical laboratories are very often inordinately wide and not consistent with the range of normal expected in healthy individuals at sea level. We suggest that the limits of normal of serum [TCO2] at sea level be tightened to 23–30 mEq/L. Such a correction will ensure recognition of the majority of “hidden” acid-base disorders.
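The screening argument above can be made concrete with a small Python sketch: a hypothetical venous serum [TCO2] value is evaluated against both an example of a wide laboratory reference interval and the derived 23–30 mEq/L limits, showing how the wider limits can hide an acid-base disorder.

```python
# Sketch of the screening point made above: a venous serum [TCO2] that falls
# inside an overly wide laboratory reference interval can still lie outside
# the physiologically derived limits (23-30 mEq/L) and hide an acid-base disorder.
# The patient value below is hypothetical.

DERIVED_LIMITS = (23.0, 30.0)      # mEq/L, proposed normal range at sea level
WIDE_LAB_LIMITS = (18.0, 33.0)     # mEq/L, example of a wide reported range

def flag_tco2(tco2_meq_l: float, limits: tuple[float, float]) -> str:
    low, high = limits
    if tco2_meq_l < low:
        return "low (possible metabolic acidosis or respiratory alkalosis)"
    if tco2_meq_l > high:
        return "high (possible metabolic alkalosis or respiratory acidosis)"
    return "within limits"

value = 21.0  # hypothetical patient result, mEq/L
print("wide lab range:", flag_tco2(value, WIDE_LAB_LIMITS))   # within limits -> missed
print("derived range :", flag_tco2(value, DERIVED_LIMITS))    # low -> flagged
```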


2006 ◽  
Vol 130 (8) ◽  
pp. 1106-1113 ◽  
Author(s):  
Paul N. Valenstein ◽  
Stephen S. Raab ◽  
Molly K. Walsh

Abstract Context.—Misidentified laboratory specimens may cause patient injury, but their frequency in general laboratory practice is unknown. Objectives.—To determine (1) the frequency of identification errors detected before and after result verification, (2) the frequency of adverse patient events due to specimen misidentification, and (3) factors associated with lower error rates and better detection of errors. Design.—One hundred twenty clinical laboratories provided information about identification errors during 5 weeks. Results.—In aggregate, 85% of errors were detected before results were released; one quarter of laboratories identified more than 95% of errors before result verification. The overall rate of patient identification errors involving released results was 55 errors per 1,000,000 billable tests. A total of 345 adverse events were reported. Most of the adverse events caused material inconvenience to the patients but did not result in any permanent harm. On average, adverse events resulted from 1 of every 18 identification errors. Extrapolating the adverse event rate observed in this study to all United States hospital-based laboratories suggests that more than 160,000 adverse events per year result from misidentification of patients' laboratory specimens. Conclusions.—Identification errors are common in laboratory medicine, but most are detected before results are released, and only a fraction are associated with adverse patient events. Even taking into consideration the design of this study, which used imperfect case finding, institutions that did a better job of detecting errors within the laboratory released a smaller proportion of results involving specimen misidentification.
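To make the reported metrics concrete, the brief Python sketch below computes a released-error rate per million billable tests, the share of errors caught before result verification, and an errors-per-adverse-event ratio from hypothetical counts chosen only to echo the aggregate figures quoted above.

```python
# Illustration of the metrics quoted above, computed from hypothetical counts
# (not the study's data): released errors per million billable tests, the share of
# errors detected before result verification, and identification errors per adverse event.

billable_tests   = 2_000_000   # hypothetical test volume for one laboratory
errors_total     = 700         # all identification errors found (hypothetical)
errors_released  = 105         # errors discovered only after results were released
adverse_events   = 40          # reported adverse patient events (hypothetical)

released_rate = 1_000_000 * errors_released / billable_tests
detected_before_release = 1 - errors_released / errors_total
errors_per_adverse_event = errors_total / adverse_events

print(f"{released_rate:.0f} released identification errors per million billable tests")
print(f"{detected_before_release:.0%} of errors caught before result verification")
print(f"about 1 adverse event per {errors_per_adverse_event:.0f} identification errors")
```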


2021 ◽  
Vol 9 (5) ◽  
pp. 950
Author(s):  
Chiara Sodini ◽  
Elena Mariotti Zani ◽  
Francesco Pecora ◽  
Cristiano Conte ◽  
Viviana Dora Patianna ◽  
...  

In most cases, infection due to Bartonella henselae causes a mild disease presenting with a regional lymphadenopathy frequently associated with a low-grade fever, headache, poor appetite and exhaustion that resolves spontaneously within a few weeks. As the infection is generally transmitted by cats through scratching or biting, the disease is named cat scratch disease (CSD). However, in 5–20% of cases, mainly in immunocompromised patients, systemic involvement can occur and CSD may result in major illness. This report describes a case of systemic CSD diagnosed in an immunocompetent 4-year-old child that can be used as an example of the problems that pediatricians must solve to reach a diagnosis of atypical CSD. Despite the child’s lack of any history suggesting contact with cats and the absence of regional lymphadenopathy, the presence of a high fever, deterioration of the general condition, increased inflammatory biomarkers, hepatosplenic lesions (i.e., multiple abscesses), pericardial effusion with mild mitral valve regurgitation, a mild dilatation of the proximal and medial portion of the right coronary artery, and seroconversion for B. henselae (IgG 1:256) supported the diagnosis of atypical CSD. Oral azithromycin was initiated (10 mg/kg/day for 3 days), with progressive normalization of the clinical, laboratory, and ultrasound hepatosplenic and cardiac findings. This case shows that the diagnosis of atypical CSD is challenging. The nonspecific, composite and variable clinical features of this disease require careful evaluation in order to achieve a precise diagnosis and to avoid both a delayed diagnosis and delayed therapy, with the attendant risk of a negative evolution.


Biomedicines ◽  
2021 ◽  
Vol 9 (7) ◽  
pp. 844
Author(s):  
Armando Tripodi

Lupus anticoagulant (LA) is one of the three laboratory parameters (the others being antibodies to either cardiolipin or β2-glycoprotein I) that define the rare but potentially devastating condition known as antiphospholipid syndrome (APS). Testing for LA is a challenging task for the clinical laboratory because specific tests for its detection are not available. However, proper LA detection is paramount for patient management, as persistent positivity in the presence of (previous or current) thrombotic events makes patients candidates for long-term anticoagulation. Guidelines for LA detection have been established and updated over the last two decades. Implementation of these guidelines across laboratories and participation in external quality assessment schemes are required to help standardize diagnostic procedures and to support clinicians in the appropriate management of APS. This article aims to review the current state of the art and the challenges that clinical laboratories face in the detection of LA.


2020 ◽  
Vol 58 (9) ◽  
pp. 1489-1497 ◽  
Author(s):  
Lisa K. Peterson ◽  
Anne E. Tebo ◽  
Mark H. Wener ◽  
Susan S. Copple ◽  
Marvin J. Fritzler

Abstract Background: The indirect immunofluorescence assay (IFA) using HEp-2 cell substrates is the method preferred by some for detecting antinuclear antibodies (ANA), as it demonstrates a number of characteristic staining patterns that reflect the cellular components bound, as well as semi-quantitative results. Lack of harmonized nomenclature for HEp-2 IFA patterns, subjectivity in interpretation, and variability in the number of patterns reported by different laboratories pose significant harmonization challenges. The main objectives of this study were to assess current practice in laboratory assessment of HEp-2 IFA, identify gaps, and define strategies to improve reading, interpretation and reporting. Methods: We developed and administered a 24-item survey based on four domains: educational and professional background of participants, current practice of HEp-2 IFA testing and training, gap assessment, and the perceived value of the International Consensus on Antinuclear Antibody Patterns (ICAP) and other factors in HEp-2 IFA assessment. The Association of Medical Laboratory Immunologists (AMLI) and the American Society for Clinical Pathology administered the survey from April 1 to June 30, 2018, to members involved in ANA testing. This report summarizes the survey results and the discussion from a dry workshop held during the 2019 AMLI annual meeting. Results: One hundred and seventy-nine (n = 179) responses were obtained; respondents were clinical laboratory scientists (46%), laboratory directors (24%), supervisors (13%), or others (17%). A majority of respondents agreed on the need to standardize nomenclature and reporting of HEp-2 IFA results. About 55% were aware of the ICAP initiative; among those aware, a significant majority thought its guidance on HEp-2 IFA nomenclature and reporting was of value to clinical laboratories. To improve ICAP awareness and further enhance HEp-2 IFA assessment, increased collaboration between ICAP and the clinical laboratory community was suggested, with emphasis on education and the availability of reference materials. Conclusions: Based on these suggestions, future efforts to optimize HEp-2 IFA reading, interpretation and reporting would benefit from more hands-on training of laboratory personnel as well as continuous collaboration between professional organizations, in vitro diagnostic manufacturers, and clinical laboratories.


2008 ◽  
Vol 132 (2) ◽  
pp. 206-210
Author(s):  
Paul N. Valenstein ◽  
Molly K. Walsh ◽  
Ana K. Stankovic

Abstract Context.—Errors entering orders for send-out laboratory tests into computer systems waste health care resources and can delay patient evaluation and management. Objectives.—To determine (1) the accuracy of send-out test order entry under “real world” conditions and (2) whether any of several practices are associated with improved order accuracy. Design.—Representatives from 97 clinical laboratories provided information about the processes they use to send tests to reference facilities and their order entry and specimen routing error rates. Results.—In aggregate, 98% of send-out tests were correctly ordered and 99.4% of send-out tests were routed to the proper reference laboratory. There was wide variation among laboratories in the rate of send-out test order entry errors. In the bottom fourth of laboratories, more than 5% of send-out tests were ordered incorrectly, while in the top fourth of laboratories fewer than 0.3% of tests were ordered incorrectly. Order entry errors were less frequent when a miscellaneous test code was used than when a specific test code was used (3.9% vs 5.6%; P = .003). Conclusions.—Computer order entry errors for send-out tests occur approximately twice as frequently as order entry errors for other types of tests. Filing more specific test codes in a referring institution's information system is unlikely to reduce order entry errors and may make error rates worse.
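The quartile comparison above ("bottom fourth" versus "top fourth" of laboratories) can be illustrated with a short Python sketch that summarizes hypothetical per-laboratory send-out order entry error rates; the rates below are invented, since the abstract reports only summary figures.

```python
# Sketch of the quartile summary used above ("bottom fourth" vs "top fourth" of
# laboratories by send-out order entry error rate). The per-laboratory rates are
# hypothetical; the study's actual distribution is only summarized in the abstract.

import statistics

# Hypothetical per-laboratory error rates (fraction of send-out tests ordered incorrectly)
lab_error_rates = [0.001, 0.002, 0.003, 0.004, 0.008, 0.012,
                   0.020, 0.025, 0.035, 0.048, 0.055, 0.062]

q1, q2, q3 = statistics.quantiles(lab_error_rates, n=4)   # quartile cut points
print(f"median error rate: {q2:.1%}")
print(f"best-performing fourth of labs:  <= {q1:.1%}")
print(f"worst-performing fourth of labs: >= {q3:.1%}")
```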


2009 ◽  
Vol 133 (6) ◽  
pp. 942-949
Author(s):  
Paul N. Valenstein ◽  
Ana K. Stankovic ◽  
Rhona J. Souers ◽  
Frank Schneider ◽  
Elizabeth A. Wagar

Abstract Context.—A variety of document control practices are required of clinical laboratories by US regulation, laboratory accreditors, and standard-setting organizations. Objective.—To determine how faithfully document control is being implemented in practice and whether particular approaches to document control result in better levels of compliance. Design.—Contemporaneous, structured audit of 8814 documents used in 120 laboratories for conformance with 6 generally accepted document control requirements: available, authorized, current, reviewed by management, reviewed by staff, and archived. Results.—Of the 8814 documents, 3113 (35%) fulfilled all 6 document control requirements. The requirement fulfilled most frequently was availability of the document at all shifts and locations (8564 documents; 97%). Only 4407 (50%) of documents fulfilled Clinical Laboratory Improvement Amendment requirements for being properly archived after updating or discontinuation. Policies and procedures were more likely to fulfill document control requirements than forms and work aids. Documents tended to be better controlled in some laboratory sections (eg, transfusion service) than in others (eg, microbiology and client services). We could not identify document control practices significantly associated with higher compliance rates. Conclusions.—Most laboratories are not meeting regulatory and accreditation requirements related to control of documents. It is not clear whether control failures have any impact on the quality of laboratory results or patient outcomes.
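The audit design described above checks each document against six control requirements; the Python sketch below tallies per-requirement and overall compliance for a few hypothetical audited documents.

```python
# Sketch of the document-audit tally described above: each document is checked
# against the six control requirements, and compliance is reported per requirement
# and for all six combined. The audit results below are hypothetical.

REQUIREMENTS = ("available", "authorized", "current",
                "reviewed_by_management", "reviewed_by_staff", "archived")

# Each audited document maps requirement -> pass/fail (hypothetical examples)
audited_documents = [
    {r: True for r in REQUIREMENTS},
    {**{r: True for r in REQUIREMENTS}, "archived": False},
    {**{r: True for r in REQUIREMENTS}, "archived": False, "current": False},
]

n = len(audited_documents)
for req in REQUIREMENTS:
    passed = sum(doc[req] for doc in audited_documents)
    print(f"{req:>24}: {passed}/{n} ({passed / n:.0%})")

fully_compliant = sum(all(doc[req] for req in REQUIREMENTS) for doc in audited_documents)
print(f"all six requirements met: {fully_compliant}/{n} ({fully_compliant / n:.0%})")
```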


1976 ◽  
Vol 24 (1) ◽  
pp. 202-210 ◽  
Author(s):  
D A Cotter ◽  
B H Sage

As part of the installation procedure of the LARC leukocyte differential classifier in a clinical laboratory, a 100-slide protocol is carried out to establish the performance of the classifier in the laboratory. The make-up of this protocol and its relationship to key performance parameters for the leukocyte differential are described in detail. Data from the first ten of these protocols are presented, establishing (a) normal ranges, (b) reproducibility, (c) accuracy, (d) false-positive/false-negative rates for the detection of left shifts, and (e) false-positive/false-negative rates for the detection of bloods with abnormal cells.
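For items (d) and (e) of the protocol, the small Python sketch below shows one way false-positive and false-negative rates for a binary flag (such as "left shift present") could be computed against a reference determination; the classifier flags and reference calls are hypothetical.

```python
# Sketch of the false-positive / false-negative rates named in items (d) and (e)
# of the protocol. The classifier flags and the reference (manual) calls below
# are hypothetical; the LARC protocol itself used 100 slides per installation.

def fp_fn_rates(flags: list, reference: list) -> tuple:
    """Return (false-positive rate, false-negative rate) for a binary flag
    (e.g., 'left shift present') compared with a reference determination."""
    negatives = [f for f, r in zip(flags, reference) if not r]   # reference negative
    positives = [f for f, r in zip(flags, reference) if r]       # reference positive
    fp_rate = sum(negatives) / len(negatives) if negatives else 0.0
    fn_rate = sum(not f for f in positives) / len(positives) if positives else 0.0
    return fp_rate, fn_rate

classifier_flags = [True, False, False, True, False, True, False, False]
manual_reference = [True, False, False, False, False, True, True, False]
fp, fn = fp_fn_rates(classifier_flags, manual_reference)
print(f"false-positive rate: {fp:.0%}, false-negative rate: {fn:.0%}")
```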


1998 ◽  
Vol 36 (10) ◽  
pp. 2996-3001 ◽  
Author(s):  
Yao-Shen Chen ◽  
S. A. Marshall ◽  
P. L. Winokur ◽  
S. L. Coffman ◽  
W. W. Wilke ◽  
...  

Modified MicroScan gram-positive MIC no. 8 panels (PM-8) were analyzed for their improved ability to detect vancomycin resistance (VR) and high-level aminoglycoside resistance (HLAR) in enterococci. A validation study design that utilized selected challenge strains, recent clinical isolates, and reproducibility experiments in a multicenter format was selected. Three independent medical centers compared the commercial panels to reference broth microdilution panels (RBM) and Synergy Quad Agar (QA). Resistance was verified by demonstration of VR and HLAR genes by PCR tests. The study was conducted in three phases. (i) In the challenge phase (CP), two well-characterized sets of enterococci were obtained from the Centers for Disease Control and Prevention; one set contained 50 isolates for VR testing and one contained 48 isolates for HLAR testing. In addition, a set of 47 well-characterized isolates representing diverse geographic areas, obtained from earlier national surveillance studies, was tested at the University of Iowa College of Medicine (UICM). (ii) In the efficacy phase (EP), each laboratory tested 50 recent, unique clinical isolates by all methods. (iii) In the reproducibility phase (RP), each laboratory tested the same 10 strains by all methods in triplicate on three separate days. All isolates from the EP were sent to the UICM for molecular characterization of vanA, vanB, vanC1, vanC2–3, and HLAR genes. In the CP, the ranking of test methods by error rates (in parentheses; very major and major errors combined, versus PCR results) was as follows: for high-level streptomycin resistance (HLSR), QA (12.0%) > PM-8 (5.2%) > RBM (1.6%); for high-level gentamicin resistance (HLGR), RBM (3.7%) > PM-8 (3.1%) > QA (2.6%); and for VR, RBM = QA (3.0%) > PM-8 (1.2%). In the EP, agreement between all methods and the reference PCR result was 98.0% for HLSR, 99.3% for HLGR, and 98.6% for VR. In the RP, the percentages of results within ±1 log2 dilution of the all-participant mode were as follows: for VR, 100% (PM-8), 98.9% (QA), and 90.0% (RBM); for HLSR, 99.6% (RBM), 98.5% (PM-8), and 82.2% (QA); and for HLGR, 99.6% (RBM), 99.3% (PM-8), and 98.1% (QA). The ability of the PM-8 to detect VR and HLAR in enterococci was comparable to that of the reference susceptibility and molecular (PCR) methods and was considered acceptable for routine clinical laboratory use.
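The reproducibility metric quoted for the RP (percentage of results within ±1 log2 dilution of the all-participant mode) can be illustrated with the short Python sketch below, using hypothetical MIC values.

```python
# Sketch of the reproducibility metric quoted above: the percentage of MIC results
# falling within +/- 1 log2 (doubling) dilution of the all-participant mode.
# The MIC values below are hypothetical.

import math
from statistics import mode

mics_ug_ml = [4, 4, 8, 4, 2, 4, 8, 16, 4, 4]          # replicate MIC results (ug/mL)
log2_mics = [math.log2(m) for m in mics_ug_ml]
modal_log2 = math.log2(mode(mics_ug_ml))               # all-participant modal MIC, in log2

within_one_dilution = sum(abs(x - modal_log2) <= 1 for x in log2_mics)
print(f"{within_one_dilution / len(mics_ug_ml):.0%} of results within +/-1 dilution of the mode")
```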

