A Working Model to Inform Risk-Based Back to Work Strategies

Author(s):  
Kristen Meier ◽  
Kirsten J. Curnow ◽  
Darcy Vavrek ◽  
John Moon ◽  
Kyle Farh ◽  
...  

ABSTRACT
Background: The coronavirus disease 2019 (COVID-19) pandemic has forced many businesses to close or move to remote work to reduce the potential spread of disease. Employers desiring a return to onsite work want to understand their risk for having an infected employee on site and how best to mitigate this risk. Here, we modeled a range of key metrics to help inform return-to-work policies and procedures, including evaluating the benefit and optimal design of a SARS-CoV-2 employee screening program.
Methods: We modeled a range of input variables including prevalence of COVID-19, time infected, number of employees, test sensitivity and specificity, test turnaround time, number of times tested within the infectious period, and sample pooling. We modeled the impact of these input variables on several output variables: number of healthy employees; number of infected employees; number of test-positive and test-negative employees; number of true positive, false positive, true negative, and false negative employees; positive and negative predictive values; and time an infected, potentially contagious employee is on site.
Results: We show that an employee screening program can reduce the risk for onsite transmission across different prevalence values and group sizes. For example, at a pre-test asymptomatic community prevalence of 0.5% (5 in 1000) with an employee group size of 500, the risk for at least one infected employee on site is 91.8%, with 3 asymptomatic infected employees predicted within those 500 employees. Implementing a SARS-CoV-2 baseline screen with an 80% sensitivity and 99.5% specificity would reduce the risk of at least one infected employee on site to 39.4% and the predicted number of infected employees on site (false negatives) to 1. Repetitive testing is required for ongoing vigilance of onsite employees. The expected number of days an infected employee is on site depends on test sensitivity, testing interval, and turnaround time. If the test interval is longer than the infectious period (∼14 days for COVID-19), testing will not detect the infected employee. Sample pooling reduces the number of tests performed, thereby reducing testing costs. However, the pooling methodology (eg, 1-stage vs 2-stage pooling, pool size) will impact the number of employees that screen positive, thereby affecting the number of employees eligible to return to onsite work.
Conclusions: The modeling presented here can be used to help employers understand their risk for having an infected employee on site. Further, it details how an employee screening program can reduce this risk and shows how screening performance and frequency impact the effectiveness of a screening program. The primary factors determining the effectiveness of a screening program are test sensitivity and frequency of testing.
Disclaimer: This publication is offered to businesses/employers as a model of potential risk arising from COVID-19 in the workplace. While believed to be based on reliable data, the model described herein has not been prospectively validated and should not be relied upon for any purpose other than as an aid to understanding the potential impacts of a number of variables on the risk of having COVID-19-positive employees on a worksite. Decisions related to workplace safety, COVID-19-related workplace testing, and associated programs and procedures should be based upon your actual data and applicable laws and public health orders.
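The headline risk figures follow from simple binomial arithmetic. A minimal Python sketch (not the authors' model; it assumes infection status is independent across employees) reproduces the numbers quoted above:

```python
def risk_at_least_one_infected(prevalence: float, n_employees: int) -> float:
    """P(at least one infected among n employees), assuming independence."""
    return 1.0 - (1.0 - prevalence) ** n_employees

def screened_risk(prevalence: float, n_employees: int, sensitivity: float) -> float:
    """After a baseline screen, only false negatives remain on site,
    so the effective prevalence is prevalence * (1 - sensitivity)."""
    return risk_at_least_one_infected(prevalence * (1.0 - sensitivity), n_employees)

p, n, sens = 0.005, 500, 0.80
print(f"Unscreened risk: {risk_at_least_one_infected(p, n):.1%}")  # ~91.8%
print(f"Screened risk:   {screened_risk(p, n, sens):.1%}")         # ~39.4%
print(f"Expected infected on site: {p * n:.1f} -> {p * (1 - sens) * n:.1f}")  # 2.5 -> 0.5
```

The expected counts of 2.5 and 0.5 round to the 3 and 1 infected employees quoted in the abstract.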

Author(s):  
Emma L. Davis ◽  
Tim C. D. Lucas ◽  
Anna Borlase ◽  
Timothy M Pollington ◽  
Sam Abbott ◽  
...  

Abstract
Background: Following a consistent decline in COVID-19-related deaths in the UK throughout May 2020, it is recognised that contact tracing will be vital to relaxing physical distancing measures. The increasingly evident role of asymptomatic and pre-symptomatic transmission means testing is central to control, but test sensitivity estimates are as low as 65%.
Methods: We extend an existing UK-focused branching process model for contact tracing, adding diagnostic testing and refining parameter estimates to demonstrate the impact of poor test sensitivity and suggest mitigation methods. We also investigate the role of super-spreading events, providing estimates of the relationship between infections, cases detected and hospitalisations, and consider how tracing coverage and speed affects outbreak risk.
Findings: Incorporating poor-sensitivity testing into tracing protocols could reduce efficacy, due to false negative results impacting isolation duration. However, a 7-day isolation period for all negative-testing individuals could mitigate this effect. Similarly, reducing delays to testing following exposure has a negligible impact on the risk of future outbreaks, but could undermine control if negative-testing individuals immediately cease isolating. Even 100% tracing of contacts will miss cases, which could prompt large localised outbreaks if physical distancing measures are relaxed prematurely.
Interpretation: It is imperative that test results are interpreted with caution due to high false-negative rates and that contact tracing is used in combination with physical distancing measures. If the risks associated with imperfect test sensitivity are mitigated, we find that contact tracing can facilitate control when the reproduction number with physical distancing, R_S, is less than 1.5.
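As a rough illustration of the trade-off in the findings, a back-of-envelope Python sketch (my simplification, not the paper's stochastic branching process) scales the distanced reproduction number by the share of onward transmission that tracing plus isolation removes. The `preventable_fraction` parameter is an assumption standing in for pre-symptomatic spread that occurs before contacts can be reached:

```python
def effective_R(R_s: float, trace_coverage: float, sensitivity: float,
                isolate_test_negatives: bool,
                preventable_fraction: float = 0.6) -> float:
    """Onward transmission remaining after tracing. A traced contact is
    removed if they test positive, or regardless of result when test
    negatives are also held in a 7-day isolation (treated here as fully
    effective, an optimistic simplification)."""
    removed_if_traced = 1.0 if isolate_test_negatives else sensitivity
    p_removed = trace_coverage * preventable_fraction * removed_if_traced
    return R_s * (1.0 - p_removed)

for coverage in (0.6, 0.8, 1.0):
    print(f"coverage {coverage:.0%}: "
          f"R = {effective_R(1.5, coverage, 0.65, False):.2f} if negatives release, "
          f"{effective_R(1.5, coverage, 0.65, True):.2f} if negatives isolate")
```

At R_S = 1.5 and 65% test sensitivity, releasing test-negative contacts keeps the effective reproduction number above 1 in this toy calculation, while a blanket isolation of negatives pulls it below 1, mirroring the paper's qualitative conclusion.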


Author(s):  
Ron M Kagan ◽  
Amy A Rogers ◽  
Gwynngelle A Borillo ◽  
Nigel J Clarke ◽  
Elizabeth M Marlowe

Abstract
Background: The use of a remote specimen collection strategy employing a kit designed for unobserved self-collection for SARS-CoV-2 RT-PCR can decrease the use of PPE and exposure risk. To assess the impact of unobserved specimen self-collection on test performance, we examined results from a SARS-CoV-2 qualitative RT-PCR test for self-collected specimens from participants in a return-to-work screening program and assessed the impact of a pooled testing strategy in this cohort.
Methods: Self-collected anterior nasal swabs from employee return-to-work programs were tested using the Quest Diagnostics SARS-CoV-2 RT-PCR EUA. The Ct values for the N1 and N3 N-gene targets and a human RNase P (RP) gene control target were tabulated. For comparison, we utilized Ct values from a cohort of HCP-collected specimens from patients with and without COVID-19 symptoms.
Results: Among 47,923 participants, 1.8% were positive. RP failed to amplify for 13/115,435 (0.011%) specimens. The median (IQR) Cts were 32.7 (25.0-35.7) for N1 and 31.3 (23.8-34.2) for N3. Median Ct values in the self-collected cohort were significantly higher than those of symptomatic, but not asymptomatic, patients. Based on Ct values, pooled testing with 4 specimens would have yielded inconclusive results in 67/1,268 (5.2%) specimens but only a single false-negative result.
Conclusions: Unobserved self-collection of nasal swabs provides adequate sampling for SARS-CoV-2 RT-PCR testing. These findings alleviate concerns of increased false negatives in this context. Specimen pooling could be used for this population, as the likelihood of false-negative results is very low when using a sensitive, dual-target methodology.
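The pooling result can be sanity-checked with the standard dilution argument: splitting one positive specimen across a pool of k raises its Ct by about log2(k) cycles under ideal PCR efficiency, so a pool of 4 adds roughly 2 cycles and pushes only the weakest positives toward the assay cutoff. A small sketch (the cutoff of 40 cycles is an assumed round number, not the assay's documented value):

```python
from math import log2

def pooled_ct(individual_ct: float, pool_size: int) -> float:
    """Expected Ct shift from diluting one positive into a pool,
    assuming perfect doubling per PCR cycle."""
    return individual_ct + log2(pool_size)

CUTOFF = 40.0  # hypothetical detection cutoff
for ct in (25.0, 32.7, 38.5):
    shifted = pooled_ct(ct, pool_size=4)
    status = "detected" if shifted < CUTOFF else "missed"
    print(f"Ct {ct:.1f} -> pooled Ct {shifted:.1f} ({status})")
```

Only specimens already near the cutoff (here, Ct 38.5) risk being missed, consistent with the single false negative observed.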


2014 ◽  
Vol 24 (2) ◽  
pp. 238-246 ◽  
Author(s):  
Enora Laas ◽  
Mathieu Luyckx ◽  
Marjolein De Cuypere ◽  
Frederic Selle ◽  
Emile Daraï ◽  
...  

Objective: Complete tumor cytoreduction seems to be beneficial for patients with recurrent epithelial ovarian cancer (REOC). The challenge is to identify patients eligible for such surgery. Several scores based on simple clinical parameters have attempted to predict resectability and help in patient selection for surgery in REOC. The aims of this study were to assess the performance of these models in an independent population and to evaluate the impact of complete resection.
Materials and Methods: A total of 194 patients with REOC between January 2000 and December 2010 were included in 2 French centers. Two scores were used: the AGO DESKTOP OVAR trial score and a score from Tian et al. The performance (sensitivity, specificity, and predictive values) of these scores was evaluated in our population. Survival curves were constructed to evaluate the survival impact of surgery on recurrence.
Results: Positive predictive values for complete resection were 80.6% and 74.0% for the DESKTOP trial score and the Tian score, respectively. The false-negative rate was high for both models (65.4% and 71.4%, respectively). We found a significantly higher survival in the patients with complete resection (59.4 vs 17.9 months, P < 0.01) even after adjustment for the confounding variables (hazard ratio [HR], 2.53; 95% confidence interval, 1.01-6.3; P = 0.04).
Conclusions: In REOC, surgery seems to have a positive impact on survival if complete surgery can be achieved. However, factors predicting complete resection are not yet clearly defined. Recurrence-free interval and initial resection seem to be the most relevant factors. Laparoscopic evaluation could help to clarify the indications for surgery.
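For readers checking the reported score performance, the metrics reduce to standard 2x2-table arithmetic. A minimal helper (mine; the cell counts below are illustrative values chosen only so that PPV and false-negative rate match the reported 80.6% and 65.4% within the 194-patient cohort, since the abstract does not give the raw table):

```python
def score_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Diagnostic metrics for a resectability score vs surgical outcome."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),  # e.g. 80.6% reported for the DESKTOP score
        "fnr": fn / (fn + tp),  # the high false-negative rate reported
    }

# Illustrative counts only (54 + 13 + 102 + 25 = 194 patients)
print(score_metrics(tp=54, fp=13, fn=102, tn=25))
```

The high false-negative rate means many patients who were in fact completely resectable would have been screened out by the scores, which is the study's central caution.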


1996 ◽  
Vol 15 (1) ◽  
pp. 1-44 ◽  
Author(s):  
Mildred S. Christian ◽  
Robert M. Diener

An extensive computer search was conducted, and a comprehensive overview of the current status of alternatives to animal eye irritation tests was obtained. A search of Medline and Toxline databases (1988 to present) was supplemented with references from sources regarding in vitro eye irritation. Particular attention was paid to soap and detergent products and related ingredients. Eighty-five references are included in the review; the in vitro assays are categorized, and their predictive values for assessing acute ocular irritation are evaluated and compared with the Draize rabbit eye irritation assay and with each other. The present review shows that the increased activity of scientists from academia, industry, and regulatory agencies has resulted in substantial progress in developing alternative in vitro procedures and that a number of large, interlaboratory evaluations and international workshops have assisted in the selection process. However, none of these methodologies has obtained acceptance for regulatory classification purposes. Conclusions drawn from this review include that (a) no single in vitro assay is considered capable of replacing the Draize eye irritation test; (b) the chorioallantoic membrane vascular assay (CAMVA) or the hen's egg test-chorioallantoic membrane test (HET-CAM), the chicken or bovine enucleated eye test, the neutral red and plasminogen activation assays for cytotoxicity, and the silicon microphysiometer appear to have the greatest potential as screening tools for eye irritation; and (c) choosing a specific assay or series of assays will depend on the type of agent tested and the impact of false-negative or false-positive results. New assays will continue to be developed and should be included in future evaluations, when sufficient data are available.


PLoS ONE ◽  
2021 ◽  
Vol 16 (3) ◽  
pp. e0248783 ◽  
Author(s):  
Gregory D. Lyng ◽  
Natalie E. Sheils ◽  
Caleb J. Kennedy ◽  
Daniel O. Griffin ◽  
Ethan M. Berke

Background: COVID-19 test sensitivity and specificity have been widely examined and discussed, yet optimal use of these tests will depend on the goals of testing, the population or setting, and the anticipated underlying disease prevalence. We model various combinations of key variables to identify and compare a range of effective and practical surveillance strategies for schools and businesses.
Methods: We coupled a simulated data set incorporating actual community prevalence and test performance characteristics to a susceptible, infectious, removed (SIR) compartmental model, modeling the impact of base and tunable variables including test sensitivity, testing frequency, results lag, sample pooling, disease prevalence, externally-acquired infections, symptom checking, and test cost on outcomes including case reduction and false positives.
Findings: Increasing testing frequency was associated with a non-linear positive effect on cases averted over 100 days. While precise reductions in cumulative number of infections depended on community disease prevalence, testing every 3 days versus every 14 days (even with a lower-sensitivity test) reduces the disease burden substantially. Pooling provided cost savings and made a high-frequency approach practical; one high-performing strategy, testing every 3 days, yielded per-person per-day costs as low as $1.32.
Interpretation: A range of practically viable testing strategies emerged for schools and businesses. Key characteristics of these strategies include high-frequency testing with a moderate or high sensitivity test and minimal results delay. Sample pooling allowed for operational efficiency and cost savings with minimal loss of model performance.
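A compact sketch (my simplification under illustrative parameters, not the authors' calibrated model) shows the qualitative frequency effect in an SIR framework: each testing cycle removes a sensitivity-weighted share of the infectious pool once results return.

```python
def cumulative_infections(days: int = 100, beta: float = 0.25, gamma: float = 0.1,
                          prev0: float = 0.005, test_every: int = 0,
                          sensitivity: float = 0.8, results_lag: int = 1) -> float:
    """Discrete-time SIR with periodic screening; all parameters illustrative."""
    s, i, cum = 1.0 - prev0, prev0, prev0
    for day in range(days):
        new_inf = beta * s * i
        s, i, cum = s - new_inf, i + new_inf - gamma * i, cum + new_inf
        if test_every and day % test_every == results_lag:
            i *= 1.0 - sensitivity  # positives isolated once results return
    return cum

for interval in (0, 14, 7, 3):  # 0 = no testing
    print(f"test every {interval or '-'} days: "
          f"{cumulative_infections(test_every=interval):.1%} ever infected")
```

Even this toy model reproduces the non-linear pattern: moving from 14-day to 3-day testing collapses the outbreak rather than merely trimming it.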


2012 ◽  
Vol 30 (15_suppl) ◽  
pp. 543-543 ◽  
Author(s):  
Frederique Madeleine Penault-Llorca ◽  
Aicha Goubar ◽  
Ines Raoelfils ◽  
Christine Sagan ◽  
Magali Lacroix-Triki ◽  
...  

543 Background: Several reports suggest an inter-observer variability in Ki67 assessment. Nevertheless, there is no large study that evaluates the rate of discrepancies, together with their impact. Methods: Ki67 expression was assessed on 663 samples from patients with ER-positive breast cancers included in the PACS01 trial (Roche, J Clin Oncol, 2006). Ki67 staining was done using the MiB1 antibody (Dako, Copenhagen, Denmark; dilution 1:250). Prognostic and predictive values have been reported previously (Penault-Llorca, J Clin Oncol, 2009). A second central review was done by a senior breast pathologist from a French cancer center. A discrepancy was defined as either a false positive or false negative result. The cut-off for positivity was set at 15% according to data from Cheang et al (JNCI, 2009). Results: The rate of discrepancy was correlated with the percentage of stained tumor cells. A 10% discrepancy rate between the 2 pathologists was observed when the first pathologist reported <10% tumor cells stained. The same low rate of discrepancy (10%) was observed if more than 30% of cancer cells were stained according to the first assessment. By contrast, discrepancy rates were 47%, 45%, 22%, and 34% when the first pathologist reported 10-15%, 15-20%, 20-25%, and 25-30% tumor cells stained, respectively. Overall, 36% of the patients presented a grade II tumor together with Ki67 <10% or >30%. We then evaluated the impact of discrepancy in terms of prognosis. Patients presenting a concordant result between the two pathologists showed a better outcome compared with patients presenting a discrepancy, independently of the percentage of tumor cells stained (p=0.05, patients with concordant Ki67 between 10% and 30% versus those with one reader <10% and the other >10% and <30%). Conclusions: Discrepancy rates between pathologists are acceptable when Ki67 is either <10% or >30%. A Ki67 between 10% and 30% could define a grey zone in which Ki67 should be reported with caution or be double-checked by another pathologist. Survival analysis suggested that inter-observer discrepancies could act as a prognostic factor. Such a finding could reflect underlying intra-tumor heterogeneity.
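A small sketch (function name and binning are mine) of the style of two-reader analysis described: bin cases by the first pathologist's Ki67 value and count how often the second reader lands on the other side of the 15% positivity cutoff.

```python
def discrepancy_by_bin(pairs, cutoff=0.15,
                       bins=((0, .10), (.10, .15), (.15, .20),
                             (.20, .25), (.25, .30), (.30, 1.0))):
    """pairs: [(reader1_ki67, reader2_ki67), ...] as fractions.
    Returns the discrepancy rate (disagreement on the cutoff) per bin
    of the first reader's value."""
    rates = {}
    for lo, hi in bins:
        in_bin = [(a, b) for a, b in pairs if lo <= a < hi]
        if in_bin:
            disagree = sum((a >= cutoff) != (b >= cutoff) for a, b in in_bin)
            rates[f"{lo:.0%}-{hi:.0%}"] = disagree / len(in_bin)
    return rates

# Toy input; the study applied this to 663 PACS01 samples read by 2 pathologists
print(discrepancy_by_bin([(0.12, 0.18), (0.05, 0.08), (0.35, 0.40), (0.14, 0.16)]))
```

Values near the cutoff will, by construction, disagree most often, which is exactly the grey-zone pattern the study reports.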


2020 ◽  
Vol 58 (9) ◽  
Author(s):  
M. Jana Broadhurst ◽  
Shefali Dujari ◽  
Indre Budvytiene ◽  
Benjamin A. Pinsky ◽  
Carl A. Gold ◽  
...  

ABSTRACT The impact of diagnostic stewardship and testing algorithms on the utilization and performance of the FilmArray meningitis/encephalitis (ME) panel has received limited investigation. We performed a retrospective single-center cohort study assessing all individuals with suspected ME between February 2017 and April 2019 for whom the ME panel was ordered. Testing was restricted to patients with cerebrospinal fluid (CSF) pleocytosis. Positive ME panel results were confirmed before reporting through correlation with direct staining (Gram and calcofluor white) and CSF cryptococcal antigen or by repeat ME panel testing. Outcomes included the ME panel test utilization rate, negative predictive value of nonpleocytic CSF samples, test yield and false-positivity rate, and time to appropriate deescalation of acyclovir. Restricting testing to pleocytic CSF samples reduced ME panel utilization by 42.7% (263 versus 459 tests performed) and increased the test yield by 61.8% (18.6% versus 11.5% positivity rate; P < 0.01). The negative predictive values of a normal CSF white blood cell (WBC) count for ME panel targets were 100% (195/195) for nonviral targets and 98.0% (192/196) overall. All pathogens detected in nonpleocytic CSF samples were herpesviruses. The application of a selective testing algorithm based on repeat testing of nonviral targets avoided 75% (3/4) of false-positive results without generating false-negative results. The introduction of the ME panel reduced the duration of acyclovir treatment from an average of 66 h (standard deviation [SD], 43 h) to 46 h (SD, 36 h) (P = 0.03). The implementation of the ME panel with restriction criteria and a selective testing algorithm for nonviral targets optimizes its utilization, yield, and accuracy.
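Read as pseudocode, the stewardship workflow amounts to a gate on pleocytosis plus confirmation of nonviral targets before reporting. A sketch of that logic (the function name, the 5 cells/uL threshold, and the exact viral target set are my assumptions, not the study's published protocol):

```python
from typing import Optional

VIRAL_TARGETS = {"HSV-1", "HSV-2", "VZV", "HHV-6"}  # assumed herpesvirus set

def me_panel_workflow(csf_wbc_per_ul: int, panel_target: Optional[str],
                      confirmed: bool) -> str:
    """Gate ME panel testing on CSF pleocytosis; hold nonviral positives
    until confirmed by staining, cryptococcal antigen, or repeat panel."""
    if csf_wbc_per_ul <= 5:  # assumed pleocytosis threshold
        return "rejected: no CSF pleocytosis"
    if panel_target is None:
        return "negative"
    if panel_target in VIRAL_TARGETS:
        return f"report {panel_target}"
    return f"report {panel_target}" if confirmed else "hold: presumptive false positive"

print(me_panel_workflow(2, "HSV-1", False))           # rejected (no pleocytosis)
print(me_panel_workflow(120, "Cryptococcus", False))  # held pending confirmation
```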


Author(s):  
MCJ Bootsma ◽  
ME Kretzschmar ◽  
G Rozhnova ◽  
JAP Heesterbeek ◽  
JAJW Kluytmans ◽  
...  

Abstract
Background: To limit societal and economic costs of lockdown measures, public health strategies are needed that control the spread of SARS-CoV-2 and simultaneously allow lifting of disruptive measures. Regular universal random screening of large proportions of the population regardless of symptoms has been proposed as a possible control strategy.
Methods: We developed a mathematical model that includes test sensitivity depending on infectiousness for PCR-based and antigen-based tests, and different levels of onward transmission for testing and non-testing parts of the population. Only testing individuals participate in high-risk transmission events, allowing more transmission in case of unnoticed infection. We calculated the required testing interval and coverage to bring the effective reproduction number due to universal random testing (R_rt) below 1, for different scenarios of risk behavior of testing and non-testing individuals.
Findings: With R_0 = 2.5, lifting all control measures for tested subjects with negative test results would require 100% of the population being tested every three days with a rapid test method with similar sensitivity to PCR-based tests. With remaining measures in place reflecting R_e = 1.3, 80% of the population would need to be tested once a week to bring R_rt below 1. With lower proportions tested and with lower test sensitivity, testing frequency would need to increase further to bring R_rt below 1. With similar R_e values for tested and non-tested subjects, and with tested subjects not allowed to engage in higher-risk events, at least 80% of the population needs to be tested every five days to bring R_rt below 1. Test sensitivity has far less impact on the reproduction number than the frequency of testing.
Interpretation: Regular universal random screening followed by isolation of infectious individuals is not a viable strategy to reopen society after controlling a pandemic wave of SARS-CoV-2. More targeted screening approaches are needed to better use rapid testing such that it can effectively complement other control measures.
Funding: RECOVER (H2020-101003589) (MJMB), ZonMw project 10430022010001 (MK, HH), FCT project 131_596787873 (GR), ZonMw project 91216062 (MK).
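A back-of-envelope sketch (my simplification, not the authors' model) of the coverage/frequency trade-off: if people are screened every tau days, a detected infection has on average already spent tau/2 days transmitting, so screening removes roughly a coverage x sensitivity x (1 - tau/(2D)) share of transmission for an infectious duration D.

```python
def R_random_testing(Re: float, coverage: float, sensitivity: float,
                     tau_days: float, infectious_days: float = 10.0) -> float:
    """Crude effective reproduction number under universal random screening.
    The (1 - tau/(2D)) factor is my mean-detection-delay approximation."""
    averted = coverage * sensitivity * max(0.0, 1.0 - tau_days / (2 * infectious_days))
    return Re * (1.0 - averted)

for coverage, tau in ((1.0, 3), (0.8, 7), (0.3, 7)):
    print(f"coverage {coverage:.0%}, every {tau} days: "
          f"R_rt = {R_random_testing(1.3, coverage, 0.9, tau):.2f}")
```

Consistent with the findings, 80% coverage at weekly testing keeps this crude R_rt below 1 when R_e = 1.3, while low coverage does not, and varying sensitivity shifts the result far less than varying tau.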


Author(s):  
Gregory D. Lyng ◽  
Natalie E. Sheils ◽  
Caleb J. Kennedy ◽  
Daniel Griffin ◽  
Ethan M. Berke

ABSTRACT
Background: COVID-19 test sensitivity and specificity have been widely examined and discussed, yet optimal use of these tests will depend on the goals of testing, the population or setting, and the anticipated underlying disease prevalence. We model various combinations of key variables to identify and compare a range of effective and practical surveillance strategies for schools and businesses.
Methods: We coupled a simulated data set incorporating actual community prevalence and test performance characteristics to a susceptible, infectious, removed (SIR) compartmental model, modeling the impact of base and tunable variables including test sensitivity, testing frequency, results lag, sample pooling, disease prevalence, externally-acquired infections, and test cost on outcomes including case reduction.
Results: Increasing testing frequency was associated with a non-linear positive effect on cases averted over 100 days. While precise reductions in cumulative number of infections depended on community disease prevalence, testing every 3 days versus every 14 days (even with a lower-sensitivity test) reduces the disease burden substantially. Pooling provided cost savings and made a high-frequency approach practical; one high-performing strategy, testing every 3 days, yielded per-person per-day costs as low as $1.32.
Conclusions: A range of practically viable testing strategies emerged for schools and businesses. Key characteristics of these strategies include high-frequency testing with a moderate or high sensitivity test and minimal results delay. Sample pooling allowed for operational efficiency and cost savings with minimal loss of model performance.


2010 ◽  
Vol 104 (08) ◽  
pp. 402-409 ◽  
Author(s):  
Michela Cini ◽  
Caterina Pili ◽  
Ottavio Boggian ◽  
Mirella Frascaro ◽  
Gualtiero Palareti ◽  
...  

Summary
Heparin-induced thrombocytopenia (HIT) is a life-threatening complication of heparin treatment; the prognosis depends on early and accurate diagnosis and a prompt start of alternative anticoagulants. Because of their high sensitivity, the commercially available immunologic assays are widely used, though they are not suited to single-sample runs and have a turnaround time of 2-3 hours. We evaluated two new, rapid, automated, semi-quantitative chemiluminescent immunoassays in patients with suspected HIT: HemosIL® AcuStar HIT-IgG(PF4-H) (specific for IgG anti-PF4/heparin antibodies) and HemosIL® AcuStar HIT-Ab(PF4-H) (detecting IgG, IgM, and IgA anti-PF4/heparin antibodies) (both from Instrumentation Laboratory). A total of 102 patients with suspected HIT were included; HIT was diagnosed in 17 (16.7%). No false-negative cases were observed using either the HemosIL AcuStar HIT-IgG(PF4-H) or the HIT-Ab(PF4-H) assay (sensitivity and negative predictive values = 100%; negative likelihood ratios <0.01). The specificity was higher for the HemosIL AcuStar HIT-IgG(PF4-H) than for the HemosIL AcuStar HIT-Ab(PF4-H) (96.5% vs. 81.2%). Higher values of the HemosIL AcuStar HIT-IgG(PF4-H) were associated with an increased probability of HIT. Patients with confirmed HIT and thrombotic complications had significantly higher levels of HemosIL AcuStar HIT-IgG(PF4-H) than those without thrombotic complications. The HemosIL AcuStar HIT-IgG(PF4-H) and HIT-Ab(PF4-H) assays showed a very high sensitivity, and therefore they can reliably be used to rule out HIT in suspected patients. The diagnostic specificity was greatly increased by using the HemosIL AcuStar HIT-IgG(PF4-H). Both assays are reproducible (CVs <6%), rapid (turnaround time 30 minutes), automated, and semi-quantitative, and they can be run for single-sample testing.
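The rule-out claim follows directly from the reported operating characteristics. A quick sketch (helper functions are mine) of the arithmetic: with zero false negatives among 17 confirmed cases, sensitivity is 100%, so the negative likelihood ratio (1 - sensitivity)/specificity and the negative predictive value at the study prevalence of 16.7% are as good as they can be.

```python
def negative_lr(sensitivity: float, specificity: float) -> float:
    """Negative likelihood ratio: factor by which a negative result lowers the odds."""
    return (1.0 - sensitivity) / specificity

def npv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Negative predictive value at a given pre-test prevalence."""
    tn = specificity * (1.0 - prevalence)
    fn = (1.0 - sensitivity) * prevalence
    return tn / (tn + fn)

# Reported: sensitivity 100%, specificity 96.5% for HIT-IgG(PF4-H), prevalence 17/102
print(negative_lr(1.0, 0.965))              # 0.0
print(round(npv(1.0, 0.965, 17 / 102), 3))  # 1.0
```

With a finite sample, a point estimate of 100% sensitivity still carries uncertainty, which is why the abstract reports the likelihood ratio as <0.01 rather than exactly zero.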

