test sensitivity
Recently Published Documents


TOTAL DOCUMENTS: 366 (five years: 113)
H-INDEX: 35 (five years: 6)

2021 · Vol 26 (50)
Author(s): Shelly Bolotin, Vanessa Tran, Shelley L Deeks, Adriana Peci, Kevin A Brown, ...

Background: Serosurveys for SARS-CoV-2 aim to estimate the proportion of the population that has been infected.
Aim: This observational study assesses the seroprevalence of SARS-CoV-2 antibodies in Ontario, Canada during the first pandemic wave.
Methods: Using an orthogonal approach, we tested 8,902 residual specimens from the Public Health Ontario laboratory over three time periods during March–June 2020 and stratified results by age group, sex and region. We adjusted for antibody test sensitivity/specificity and compared with reported PCR-confirmed COVID-19 cases.
Results: Adjusted seroprevalence was 0.5% (95% confidence interval (CI): 0.1–1.5) from 27 March–30 April, 1.5% (95% CI: 0.7–2.2) from 26–31 May, and 1.1% (95% CI: 0.8–1.3) from 5–30 June 2020. Adjusted estimates were highest in individuals aged ≥ 60 years in March–April (1.3%; 95% CI: 0.2–4.6), in those aged 20–59 years in May (2.1%; 95% CI: 0.8–3.4) and in those aged ≥ 60 years in June (1.6%; 95% CI: 1.1–2.1). Regional seroprevalence varied and was highest in Toronto in March–April (0.9%; 95% CI: 0.1–3.1) and May (3.2%; 95% CI: 1.0–5.3), and in Toronto (1.5%; 95% CI: 0.9–2.1) and Central East (1.5%; 95% CI: 1.0–2.0) in June. We estimate that COVID-19 cases detected by PCR in Ontario underestimated SARS-CoV-2 infections by a factor of 4.9.
Conclusions: Our results indicate low population seroprevalence in Ontario, suggesting that public health measures were effective at limiting the spread of SARS-CoV-2 during the first pandemic wave.
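The adjustment for test sensitivity and specificity described in the Methods is typically done with the Rogan–Gladen estimator. A minimal sketch in Python, assuming that standard correction (the study's exact procedure may differ); the input values below are illustrative, not the paper's data:

```python
# Rogan-Gladen adjustment of apparent (crude) prevalence for imperfect
# test sensitivity and specificity. Standard estimator for this kind of
# correction; shown here as an assumption, not the paper's exact method.

def adjusted_prevalence(apparent: float, sensitivity: float, specificity: float) -> float:
    """Return the true-prevalence estimate, clamped to [0, 1]."""
    adjusted = (apparent + specificity - 1.0) / (sensitivity + specificity - 1.0)
    return min(max(adjusted, 0.0), 1.0)

# Illustrative values only: 1.2% crude positivity with a 90%-sensitive,
# 99.5%-specific assay yields roughly 0.78% adjusted prevalence.
print(adjusted_prevalence(0.012, sensitivity=0.90, specificity=0.995))
```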


Author(s): Adireddi Paradesi Naidu, Chitralekha Saikumar, G. Sumathi, Kalavathy Victor, N. S. Muthiah

Background: The incidence of dengue hemorrhagic fever and dengue shock syndrome can be reduced by diagnosing dengue early and initiating treatment promptly. This study was conducted to compare the results of the NS1 antigen rapid test and ELISA in clinically suspected dengue patients.
Materials and Methods: This comparative study included 100 patients who presented with a clinical history of dengue. At the microbiology laboratory, serum from all patients was tested for NS1 antigen by rapid test and by ELISA. Sensitivity and specificity of the NS1 antigen rapid test were calculated with ELISA as the comparator.
Results: Of the 100 serum samples collected from suspected dengue cases in and around Anantapuramu, 30 (30%) were positive by ELISA and 28 (28%) by the rapid diagnostic test. Compared with ELISA, the NS1 rapid test had a sensitivity of 93.33% and a specificity of 98.57%.
Conclusion: ELISA was superior for the diagnosis of dengue, and improvement in the sensitivity of RDTs is recommended.
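The sensitivity and specificity figures above follow from a standard 2×2 comparison against the reference test. A minimal sketch, with illustrative counts chosen to reproduce the reported 93.33% (28/30) and 98.57% (69/70); the paper's actual contingency table is not given in the abstract:

```python
# Sensitivity and specificity of an index test (NS1 rapid test) against
# a reference standard (ELISA), from a 2x2 table. Counts are illustrative.

def sens_spec(tp: int, fn: int, fp: int, tn: int) -> tuple[float, float]:
    sensitivity = tp / (tp + fn)   # reference-positives correctly detected
    specificity = tn / (tn + fp)   # reference-negatives correctly ruled out
    return sensitivity, specificity

sens, spec = sens_spec(tp=28, fn=2, fp=1, tn=69)
print(f"sensitivity={sens:.2%} specificity={spec:.2%}")  # 93.33% / 98.57%
```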


Author(s): Huanyu Wang, Sophonie Jean, Sarah A. Wilson, Jocelyn M. Lucyshyn, Sean McGrath, ...
Keyword(s): N Gene

2021 · Vol 11 (1)
Author(s): Jiali Yu, Yiduo Huang, Zuo-Jun Shen

Abstract: Population screening played a substantial role in safely reopening the economy and avoiding new outbreaks of COVID-19. PCR-based pooled screening makes it possible to test the population with limited resources by pooling multiple individual samples. Our study compared different population-wide screening methods as transmission-mitigating interventions, including pooled PCR, individual PCR, and antigen screening. Incorporating the testing-isolation process and individual-level viral load trajectories into an epidemic model, we further studied the impact of testing and isolation on test sensitivity. Results show that the testing-isolation process could maintain a stable test sensitivity during the outbreak by removing most infected individuals, especially during the epidemic decline. Moreover, we compared the efficiency, accuracy, and cost of the different screening methods during the pandemic. Our results show that PCR-based pooled screening is cost-effective in reversing the pandemic at low prevalence. When the prevalence is high, PCR-based pooled screening may not stop the outbreak. In contrast, antigen screening with sufficient frequency could reverse the epidemic, despite the high cost and the large number of false positives in the screening process.
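The efficiency gain from pooled PCR at low prevalence comes from the arithmetic of two-stage (Dorfman) pooling. A minimal sketch of that core calculation only; the study's model additionally incorporates viral-load trajectories and the testing-isolation process, which this does not reproduce:

```python
# Expected tests per person under two-stage (Dorfman) pooled PCR,
# assuming independent infections at prevalence p and pools of size n:
# one pooled test per n people, plus n individual retests whenever the
# pool is positive. Pooling is attractive only while p is small.

def tests_per_person(p: float, n: int) -> float:
    p_pool_positive = 1.0 - (1.0 - p) ** n
    return 1.0 / n + p_pool_positive

for p in (0.001, 0.01, 0.05, 0.2):
    best = min(range(2, 51), key=lambda n: tests_per_person(p, n))
    print(f"p={p:>5}: optimal pool size {best}, "
          f"{tests_per_person(p, best):.3f} tests/person")
```

At 0.1% prevalence a pool of ~32 needs only a few hundredths of a test per person, while at 20% prevalence the saving nearly vanishes, consistent with the abstract's conclusion that pooled screening is cost-effective mainly at low prevalence.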


2021 · Vol 8 (Supplement_1) · pp. S348–S349
Author(s): Meghan Linder, Sarah Humphrey-King, Rebecca Pierce, Sheri L Hearn, Melissa Sutton, ...

Abstract
Background: Long-term care facilities (LTCFs) are at high risk for severe COVID-19 outbreaks due to their congregate nature and vulnerable population. Oregon Health Authority (OHA) deployed point-of-care antigen (Ag) tests to promptly identify COVID-19 cases in LTCFs. However, their performance in identifying vaccine breakthrough cases has not been evaluated.
Methods: During 2/25/21–5/25/21, OHA supported testing of residents and staff for two outbreaks at a single LTCF. Paired nasal swabs were collected and tested for SARS-CoV-2 by the CDC Influenza SARS-CoV-2 Multiplex PCR Assay (molecular test) and the Abbott BinaxNOW COVID-19 Ag Card (Ag test) twice weekly during the outbreaks. Participants were considered fully vaccinated if ≥ 14 days had passed since completion of a vaccine series; all others were deemed unvaccinated. A vaccine breakthrough case was defined as a positive Ag or molecular test from a fully vaccinated person's specimen. Performance characteristics of the Ag test were assessed, with the molecular test as the reference standard. Cycle threshold (Ct) values were compared by one-sided independent t-tests.
Results: 94 unvaccinated residents and staff provided 563 paired samples; SARS-CoV-2 was detected in 21 (12 by Ag and molecular test, 6 by molecular test only, 3 by Ag test only), yielding an Ag test sensitivity of 66.7% (95% CI: 43.8–83.7%) and specificity of 99.4% (95% CI: 98.4–99.8%). Mean Ct values were higher for specimens positive by PCR but negative by Ag than for those positive by both (30.0 vs. 20.7, P < .01). 81 vaccinated persons provided 925 paired samples; SARS-CoV-2 was detected in 5 (1 by Ag and molecular test, 4 by molecular test only), yielding an Ag test sensitivity of 20% (95% CI: 3.6–62.5%) and specificity of 100% (95% CI: 99.6–100%). Mean Ct values for specimens from vaccinated cases were higher than those from unvaccinated cases (30.2 vs. 23.8, P < .05). The lone Ag-positive breakthrough case had a Ct of 20; all others had Ct > 29.
Conclusion: Ag test performance and reduced sensitivity on specimens with high Ct values in this population are consistent with published data. Molecular testing maximizes identification of vaccine breakthrough cases. More studies are needed to estimate the proportion of breakthrough cases missed by Ag testing and their risk of transmitting the virus in LTCFs.
Disclosures: All authors: no reported disclosures.
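The abstract does not name its confidence-interval method, but the reported intervals (e.g., 43.8–83.7% around a sensitivity of 12/18 = 66.7%) are consistent with Wilson score intervals. A minimal sketch assuming that method:

```python
# Wilson score 95% CI for a binomial proportion. Assumed interval
# method; it reproduces the CIs reported in the abstract.
from math import sqrt

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    p = successes / n
    denom = 1 + z**2 / n
    centre = p + z**2 / (2 * n)
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (centre - half) / denom, (centre + half) / denom

lo, hi = wilson_ci(12, 18)   # unvaccinated: 12 of 18 PCR-positives detected by Ag
print(f"66.7% (95% CI: {lo:.1%}-{hi:.1%})")   # 43.8%-83.7%
lo, hi = wilson_ci(1, 5)     # vaccinated: 1 of 5 detected by Ag
print(f"20.0% (95% CI: {lo:.1%}-{hi:.1%})")   # 3.6%-62.5%
```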


2021 · Vol 12
Author(s): Valentin Parvu, Devin S. Gary, Joseph Mann, Yu-Chih Lin, Dorsey Mills, ...

Tests that detect the presence of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) antigen in clinical specimens from the upper respiratory tract can provide a rapid means of coronavirus disease 2019 (COVID-19) diagnosis and help identify individuals who may be infectious and should isolate to prevent SARS-CoV-2 transmission. This systematic review assesses the diagnostic accuracy of SARS-CoV-2 antigen detection in symptomatic and asymptomatic individuals compared with quantitative reverse transcription polymerase chain reaction (RT-qPCR) and summarizes antigen test sensitivity using meta-regression. In total, 83 studies were included that compared SARS-CoV-2 rapid antigen-based lateral flow testing (RALFT) to RT-qPCR. Generally, the quality of the evaluated studies was inconsistent; nevertheless, the overall sensitivity of RALFT was determined to be 75.0% (95% confidence interval: 71.0–78.0). RALFT sensitivity was higher for symptomatic than for asymptomatic individuals, and higher for symptomatic individuals tested within 7 days of symptom onset than for those tested later in the course of symptoms. Viral load was found to be the most important factor determining SARS-CoV-2 antigen test sensitivity. Other design factors, such as specimen storage and anatomical collection site, also affect RALFT performance. Both RALFT and RT-qPCR achieve high sensitivity when compared against SARS-CoV-2 viral culture.
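A common first step behind a summary sensitivity like the 75.0% above is pooling per-study estimates on the logit scale. A minimal fixed-effect sketch with illustrative counts; the review's actual meta-regression includes covariates such as symptom status and days from onset, which this does not reproduce:

```python
# Fixed-effect inverse-variance pooling of per-study sensitivities on
# the logit scale. Study counts below are illustrative, not the
# review's data; the paper's meta-regression is more elaborate.
import numpy as np

def pooled_sensitivity(tp: np.ndarray, fn: np.ndarray) -> float:
    tp, fn = tp + 0.5, fn + 0.5                # continuity correction
    logit = np.log(tp / fn)                    # logit of each study's sensitivity
    var = 1 / tp + 1 / fn                      # approximate variance of the logit
    pooled = np.sum(logit / var) / np.sum(1 / var)
    return 1 / (1 + np.exp(-pooled))           # back-transform to a proportion

tp = np.array([45, 80, 120])                   # true positives per study (illustrative)
fn = np.array([15, 20, 30])                    # false negatives per study (illustrative)
print(f"pooled sensitivity ~ {pooled_sensitivity(tp, fn):.1%}")
```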


2021 · Vol 2
Author(s): Colby T. Ford, Gezahegn Solomon Alemayehu, Kayla Blackburn, Karen Lopez, Cheikh Cambel Dieng, ...

Malaria, predominantly caused by Plasmodium falciparum, poses one of the largest and most durable health threats in the world. Previously, simple regression-based models were created to characterize malaria rapid diagnostic test performance, though these models often include only a few genetic factors. Specifically, the Baker et al. (2005) model uses two particular repeat types in histidine-rich protein 2 (PfHRP2) to describe a P. falciparum infection, though the efficacy of this model has waned in recent years due to genetic mutations in the parasite. In this work, we used a dataset of 100 P. falciparum PfHRP2 genetic sequences collected in Ethiopia and derived a larger set of motif repeat matches for use in generating a series of diagnostic machine learning models. Here we show that using additional and different motif repeats in more sophisticated machine learning methods is effective in characterizing PfHRP2 diversity. Furthermore, we use machine learning model explainability methods to highlight which of the repeat types are most important with regard to rapid diagnostic test sensitivity, thereby showcasing a novel methodology for identifying potential targets for future versions of rapid diagnostic tests.
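The modelling idea, predicting RDT detectability from counts of PfHRP2 repeat motifs and then ranking motif importance, can be sketched as below. The motif names, data and model here are hypothetical stand-ins, not the paper's dataset, feature set or explainability method:

```python
# Sketch: classify sequences as RDT-detectable from motif-repeat counts,
# then rank motifs by importance. Everything here is synthetic; the
# paper's 100 Ethiopian sequences and its explainability approach are
# not reproduced.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
motifs = ["type1", "type2", "type7", "type24"]    # illustrative repeat types
X = rng.integers(0, 15, size=(100, len(motifs)))  # motif counts per sequence
y = (X[:, 1] + X[:, 3] > 12).astype(int)          # synthetic "detectable" label

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
for name, imp in sorted(zip(motifs, model.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name}: importance {imp:.2f}")
```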


Author(s): Gregory A Kline, Jessica Boyd, Martin Hyrcza, Daniele Pacaud, Janice L Pasieka, ...

2021 · Vol 2
Author(s): Heidi Albert, Benn Sartorius, Paul R. Bessell, Dziedzom K. de Souza, Sidharth Rupani, ...

Background: Onchocerciasis (river blindness) is a filarial disease targeted for elimination of transmission. However, challenges exist in implementing effective diagnostic and surveillance strategies at various stages of elimination programs. To address these challenges, we used a network data analytics approach to identify optimal diagnostic scenarios for onchocerciasis elimination mapping (OEM).
Methods: The diagnostic network optimization (DNO) method was used to model the implementation of the old Ov16 rapid diagnostic test (RDT) and of new RDTs in development for OEM under different testing strategy scenarios with varying testing locations, test performance and disease prevalence. Environmental suitability scores (ESS) based on machine learning algorithms were developed to identify areas at risk of transmission and used to select sites for OEM in Bandundu region in the Democratic Republic of the Congo (DRC) and Uige province in Angola. Test sensitivity and specificity ranges were obtained from the literature for the existing RDT, and from characteristics defined in the target product profile for the new RDTs. Sourcing and transportation policies were defined, and costing information was obtained from onchocerciasis programs. Various scenarios were created to test different state configurations. The actual demand scenarios represented the disease prevalence in implementation units (IUs) according to the ESS, while the counterfactual scenarios (conducted only in the DRC) were based on adapted prevalence estimates, generating prevalence close to the statistical decision thresholds (5% and 2%) to account for variability in field observations. The number of correctly classified IUs per scenario was estimated and key cost drivers were identified.
Results: In both Bandundu and Uige, the sites selected based on ESS had high predicted onchocerciasis prevalence (>10%). Thus, in the actual demand scenarios in both Bandundu and Uige, the old Ov16 RDT correctly classified all 13 and 11 IUs, respectively, as requiring community-directed treatment with ivermectin (CDTi). In the counterfactual scenarios in Bandundu, the new RDTs with higher specificity correctly classified IUs more cost-effectively. The new RDT with the highest specificity (99.8%) correctly classified all 13 IUs. However, very high specificity (e.g., 99.8%) coupled with imperfect sensitivity can result in many false negative results (missed decisions to start mass drug administration, MDA) at the 5% statistical decision threshold (the decision rule to start MDA). This effect can be negated by reducing the statistical decision threshold to 2%. Across all scenarios, the need for second-stage sampling significantly drove program costs upwards. The best-performing testing strategies with new RDTs were more expensive than testing with existing tests due to the need for second-stage sampling, but this was offset by the cost of incorrect classification of IUs.
Conclusion: The new RDTs modelled added the most value in areas with variable disease prevalence, with the most benefit in IUs near the statistical decision thresholds. Based on the evaluations in this study, DNO could be used to guide the development of new RDTs with defined sensitivities and specificities. While test sensitivity is a minor driver of whether an IU is identified as positive, higher specificities are essential. Further, these models could be used to explore the development and optimization of new tools for other neglected tropical diseases.
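The interaction the Results describe, where high specificity coupled with imperfect sensitivity can pull observed positivity below the 5% decision threshold, can be illustrated with a single-stage binomial sketch. The survey size, prevalence and test characteristics below are illustrative assumptions; the DNO model itself also covers two-stage sampling, sourcing and cost, which this does not:

```python
# Probability that an IU's survey exceeds the statistical decision
# threshold, given true prevalence and test sensitivity/specificity.
# Simplified single-stage binomial model with illustrative inputs.
from math import comb

def p_classified_positive(n: int, prev: float, sens: float, spec: float,
                          threshold: float) -> float:
    p_obs = prev * sens + (1 - prev) * (1 - spec)  # expected test positivity
    k_min = int(threshold * n) + 1                 # positives needed to exceed threshold
    return sum(comb(n, k) * p_obs**k * (1 - p_obs)**(n - k)
               for k in range(k_min, n + 1))

# Illustrative: 500 people surveyed in an IU truly at 6% prevalence,
# tested with a hypothetical 60%-sensitive, 99.8%-specific RDT. The
# probability of (correctly) deciding to start MDA at the 5% threshold
# is small; lowering the threshold to 2% recovers the decision.
print(p_classified_positive(500, prev=0.06, sens=0.60, spec=0.998, threshold=0.05))
print(p_classified_positive(500, prev=0.06, sens=0.60, spec=0.998, threshold=0.02))
```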

