A preliminary study on analytical performance of serological assay for SARS-CoV-2 IgM/IgG and application in clinical practice

Author(s):  
Quan Zhou ◽  
Danping Zhu ◽  
Huacheng Yan ◽  
Jingwen Quan ◽  
Zhenzhan Kuang ◽  
...  

Abstract Objective: To investigate the performance of serological testing and the dynamics of serum antibodies during the progression of SARS-CoV-2 infection. Methods: A total of 419 patients were enrolled, including 19 confirmed cases and 400 patients from fever clinics. Their serial serum samples, collected during hospitalization, were measured for IgM and IgG against SARS-CoV-2 using a gold immunochromatographic assay and chemiluminescence immunoassay. We investigated whether thermal inactivation could affect the results of antibody detection. The dynamics of antibodies with disease progression and factors producing false-positive antibody results were also analyzed. Results: The positive rate of IgG detection was 91.67% and 83.33% using the two chemiluminescence immunoassays, respectively. However, the IgM positive rate was markedly lower, possibly due to the lack of blood samples from the early stages of the disease. The chemiluminescence immunoassay had a favorable but narrow linear range. Our work showed increased IgG values in sera from virus-negative patients, and four negative samples became weakly IgG-positive after thermal inactivation. Our data showed that the specificity of the combined viral N+S proteins was higher than that of a single antigen. Contrary to the general assumption that IgM appears earlier than IgG, there was no fixed chronological order of IgM and IgG seroconversion in COVID-19 patients. Antibodies were difficult to detect in asymptomatic patients, suggesting that their low viral loads were insufficient to elicit an immune response. Analysis of common interferents, such as rheumatoid factor, in three IgG false-positive patients showed that the false positives were not caused by these interfering substances or by antigenic cross-reaction. Conclusions: Serological testing is an effective means of detecting SARS-CoV-2 infection using both chemiluminescence immunoassay and gold immunochromatographic assay. Chemiluminescence immunoassay against multiple antigens has clear advantages but still needs improvement in reducing false positives.

2014 ◽  
Vol 644-650 ◽  
pp. 3338-3341 ◽  
Author(s):  
Guang Feng Guo

Throughout the 30-year development of the Intrusion Detection System, problems such as high false-positive rates have plagued users. Therefore, the ontology and context verification based intrusion detection model (OCVIDM) was put forward to connect the description of attack signatures and context effectively. The OCVIDM establishes a knowledge base of intrusion detection ontology that serves as the center of an efficient platform for filtering false alerts, realizing automatic validation of alarms and autonomous judgment of real attacks, so as to filter out non-relevant positive alerts and reduce false positives.


Author(s):  
Yafang Wan ◽  
Zhijie Li ◽  
Kun Wang ◽  
Tian Li ◽  
Pu Liao

Objectives: The purpose of the current study was to evaluate the analytical performance of seven kits for detecting IgM/IgG antibodies against coronavirus (SARS-CoV-2) using four chemiluminescence immunoassay systems. Methods: Fifty patients diagnosed with SARS-CoV-2 infection and 130 controls without coronavirus infection from the General Hospital of Chongqing were enrolled in the current retrospective study. Four chemiluminescence immunoassay systems, comprising seven IgM/IgG antibody detection kits for SARS-CoV-2 (A_IgM, A_IgG, B_IgM, B_IgG, C_IgM, C_IgG and D_Ab), were employed to detect antibody concentrations. The chi-square test, the receiver operating characteristic (ROC) curve and Youden's index were used to verify the cut-off value of each detection system. Results: The repeatability verification results of the A, B, C and D systems were all acceptable. D_Ab performed best (92% sensitivity and 99.23% specificity), and B_IgM performed worse than the other systems. Except for the A_IgM and C_IgG systems, the optimal diagnostic thresholds of the kits were inconsistent with their recommended cut-off values. B_IgM had the worst AUC, and C_IgG had the best diagnostic accuracy. More importantly, the B_IgG system had the highest false-positive rate when testing patients with AIDS, tumours and pregnancies. The A_IgM system showed the highest false-positive rate among elderly individuals over 90 years old. COVID-19 IgM/IgG antibody test systems exhibit performance differences. Conclusions: The Innodx Biotech Total Antibody serum diagnosis kit is the most reliable detection system for anti-SARS-CoV-2 antibodies, and can be used together with nucleic acid tests as an alternative method for SARS-CoV-2 detection.
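The cut-off verification described above rests on Youden's index (J = sensitivity + specificity − 1): the optimal threshold is the one maximizing J over the ROC curve. A minimal sketch with invented antibody readings (the data and candidate cut-offs are illustrative, not the study's):

```python
def youden_cutoff(positives, negatives, candidate_cutoffs):
    """Pick the cutoff maximizing Youden's J = sensitivity + specificity - 1."""
    best_cutoff, best_j = None, float("-inf")
    for c in candidate_cutoffs:
        sensitivity = sum(v >= c for v in positives) / len(positives)
        specificity = sum(v < c for v in negatives) / len(negatives)
        j = sensitivity + specificity - 1
        if j > best_j:
            best_cutoff, best_j = c, j
    return best_cutoff, best_j

# Illustrative antibody readings (arbitrary units), not the study's data
infected = [1.8, 2.5, 3.1, 0.9, 4.2, 2.2]
controls = [0.1, 0.3, 0.2, 0.8, 0.5, 1.0]
cutoff, j = youden_cutoff(infected, controls, sorted(infected + controls))
print(cutoff, round(j, 3))  # the cutoff that best separates the two groups
```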


2020 ◽  
Vol 30 (12) ◽  
pp. 1851-1855
Author(s):  
Sruti Rao ◽  
M. B. Goens ◽  
Orrin B. Myers ◽  
Emilie A. Sebesta

Abstract Aim: To determine the false-positive rate of pulse oximetry screening at moderate altitude, presumed to be elevated compared with sea-level values, and to assess the change in false-positive rate over time. Methods: We retrospectively analysed 3548 infants in the newborn nursery in Albuquerque, New Mexico (elevation 5400 ft), from July 2012 to October 2013. Universal pulse oximetry screening guidelines were employed after 24 hours of life but before discharge. Newborns between 36 and 36 6/7 weeks of gestation weighing >2 kg, and babies >37 weeks weighing >1.7 kg, were included in the study. Log-binomial regression was used to assess the change in the probability of false positives over time. Results: Of the 3548 patients analysed, there was one true positive with a posteriorly malaligned ventricular septal defect and an interrupted aortic arch. Among the 93 false positives, the mean pre- and post-ductal saturations were lower, at 92% and 90%, respectively. The false-positive rate was 3.5% before April 2013 and decreased to 1.5% thereafter. There was a significant decrease in the false-positive rate (p = 0.003, slope coefficient = −0.082, standard error of coefficient = 0.023), with the relative risk of a false positive decreasing by a factor of 0.92 (95% CI 0.88–0.97) per month. Conclusion: This is the first study in Albuquerque, New Mexico, reporting an elevated false-positive rate of 1.5% at moderate altitude at the end of the study, in comparison with the false-positive rate of 0.035% at sea level. Implementation of the nationally recommended universal pulse oximetry screening was associated with a high false-positive rate in the initial period, thought to result from a combination of learning curve and altitude. After the initial decline, the rate remained steadily elevated above sea-level values, indicating the dominant effect of moderate altitude.
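The reported relative risk follows directly from the log-binomial slope: RR per month = exp(slope), with the 95% CI from slope ± 1.96 × SE. A quick check with the quoted figures (the upper bound computes to 0.96 rather than the reported 0.97, presumably because the published coefficient is rounded):

```python
import math

# Recovering the reported effect size from the log-binomial slope coefficient
slope, se = -0.082, 0.023          # slope and standard error, as reported above
rr = math.exp(slope)               # relative risk of a false positive per month
ci = (math.exp(slope - 1.96 * se), math.exp(slope + 1.96 * se))
print(round(rr, 2), round(ci[0], 2), round(ci[1], 2))
```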


2019 ◽  
Vol 9 (1) ◽  
Author(s):  
Ginette Lafit ◽  
Francis Tuerlinckx ◽  
Inez Myin-Germeys ◽  
Eva Ceulemans

Abstract Gaussian Graphical Models (GGMs) are extensively used in many research areas, such as genomics, proteomics, neuroimaging, and psychology, to study the partial correlation structure of a set of variables. This structure is visualized by drawing an undirected network, in which the variables constitute the nodes and the partial correlations the edges. In many applications, it makes sense to impose sparsity (i.e., to force some of the partial correlations to zero), because sparsity is theoretically meaningful and/or improves the predictive accuracy of the fitted model. However, as we show by means of extensive simulations, state-of-the-art estimation approaches for imposing sparsity on GGMs, such as the graphical lasso, ℓ1-regularized nodewise regression, and joint sparse regression, fall short because they often yield too many false positives (i.e., partial correlations that are not properly set to zero). In this paper we present a new estimation approach that allows the false positive rate to be controlled better. Our approach consists of two steps: first, we estimate an undirected network using one of the three state-of-the-art estimation approaches; second, we detect the false positives by flagging the partial correlations that are smaller in absolute value than a threshold determined through cross-validation, and set the flagged correlations to zero. Applying this new approach to the same simulated data shows that it indeed performs better. We also illustrate our approach by using it to estimate (1) a gene regulatory network for breast cancer data, (2) a symptom network of patients with a diagnosis within the nonaffective psychotic spectrum, and (3) a symptom network of patients with PTSD.
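The second (thresholding) step can be sketched in a few lines; the matrix and the 0.05 threshold below are illustrative stand-ins for an estimated partial-correlation matrix and a cross-validated threshold:

```python
import numpy as np

def threshold_network(partial_corr, threshold):
    """Second step of the approach: zero out every partial correlation whose
    absolute value falls below the (cross-validated) threshold."""
    pruned = np.where(np.abs(partial_corr) >= threshold, partial_corr, 0.0)
    np.fill_diagonal(pruned, 0.0)
    return pruned

# Illustrative estimated partial-correlation matrix (e.g., from the graphical
# lasso): the 0.40 edge is "real", the tiny entries are likely false positives.
P = np.array([[0.0,  0.40,  0.03],
              [0.40, 0.0,  -0.02],
              [0.03, -0.02, 0.0]])
pruned = threshold_network(P, threshold=0.05)  # 0.05 stands in for a CV-chosen value
retained_edges = int((pruned != 0).sum()) // 2
print(retained_edges)  # only the 0.40 edge survives
```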


1981 ◽  
Vol 74 (1) ◽  
pp. 41-43 ◽  
Author(s):  
I G Barrison ◽  
E R Littlewood ◽  
J Primavesi ◽  
A Sharples ◽  
I T Gilmore ◽  
...  

Stools have been tested for occult gastrointestinal bleeding in 278 outpatients and 170 hospital inpatients using the Haemoccult and Haemastix methods. Seventeen outpatients (6.1%) and 42 inpatients (24.7%) were positive with the Haemoccult technique. Thirty-three outpatients (11.9%) and 93 inpatients (54.7%) were positive with the Haemastix test. Following investigation of the Haemoccult-positive patients, only 2 cases (3.4%) were considered false positives. However, the false positive rate with Haemastix was 22.9% which is unacceptable in a screening test. Haemoccult may be useful as a screening test for asymptomatic general practice patients, but a test of greater sensitivity is needed for hospital patients.


2018 ◽  
pp. 1-10
Author(s):  
Luke T. Lavallée ◽  
Rodney H. Breau ◽  
Dean Fergusson ◽  
Cynthia Walsh ◽  
Carl van Walraven

Purpose: Administrative health data can be a valuable resource for health research. Because these data are not collected for research purposes, it is imperative to measure the accuracy of the codes used to identify patients, exposures, and outcomes. Patients and Methods: Code sensitivity was determined by identifying a cohort of men with histologically confirmed prostate cancer in the Ontario Cancer Registry and linking them to the Ontario Health Insurance Plan (OHIP) to determine whether a prostate biopsy code had been claimed. Code specificity was estimated using a random sample of patients at The Ottawa Hospital for whom a prostate biopsy code was submitted to OHIP. A simulation model, which varied the code false-positive rate, true-negative rate, and proportion of code positives in the population, was created to determine specificity under a range of combinations of these parameters. Results: Between 1991 and 2012, 97,369 of 148,669 men with histologically confirmed prostate cancer in the Ontario Cancer Registry had a prostate biopsy code in OHIP within 1 week of their diagnosis (code sensitivity, 86.0%). Sensitivity increased significantly over time (from 63.8% in 1991 to 87.9% in 2012). The false-positive rate of the code for index prostate biopsies was 1.9%. The simulation model found that code specificity exceeded 95% for first prostate biopsies but was lower for secondary biopsies because of more false positives. False positives were primarily related to the placement of fiducial markers in patients who received radiotherapy. Conclusion: Administrative data in Ontario can accurately identify men who receive a prostate biopsy. The code is less accurate for secondary biopsy procedures and their sequelae.
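One plausible reading of the specificity arithmetic behind such a simulation model can be sketched as follows; the 30% code-positive proportion is an assumed input, while the 1.9% false-positive rate is the figure reported above (this is not the authors' actual model, only an illustration of how the parameters interact):

```python
def estimated_specificity(prop_code_positive, fp_rate_among_code_positives):
    """Back-of-the-envelope specificity implied by the share of code positives
    in the population and the false-positive rate among them (one plausible
    reading of the simulation model; not the authors' actual code)."""
    p, f = prop_code_positive, fp_rate_among_code_positives
    true_negatives = 1 - p      # no biopsy code and, by assumption, no biopsy
    false_positives = p * f     # biopsy code claimed but no true biopsy
    return true_negatives / (true_negatives + false_positives)

# 1.9% is the reported false-positive rate; 30% code positives is an assumption
spec = estimated_specificity(0.30, 0.019)
print(round(spec, 3))  # above 0.95, consistent with the result for first biopsies
```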


2014 ◽  
Vol 687-691 ◽  
pp. 2611-2617
Author(s):  
Hong Hai Zhou ◽  
Pei Bin Liu ◽  
Zhi Hao Jin

In this paper, a new method for network fault diagnosis, named DRNFD, is put forward, in which an "abnormal degree" is defined by a vector of probability and belief functions for a privileged process. A new formula based on Dempster's rule of combination is presented to decrease false positives. DRNFD can effectively reduce the false positive rate and non-response rate and can be applied to real-time fault diagnosis. An operational prototype system demonstrates its feasibility and the effectiveness of real-time fault diagnosis.
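The evidence combination underlying the method can be illustrated with a minimal, generic implementation of Dempster's rule (the mass functions below are invented for illustration; the paper's actual formula modifies this rule and is not reproduced here):

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions whose focal
    elements are frozensets: multiply masses over intersecting pairs and
    renormalize by the non-conflicting mass."""
    combined, conflict = {}, 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2
    k = 1.0 - conflict
    return {s: w / k for s, w in combined.items()}

# Two evidence sources about whether a monitored node is faulty (illustrative)
FAULT, OK = frozenset({"fault"}), frozenset({"ok"})
THETA = FAULT | OK                      # the full frame of discernment
m1 = {FAULT: 0.6, THETA: 0.4}
m2 = {FAULT: 0.7, THETA: 0.3}
m = dempster_combine(m1, m2)
print(round(m[FAULT], 2))               # belief in "fault" after combination
```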


Author(s):  
Shangxin Yang ◽  
Nicholas Stanzione ◽  
Daniel Z Uslan ◽  
Omai B Garner ◽  
Annabelle de St Maurice

Abstract Objectives: Inconclusive severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) polymerase chain reaction (PCR) results cause confusion and delay in infection prevention precautions and patient management. We aimed to develop a quantitative algorithm to assess and interpret these inconclusive results. Methods: We created a score-based algorithm combining laboratory, clinical, and epidemiologic data to evaluate 69 cases with inconclusive coronavirus disease 2019 (COVID-19) PCR results from the Centers for Disease Control and Prevention (CDC) assay (18 cases) and the TaqPath assay (51 cases). Results: We determined 5 (28%) of 18 (CDC assay) and 20 (39%) of 51 (TaqPath assay) cases to be false positives. Lowering the cycle threshold cutoff from 40 to 37 in the TaqPath assay resulted in a dramatic reduction of the false-positive rate to 14%. We also showed that testing of asymptomatic individuals is associated with a significantly higher probability of a false-positive result. Conclusions: A substantial percentage of inconclusive SARS-CoV-2 PCR results can be false positives, especially among asymptomatic patients. The quantitative algorithm we created was shown to be effective and could provide a useful tool for clinicians and hospital epidemiologists to interpret inconclusive COVID-19 PCR results and provide clinical guidance when additional PCR or antibody test results are available.
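The cycle-threshold adjustment discussed above can be sketched as a simple rule; the Ct values and the comparison below are hypothetical illustrations, not the published algorithm, which also weighs clinical and epidemiologic scores:

```python
def classify_ct(ct_value, cutoff=37.0):
    """Hypothetical reclassification rule: amplification at or below the Ct
    cutoff counts as positive. The study found that lowering the cutoff from
    40 to 37 sharply reduced false positives; the full score-based algorithm
    also incorporates clinical and epidemiologic data, not modeled here."""
    return "positive" if ct_value <= cutoff else "negative"

# Illustrative Ct values from inconclusive results (not the study's data)
inconclusive_cts = [34.1, 38.5, 36.9, 39.8]
labels = [classify_ct(ct) for ct in inconclusive_cts]
print(labels)
```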


Author(s):  
Pamela Reinagel

Abstract After an experiment has been completed and analyzed, a trend may be observed that is "not quite significant". Sometimes in this situation researchers incrementally grow their sample size N in an effort to achieve statistical significance. This is especially tempting when samples are very costly or time-consuming to collect, such that collecting an entirely new sample larger than N (the statistically sanctioned alternative) would be prohibitive. Such post hoc sampling, or "N-hacking", is condemned because it leads to an excess of false positive results. Here Monte Carlo simulations are used to show why and how incremental sampling causes false positives, but also to challenge the claim that it necessarily produces alarmingly high false positive rates. In a parameter regime representative of practice in many research fields, simulations show that the inflation of the false positive rate is modest and easily bounded. But the effect on the false positive rate is only half the story. What many researchers really want to know is the effect N-hacking would have on the likelihood that a positive result is a real, replicable effect: the positive predictive value (PPV). This question has not been considered in the reproducibility literature. The answer depends on the effect size and the prior probability of an effect. Although in practice these values are not known, simulations show that for a wide range of values, the PPV of results obtained by N-hacking is in fact higher than that of non-incremented experiments of the same sample size and statistical power. This is because the increase in false positives is more than offset by the increase in true positives. Therefore, in many situations, adding a few samples to shore up a nearly significant result is in fact statistically beneficial.
In conclusion, if samples are added after an initial hypothesis test this should be disclosed, and if a p value is reported it should be corrected. But, contrary to widespread belief, collecting additional samples to resolve a borderline p value is not invalid, and can confer previously unappreciated advantages for efficiency and positive predictive value.
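The incremental-sampling simulation described can be sketched under the null hypothesis, with a simplified known-variance z-test standing in for the paper's actual test; the initial sample size, the increment, and the "not quite significant" window are illustrative choices:

```python
import math
import random

def pvalue_two_sided_z(sample):
    """Two-sided p-value for mean = 0, assuming known unit variance
    (a simplified stand-in for the test used in the paper)."""
    z = sum(sample) / math.sqrt(len(sample))
    return math.erfc(abs(z) / math.sqrt(2))

def run_once(n0, n_add, rng):
    """One simulated experiment under the null: test, and if the result is
    'not quite significant', top up the sample once and re-test."""
    data = [rng.gauss(0, 1) for _ in range(n0)]
    p = pvalue_two_sided_z(data)
    if 0.05 < p < 0.10:                       # borderline: add samples, re-test
        data += [rng.gauss(0, 1) for _ in range(n_add)]
        p = pvalue_two_sided_z(data)
    return p < 0.05

rng = random.Random(1)
trials = 20_000
fp = sum(run_once(20, 10, rng) for _ in range(trials)) / trials
print(fp)  # modestly above the nominal 0.05, in line with the abstract's claim
```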


2015 ◽  
Author(s):  
David M Rocke ◽  
Luyao Ruan ◽  
Yilun Zhang ◽  
J. Jared Gossett ◽  
Blythe Durbin-Johnson ◽  
...  

Motivation: An important property of a valid method for testing for differential expression is that the false positive rate should at least roughly correspond to the p-value cutoff, so that if 10,000 genes are tested at a p-value cutoff of 10^-4, and all the null hypotheses are true, then only about 1 gene should be declared significantly differentially expressed. We tested this by resampling from existing RNA-Seq data sets and also by matched negative binomial simulations. Results: Methods that rely strongly on a negative binomial model, such as edgeR, DESeq, and DESeq2, show large numbers of false positives both in the resampled real-data case and in the simulated negative binomial case. This also occurs with a negative binomial generalized linear model function in R. Methods that use only the variance function, such as limma-voom, do not show excessive false positives, as is also the case with a variance-stabilizing transformation followed by linear model analysis with limma. The excess false positives are likely caused by apparently small biases in the estimation of the negative binomial dispersion and, perhaps surprisingly, occur mostly when the mean and/or the dispersion is high, rather than for low-count genes.
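The calibration property being tested here (under the null, roughly a fraction alpha of genes should pass a cutoff of alpha) can be sketched with illustrative normal data standing in for RNA-Seq counts and a simple z-test standing in for a differential-expression method:

```python
import math
import random

def two_sample_z_p(x, y):
    """Two-sided p-value comparing two means, unit variance assumed
    (a stand-in for a real differential-expression test)."""
    z = (sum(x) / len(x) - sum(y) / len(y)) / math.sqrt(1 / len(x) + 1 / len(y))
    return math.erfc(abs(z) / math.sqrt(2))

rng = random.Random(0)
n_genes, n_rep, alpha = 2_000, 5, 0.01
hits = 0
for _ in range(n_genes):
    x = [rng.gauss(0, 1) for _ in range(n_rep)]   # condition A (null: no effect)
    y = [rng.gauss(0, 1) for _ in range(n_rep)]   # condition B, same distribution
    hits += two_sample_z_p(x, y) < alpha
print(hits, "expected ~", n_genes * alpha)  # a calibrated test hits about alpha * n_genes
```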

