Is it appropriate to use fixed assay cut-offs for estimating seroprevalence?

2015 · Vol. 144 (4) · pp. 887–895
Author(s): G. Kafatos, N. J. Andrews, K. J. McConway, P. A. C. Maple, K. Brown, et al.

SUMMARY: Population seroprevalence can be estimated from serosurveys by classifying quantitative measurements into positives (past infection/vaccinated) or negatives (susceptible) according to a fixed assay cut-off. The choice of assay cut-off has a direct impact on seroprevalence estimates. A time-resolved fluorescence immunoassay (TRFIA) was used to test exposure to human parvovirus 4 (HP4). Seroprevalence estimates were obtained after applying the diagnostic assay cut-off under different scenarios using simulations. Alternative methods for estimating assay cut-offs were proposed based on mixture modelling, with component distributions for the past infection/vaccinated and susceptible populations. Seroprevalence estimates were compared to those obtained directly from the data using mixture models. Simulation results showed that when there was good distinction between the underlying populations, all methods gave seroprevalence estimates close to the true value. When the underlying components overlapped substantially, the diagnostic assay cut-off generally gave the most biased estimates; however, the mixture model methods also gave biased estimates as a result of poor model fit. In conclusion, fixed cut-offs often produce biased estimates, but they also have advantages over other methods such as mixture models. The bias can be reduced by using assay cut-offs estimated specifically for seroprevalence studies.
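To make the bias mechanism concrete, the following is a minimal sketch (not the authors' TRFIA analysis; the component parameters, sample size and cut-off value are illustrative assumptions) of how a fixed cut-off applied to two overlapping measurement distributions distorts a seroprevalence estimate:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative assumptions (not taken from the paper): two overlapping
# log-titre distributions and an arbitrary fixed assay cut-off.
true_prev = 0.30                      # true proportion past infection/vaccinated
n = 5000
infected = rng.random(n) < true_prev
log_titre = np.where(infected,
                     rng.normal(3.0, 1.0, n),   # past infection/vaccinated component
                     rng.normal(1.0, 1.0, n))   # susceptible component

cut_off = 2.5                         # hypothetical fixed assay cut-off
est_prev_cutoff = np.mean(log_titre >= cut_off)

print(f"true prevalence:        {true_prev:.3f}")
print(f"fixed cut-off estimate: {est_prev_cutoff:.3f}")
# With heavily overlapping components, misclassification on the two sides of
# the cut-off does not cancel, so the fixed cut-off estimate is biased.
```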

2018 · Vol. 9 (10) · pp. 2102–2114
Author(s): Jonas Knape, Debora Arlt, Frédéric Barraquand, Åke Berg, Mathieu Chevalier, et al.

2019 · Vol. 147
Author(s): L. Léon, J. Pillonel, M. Jauffret-Roustide, F. Barin, Y. Le Strat

Abstract: Seroprevalence estimation from cross-sectional serosurveys can be challenging when biological cut-offs or limits of detection are inadequate or unknown. In recent years, diagnostic assay cut-offs, fixed assay cut-offs and more flexible approaches such as mixture modelling have been proposed to classify quantitative biological measurements into a positive or negative status. Our objective was to estimate the prevalence of anti-HCV antibodies among drug users (DU) in France in 2011 using a biological test performed on dried blood spots (DBS) collected during a cross-sectional serosurvey. However, in 2011 no cut-off value was available for DBS, and the values established for serum or plasma could not be used because the DBS value is not necessarily the same. Accordingly, we applied a two-component mixture model with age-dependent mixing proportions estimated using penalised splines. The component densities were assumed to be log-normally distributed and were estimated in a Bayesian framework. Anti-HCV prevalence among DU in France was estimated at 43.3% and increased with age. Our method provides estimates of age-dependent prevalence from DBS without requiring a specified biological cut-off value.
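The paper's model is Bayesian, with age-dependent mixing proportions estimated through penalised splines; the sketch below shows only a stripped-down, non-Bayesian backbone of the idea, a two-component log-normal mixture fitted by EM whose estimated mixing weight serves as the prevalence. All data and parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def em_lognormal_mixture(x, n_iter=200):
    """EM for a two-component log-normal mixture; returns the weight of the
    higher-mean ('positive') component, interpretable as a prevalence."""
    y = np.log(x)                               # log-normal -> normal on log scale
    # crude initialisation from data quantiles
    mu = np.quantile(y, [0.25, 0.75])
    sd = np.array([y.std(), y.std()])
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibilities of each component for each observation
        dens = w * norm.pdf(y[:, None], mu, sd)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: update weights, means, standard deviations
        nk = r.sum(axis=0)
        w = nk / len(y)
        mu = (r * y[:, None]).sum(axis=0) / nk
        sd = np.sqrt((r * (y[:, None] - mu) ** 2).sum(axis=0) / nk)
    return w[np.argmax(mu)]

# usage with simulated assay-like data (purely illustrative)
rng = np.random.default_rng(0)
x = np.concatenate([rng.lognormal(0.0, 0.5, 700),    # negatives
                    rng.lognormal(1.5, 0.5, 300)])   # positives
print(f"estimated prevalence: {em_lognormal_mixture(x):.3f}")   # close to 0.30
```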


2019 · Vol. 27 (4) · pp. 556–571
Author(s): Laurence Brandenberger

Relational event models are becoming increasingly popular for modeling the temporal dynamics of social networks. Because they combine survival analysis with network model terms, standard methods of assessing model fit are not suitable for determining whether a model is specified well enough to prevent biased estimates. This paper tackles the problem by presenting a simple procedure for model-based simulation of relational events. Predictions are made from survival probabilities and can be used to simulate new event sequences. Comparing these simulated event sequences to the original event sequence allows for in-depth model comparison (of both parameter and model specifications) and for testing whether the model replicates network characteristics well enough to allow unbiased estimates.
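A minimal sketch of the model-based simulation idea follows, assuming a fitted model can be summarised by a constant event rate per dyad (the paper works with the fitted survival probabilities themselves, and rates would normally update over the sequence); the dyad set, rates and comparison statistic are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_sequence(rates, n_events):
    """Simulate a relational event sequence from dyadic rates: with competing
    exponential waiting times, the next event falls on dyad d with
    probability rates[d] / rates.sum()."""
    p = rates / rates.sum()
    return rng.choice(len(rates), size=n_events, p=p)

# Illustrative setup: 20 directed dyads with hypothetical fitted rates,
# standing in for the survival probabilities produced by a fitted model.
rates = rng.gamma(shape=2.0, scale=1.0, size=20)
observed = simulate_sequence(rates, n_events=500)   # placeholder for real data

def concentration(seq, n_dyads=20):
    """Share of events on the most active dyad: one simple network statistic."""
    counts = np.bincount(seq, minlength=n_dyads)
    return counts.max() / counts.sum()

# Simulate many sequences from the model and compare the statistic.
sims = [concentration(simulate_sequence(rates, 500)) for _ in range(1000)]
obs_stat = concentration(observed)
print(f"observed: {obs_stat:.3f}, simulated 95% band: "
      f"({np.percentile(sims, 2.5):.3f}, {np.percentile(sims, 97.5):.3f})")
# If the observed statistic falls outside the simulated band, the model
# fails to replicate that network characteristic.
```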


Author(s): Cory A. Kramer, Reza Loloee, Indrek S. Wichman, Ruby N. Ghosh

The goal of this research is to obtain quantitative information on chemical speciation over time during high-temperature thermal decomposition of materials. The long-term goal is to improve structural fire safety by developing a database of characteristic “burn signatures” for combustible structural materials. To establish the procedure and generate data for benchmark materials, the first material tested in these preliminary experiments was poly(methyl methacrylate) (PMMA). Material samples are heated in an infrared (IR) heating chamber until they undergo pyrolysis. Time-resolved quantitative measurements of the exhaust species CO2, O2, HC and CO were obtained. During heating the PMMA sample undergoes two distinct processes. First, pre-combustion pyrolysis, characterized by the appearance of a peak in the total hydrocarbon (THC) signal between 600 and 650 °C. Second, at about 900 °C, flaming combustion occurs, as evidenced by an exothermic reaction registered by the thermocouples. The time sequence of HC production, O2 depletion and CO2 production is consistent with combustion in an excess-oxidizer environment.


2013 · Vol. 771 · pp. 7–13
Author(s): Johan J. de Rooi, Olivier Devos, Michel Sliwa, Cyril Ruckebusch, Paul H. C. Eilers

1998 · Vol. 9 · pp. 167–217
Author(s): A. Ruiz, P. E. Lopez-de-Teruel, M. C. Garrido

This paper presents a general and efficient framework for probabilistic inference and learning from arbitrary uncertain information. It exploits the calculation properties of finite mixture models, conjugate families and factorization. Both the joint probability density of the variables and the likelihood function of the (objective or subjective) observation are approximated by a special mixture model, in such a way that any desired conditional distribution can be directly obtained without numerical integration. We have developed an extended version of the expectation maximization (EM) algorithm to estimate the parameters of mixture models from uncertain training examples (indirect observations). As a consequence, any piece of exact or uncertain information about both input and output values is consistently handled in the inference and learning stages. This ability, extremely useful in certain situations, is not found in most alternative methods. The proposed framework is formally justified from standard probabilistic principles and illustrative examples are provided in the fields of nonparametric pattern classification, nonlinear regression and pattern completion. Finally, experiments on a real application and comparative results over standard databases provide empirical evidence of the utility of the method in a wide range of applications.
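As an illustration of learning from indirect observations, the sketch below implements the simplest special case I can state confidently: EM for a univariate Gaussian mixture in which each training example is observed only through known Gaussian noise. It is a hedged illustration of the idea, not the authors' general conjugate-family framework; all names and the noise model are assumptions.

```python
import numpy as np
from scipy.stats import norm

def em_uncertain(m, s, K=2, n_iter=300):
    """EM for a K-component Gaussian mixture when example i is observed only
    as a Gaussian likelihood N(m[i], s[i]^2), i.e. an indirect observation."""
    n = len(m)
    mu = np.quantile(m, np.linspace(0.2, 0.8, K))
    var = np.full(K, m.var())
    w = np.full(K, 1.0 / K)
    for _ in range(n_iter):
        # E-step: responsibilities use the component density convolved with
        # the observation noise, N(m_i; mu_k, var_k + s_i^2).
        dens = w * norm.pdf(m[:, None], mu, np.sqrt(var + s[:, None] ** 2))
        r = dens / dens.sum(axis=1, keepdims=True)
        # Posterior of the latent true value x_i under component k
        # (product of two Gaussians): mean b, variance v.
        v = 1.0 / (1.0 / var + 1.0 / s[:, None] ** 2)
        b = v * (mu / var + m[:, None] / s[:, None] ** 2)
        # M-step: weighted updates using the posterior moments of x_i
        nk = r.sum(axis=0)
        w = nk / n
        mu = (r * b).sum(axis=0) / nk
        var = (r * (v + (b - mu) ** 2)).sum(axis=0) / nk
    return w, mu, np.sqrt(var)

# usage: noisy measurements of a two-component mixture
rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(-2, 1, 400), rng.normal(2, 1, 600)])
s = np.full(len(x), 0.8)                      # known measurement noise
m = x + rng.normal(0, s)                      # what we actually observe
print(em_uncertain(m, s))
```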


2017
Author(s): Tianji Cai, Yiwei Xia, Yisu Zhou

Analysts of discrete data often face the challenge of inflated frequencies at certain values. When treated improperly, this phenomenon may lead to biased estimates and incorrect inferences. This study extends the existing literature on single-value inflated models and develops a general framework to handle variables with more than one inflated value. To assess the performance of the proposed maximum likelihood estimator, we conducted Monte Carlo experiments under several scenarios with different levels of inflated probabilities for Multinomial, Ordinal, Poisson, and Zero-Truncated Poisson outcomes with covariates. We found that ignoring the inflation leads to substantial bias and poor inference, not only for the intercept(s) of the inflated categories but for other coefficients as well. In particular, higher inflated probabilities are associated with larger biases. By contrast, the Generalized Inflated Discrete models (GIDM) perform well, with unbiased estimates and satisfactory coverage, even when the number of parameters to be estimated is quite large. We showed that model fit criteria such as AIC can be used to select an appropriate specification of inflated models. Lastly, GIDM was applied to large-scale health survey data and compared with conventional modeling approaches such as various Poisson and Ordered Logit models; GIDM fit the data better in general. The current work provides a practical approach to analyzing multimodal data found in many fields, such as heaping in self-reported behavioral outcomes, inflated "indifferent" and "neutral" categories in attitude surveys, and large numbers of zeros alongside low occurrence of delinquent behaviors.
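To make the model family concrete, here is a minimal sketch (not the authors' GIDM implementation, and without covariates) of the likelihood behind a Poisson outcome inflated at more than one value, fitted by maximum likelihood; the inflated values and simulated data are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson
from scipy.special import softmax

INFLATED = [0, 10]          # hypothetical inflated values

def neg_loglik(theta, y):
    """Mixture likelihood: point masses at the inflated values plus a Poisson.
    theta holds unconstrained logits for the point masses and log(lambda)."""
    probs = softmax(np.append(theta[:len(INFLATED)], 0.0))  # last slot = Poisson
    lam = np.exp(theta[-1])
    ll = probs[-1] * poisson.pmf(y, lam)
    for j, v in enumerate(INFLATED):
        ll = ll + probs[j] * (y == v)
    return -np.sum(np.log(ll))

# simulate data with genuine inflation at 0 and 10, then fit by ML
rng = np.random.default_rng(5)
n = 2000
comp = rng.choice(3, size=n, p=[0.15, 0.10, 0.75])          # 0-mass, 10-mass, Poisson
y = np.where(comp == 0, 0, np.where(comp == 1, 10, rng.poisson(4.0, n)))

fit = minimize(neg_loglik, x0=np.array([0.0, 0.0, np.log(y.mean())]),
               args=(y,), method="Nelder-Mead")
pi = softmax(np.append(fit.x[:2], 0.0))
print(f"inflation at 0: {pi[0]:.2f}, at 10: {pi[1]:.2f}, lambda: {np.exp(fit.x[-1]):.2f}")
```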


1999 · Vol. 60 (5) · pp. 5681–5684
Author(s): S. Egelhaaf, U. Olsson, P. Schurtenberger, J. Morris, H. Wennerström

2021 · Vol. 6
Author(s): Kevin J. Grimm, Russell Houpt, Danielle Rodgers

One of the greatest challenges in the application of finite mixture models is model comparison. A variety of statistical fit indices exist, including information criteria, approximate likelihood ratio tests, and resampling techniques; however, none of these indices describes the amount of improvement in model fit when a latent class is added to the model. We review these model fit statistics and propose a novel approach, the likelihood increment percentage per parameter (LIPpp), which targets the relative improvement in model fit when a class is added to the model. Simulation work based on two previous simulation studies highlighted the potential of the LIPpp to identify the correct number of classes and to provide context for the magnitude of improvement in model fit. We conclude with recommendations and future research directions.
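The exact definition of the LIPpp is given in the paper; the sketch below only illustrates the general idea of a per-parameter likelihood-improvement index, computed here, as an assumption, as the percentage change in log-likelihood over the (k-1)-class model divided by the number of added parameters. The fit statistics shown are hypothetical.

```python
def lip_pp(loglik_k, loglik_km1, n_params_k, n_params_km1):
    """Assumed form of a per-parameter likelihood-improvement index:
    percentage change in log-likelihood when a class is added, divided by
    the number of extra parameters. See the paper for the exact definition."""
    increment_pct = 100.0 * (loglik_k - loglik_km1) / abs(loglik_km1)
    return increment_pct / (n_params_k - n_params_km1)

# usage: hypothetical fit results from 1- to 3-class mixture models
fits = [(-5400.0, 2), (-5100.0, 5), (-5080.0, 8)]   # (log-likelihood, parameters)
for k in range(1, len(fits)):
    ll_k, p_k = fits[k]
    ll_prev, p_prev = fits[k - 1]
    print(f"{k + 1} classes: index = {lip_pp(ll_k, ll_prev, p_k, p_prev):.3f}")
```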

