Evaluating and optimizing the use of diagnostics during epidemics: Application to the 2017 plague outbreak in Madagascar

2021 ◽  
Author(s):  
Quirine Bosch ◽  
Voahangy Andrianaivoarimanana ◽  
Beza Ramasindrazana ◽  
Guillain Mikaty ◽  
Rado JL Rakotonanahary ◽  
...  

During outbreaks, the lack of a diagnostic “gold standard” can mask the true burden of infection in the population and hamper the allocation of resources required for control. Here, we present an analytical framework to evaluate and optimize the use of diagnostics when multiple yet imperfect diagnostic tests are available. We apply it to laboratory results from 2,136 samples, analyzed with three diagnostic tests (based on up to seven diagnostic outcomes), collected during the 2017 pneumonic (PP) and bubonic plague (BP) outbreak in Madagascar, which was unprecedented in the number of notified cases, clinical presentation, and spatial distribution. The extent of this outbreak has, however, remained unclear due to non-optimal assays. Using latent class methods, we estimate that 7%-15% of notified cases were Yersinia pestis-infected. Overreporting was highest during the peak of the outbreak and lowest in rural settings where Yersinia pestis is endemic. Molecular biology methods offered the best compromise between sensitivity and specificity. The specificity of the rapid diagnostic test was relatively low (PP: 82%, BP: 85%), which is particularly problematic in contexts with large numbers of misclassified cases. Comparison with data from a subsequent seasonal Yersinia pestis outbreak in 2018 reveals better test performance (BP: specificity 99%, sensitivity 91%), indicating that factors related to the response to a large, explosive outbreak may well have affected test performance. We used our framework to optimize the case classification and derive consolidated epidemic trends. Our approach may help reduce uncertainties in other outbreaks where diagnostics are imperfect.
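To make the latent class idea concrete, the sketch below shows how an EM algorithm can jointly estimate prevalence and each test's sensitivity and specificity from several conditionally independent but imperfect binary tests, with no gold standard. This is a minimal Python illustration on simulated data, not the framework used in the study; all variable names and the simulated error rates are assumptions made for the example.

import numpy as np

def lca_em(results, n_iter=500, seed=0):
    """EM for a two-class latent class model with conditionally independent
    binary tests (rows = samples, columns = tests, values 0/1).
    Illustrative sketch only, not the study's estimation procedure."""
    rng = np.random.default_rng(seed)
    n, k = results.shape
    prev = 0.5                                 # prevalence of true infection
    se = rng.uniform(0.6, 0.9, k)              # per-test sensitivities
    sp = rng.uniform(0.6, 0.9, k)              # per-test specificities
    for _ in range(n_iter):
        # E-step: posterior probability that each sample is truly infected
        p_pos = prev * np.prod(se**results * (1 - se)**(1 - results), axis=1)
        p_neg = (1 - prev) * np.prod((1 - sp)**results * sp**(1 - results), axis=1)
        w = p_pos / (p_pos + p_neg)
        # M-step: re-estimate prevalence, sensitivities and specificities
        prev = w.mean()
        se = (w[:, None] * results).sum(axis=0) / w.sum()
        sp = ((1 - w)[:, None] * (1 - results)).sum(axis=0) / (1 - w).sum()
    return prev, se, sp

# Simulated data: 2,000 samples, 10% truly infected, three imperfect tests
rng = np.random.default_rng(1)
truth = rng.random(2000) < 0.10
tests = np.where(truth[:, None],
                 rng.random((2000, 3)) < [0.90, 0.85, 0.75],    # sensitivities
                 rng.random((2000, 3)) < [0.02, 0.15, 0.18])    # 1 - specificities
print(lca_em(tests.astype(int)))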

2013 ◽  
Vol 103 (12) ◽  
pp. 1243-1251 ◽  
Author(s):  
William W. Turechek ◽  
Craig G. Webster ◽  
Jingyi Duan ◽  
Pamela D. Roberts ◽  
Chandrasekar S. Kousik ◽  
...  

Squash vein yellowing virus (SqVYV) is the causal agent of viral watermelon vine decline, one of the most serious diseases in watermelon (Citrullus lanatus L.) production in the southeastern United States. At present, there is no gold standard diagnostic test for determining the true status of SqVYV infection in plants. Current diagnostic methods for identification of SqVYV-infected plants or tissues are based on the reverse-transcription polymerase chain reaction (RT-PCR), tissue blot nucleic acid hybridization assays (TB), and expression of visual symptoms. A quantitative assessment of the performance of these diagnostic tests is lacking, which may lead to an incorrect interpretation of results. In this study, latent class analysis (LCA) was used to estimate the sensitivities and specificities of RT-PCR, TB, and visual assessment of symptoms as diagnostic tests for SqVYV. The LCA model assumes that the observed diagnostic test responses are linked to an underlying latent (nonobserved) disease status of the population, and can be used to estimate sensitivity and specificity of the individual tests, as well as to derive an estimate of the incidence of disease when a gold standard test does not exist. LCA can also be expanded to evaluate the effect of factors, and was done here to determine whether diagnostic test performance varied with the type of plant tissue being tested (crown versus vine tissue), where plant samples were taken relative to the position of the crown (i.e., distance from the crown), host (i.e., genus), and habitat (field-grown versus greenhouse-grown plants). Results showed that RT-PCR had the highest sensitivity (0.94) and specificity (0.98) of the three tests. TB had better sensitivity than symptoms for detection of SqVYV infection (0.70 versus 0.32), while the visual assessment of symptoms was more specific than TB and, thus, a better indicator of noninfection (0.98 versus 0.65). With respect to the grouping variables, RT-PCR and TB had better sensitivity but poorer specificity for diagnosing SqVYV infection in crown tissue than they did in vine tissue, whereas symptoms had very poor sensitivity but excellent specificity in both tissues for all cucurbits analyzed in this study. Test performance also varied with habitat and genus but not with distance from the crown. The results given here provide quantitative measurements of test performance for a range of conditions and provide the information needed to interpret test results when tests are used in parallel or serial combination for a diagnosis.
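As a worked illustration of the closing point about combined testing, the standard formulas for using two tests in parallel (a sample is called positive if either test is positive) or in series (positive only if both are positive) under a conditional-independence assumption are sketched below in Python, plugging in the RT-PCR and TB point estimates reported above; the combined values are illustrative and were not computed in the study.

def parallel(se1, sp1, se2, sp2):
    # positive if either test is positive: sensitivity rises, specificity falls
    return 1 - (1 - se1) * (1 - se2), sp1 * sp2

def serial(se1, sp1, se2, sp2):
    # positive only if both tests are positive: specificity rises, sensitivity falls
    return se1 * se2, 1 - (1 - sp1) * (1 - sp2)

# point estimates reported above: RT-PCR (Se 0.94, Sp 0.98), TB (Se 0.70, Sp 0.65)
print(parallel(0.94, 0.98, 0.70, 0.65))   # (0.982, 0.637)
print(serial(0.94, 0.98, 0.70, 0.65))     # (0.658, 0.993)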


2021 ◽  
Vol 52 (1) ◽  
Author(s):  
Jobin Thomas ◽  
Ana Balseiro ◽  
Christian Gortázar ◽  
María A. Risalde

Abstract Animal tuberculosis (TB) is a multi-host disease caused by members of the Mycobacterium tuberculosis complex (MTC). Due to its impact on the economy, the sanitary standards of the milk and meat industry, public health, and conservation, TB control is an actively ongoing research subject. Several wildlife species are involved in the maintenance and transmission of TB, so new approaches to wildlife TB diagnosis have gained relevance in recent years. Diagnosis is a paramount step for screening and epidemiological investigation, as well as for ensuring the success of control strategies such as vaccination trials. This is the first review that systematically addresses data available for the diagnosis of TB in wildlife following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. The article also gives an overview of the factors related to host, environment, sampling, and diagnostic techniques which can affect test performance. After three screenings, 124 articles were considered for systematic review. The literature indicates that post-mortem examination and culture are useful methods for disease surveillance, but immunological diagnostic tests based on cellular and humoral immune response detection are gaining importance in wildlife TB diagnosis. Among them, serological tests are especially useful in wildlife because they are relatively inexpensive and easy to perform, facilitate large-scale surveillance, and can be used both ante- and post-mortem. Currently available studies assessed test performance mostly in cervids, European badgers, wild suids, and wild bovids. Research to improve diagnostic tests for wildlife TB diagnosis is still needed in order to reach accurate, rapid, and cost-effective diagnostic techniques adequate for a broad range of target species and consistent over space and time to allow proper disease monitoring.


2020 ◽  
Vol 41 (Supplement_2) ◽  
Author(s):  
A Tsiachristas ◽  
H West ◽  
E.K Oikonomou ◽  
B Mihaylova ◽  
N Sabharwall ◽  
...  

Abstract Background The National Institute for Health and Care Excellence (NICE) updated its guidance for the management of patients with stable chest pain and recommended that all patients undergo computed tomography coronary angiography (CTCA). This update has sparked a great deal of debate and was followed by the upgrade of CTCA to a Class I indication in the recent ESC guidelines. The cost-effectiveness of using CTCA as a first-line investigation is still unclear. Purpose To describe the current clinical pathway of patients with stable chest pain presenting to outpatient clinics, assess compliance with the updated NICE guideline, and explore the costs and health outcomes of different non-invasive diagnostic tests in a real-world clinical setting. Methods We used data from 4,297 patients who attended chest pain clinics in Oxford between 1 January 2014 and 31 July 2018. Data included clinical presentation (e.g. age and previous cardiovascular conditions), diagnostic tests, outpatient visits, hospitalization, and hospital mortality, and were compared across six alternative first-line diagnostic tests. Multinomial regressions were performed to estimate the probability of receiving each alternative and the associated cost after adjusting for clinical presentation. A decision tree was developed to describe the clinical pathway for each alternative first-line diagnostic in terms of subsequent diagnostic tests and treatments and to estimate the associated costs and life-days. Results The proportion of patients who received CTCA as the first-line diagnostic test increased from 1% in 2014 to 17% in 2018, and the publication of the updated NICE guidelines in 2016 led to a threefold increase in this proportion. CTCA was less likely to be provided as a first-line diagnostic to patients who were younger, male, or smokers, or who had angina, PVD, or diabetes. The standardised rate of hospital admission was lowest in the exercise ECG cohort (0.35 admissions per 1,000 life-days), followed by the CTCA cohort (0.40 admissions per 1,000 life-days), while the latter cohort had the lowest standardised rate of cardiovascular treatment (2.74% per 1,000 life-days). Stress echocardiography and MPS were associated with higher costs compared with CTCA, other ECG, and exercise ECG after adjusting for clinical presentation and days of follow-up. CTCA is the pathway most likely to be cost-effective, even compared with exercise ECG, while the other diagnostic alternatives are dominated (i.e. they cost more for fewer life-days). Conclusions Currently, the updated NICE guidelines for stable chest pain are implemented in only a fifth of cases in England. Our findings support existing evidence that CTCA is the most cost-effective first-line diagnostic test for this population. Hopefully, this will inform the debate around the implementation of the guidelines and help commissioning and clinical decision processes worldwide. Funding Acknowledgement Type of funding source: Public grant(s) – National budget only. Main funding source(s): National Institute of Health Research Oxford Biomedical Research Centre
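The dominance logic behind the pathway comparison can be sketched briefly: a pathway is dominated when another pathway costs no more and yields at least as many life-days, and the remaining pathways are compared with an incremental cost-effectiveness ratio (ICER). The Python sketch below uses hypothetical placeholder numbers, not figures from the study.

# Hypothetical pathway summaries: (mean cost in GBP, mean life-days of follow-up)
pathways = {
    "CTCA":         (900.0, 1400.0),
    "exercise ECG": (850.0, 1380.0),
    "stress echo":  (1200.0, 1390.0),
    "MPS":          (1500.0, 1395.0),
}

def dominated(name, options):
    # A pathway is dominated if another costs no more and yields at least as many life-days
    cost, days = options[name]
    return any(c <= cost and d >= days and (c, d) != (cost, days)
               for other, (c, d) in options.items() if other != name)

for name in pathways:
    print(name, "dominated" if dominated(name, pathways) else "non-dominated")

# ICER of CTCA versus exercise ECG: extra cost per additional life-day
(c1, d1), (c0, d0) = pathways["CTCA"], pathways["exercise ECG"]
print("ICER:", (c1 - c0) / (d1 - d0))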


2012 ◽  
Vol 1 (1) ◽  
Author(s):  
Aaron W. Tustin ◽  
Dylan S. Small ◽  
Stephen Delgado ◽  
Ricardo Castillo Neyra ◽  
Manuela R. Verastegui ◽  
...  

2002 ◽  
Vol 126 (1) ◽  
pp. 19-27
Author(s):  
Dana Marie Grzybicki ◽  
Thomas Gross ◽  
Kim R. Geisinger ◽  
Stephen S. Raab

Abstract Context.—Measuring variation in clinician test-ordering behavior for patients with similar indications is an important focus for quality management and cost containment. Objective.—To obtain information from physicians and nonphysicians regarding their test-ordering behavior and their knowledge of test performance characteristics for diagnostic tests used to work up patients with lung lesions suspicious for cancer. Design.—A self-administered, voluntary, anonymous questionnaire was distributed to 452 multiple-specialty physicians and 500 nonphysicians in academic and private practice in Pennsylvania, Iowa, and North Carolina. Respondents indicated their estimates of test sensitivities for multiple tests used in the diagnosis of lung lesions and provided their test selection strategy for case simulations of patients with solitary lung lesions. Data were analyzed using descriptive statistics and the χ2 test. Results.—The response rate was 11.2%. Both physicians and nonphysicians tended to underestimate the sensitivities of all minimally invasive tests, with the greatest underestimations reported for sputum cytology and transthoracic fine-needle aspiration biopsy. There was marked variation in sequential test selection for all the case simulations and no association between respondents' perceptions of test sensitivity and their selection of the first diagnostic test. Overall, the most frequently chosen first diagnostic test was bronchoscopy. Conclusions.—Physicians and nonphysicians tend to underestimate the performance of diagnostic tests used to evaluate solitary lung lesions. However, their misperceptions do not appear to explain the wide variation in test-ordering behavior for patients with lung lesions suspicious for cancer.


2017 ◽  
Vol 117 (04) ◽  
pp. 809-815 ◽  
Author(s):  
Suzanne Bleker ◽  
Barbara Hutten ◽  
Anne Timmermans ◽  
Harry Büller ◽  
Saskia Middeldorp ◽  
...  

Summary Abnormal vaginal bleeding can complicate direct oral anticoagulant (DOAC) treatment. We aimed to investigate the characteristics of abnormal vaginal bleeding in patients with venous thromboembolism (VTE) receiving apixaban or enoxaparin/warfarin. Data were derived from the AMPLIFY trial. We compared the incidence of abnormal vaginal bleeding between patients in both treatment arms and collected information on clinical presentation, diagnostic procedures, management, and outcomes. In the AMPLIFY trial, 1122 women were treated with apixaban and 1106 received enoxaparin/warfarin. A clinically relevant non-major (CRNM) vaginal bleeding event occurred in 28 (2.5%) apixaban and 24 (2.1%) enoxaparin/warfarin recipients (odds ratio [OR] 1.2, 95% confidence interval [CI] 0.7–2.0). Of all CRNM bleeds, 28 of 62 (45%) and 24 of 120 (20%) were of vaginal origin in the apixaban and enoxaparin/warfarin groups, respectively (OR 3.4; 95% CI 1.8–6.7). Premenopausal vaginal bleeds on apixaban were characterised by more prolonged bleeding (OR 2.3; 95% CI 0.5–11). In both pre- and postmenopausal vaginal bleeds, diagnostic tests were performed in six (21%) apixaban-treated and seven (29%) enoxaparin/warfarin-treated patients, respectively. Medical treatment was deemed not necessary in 16 (57%) apixaban and 16 (67%) enoxaparin/warfarin recipients. The severity of the clinical presentation and the course of the bleeds were mild in 75% of cases in both groups. In conclusion, although the absolute number of vaginal bleeding events is comparable between apixaban and enoxaparin/warfarin recipients, the relative occurrence of vaginal bleeds is higher in apixaban-treated women. The characteristics and severity of bleeding episodes were comparable in both treatment arms.
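For readers who want to check the reported effect sizes, an odds ratio with a Woolf (log-scale) confidence interval can be recomputed from the event counts. The Python sketch below uses the 28/1122 versus 24/1106 CRNM vaginal bleeding counts quoted above and reproduces approximately the reported OR of 1.2 (95% CI 0.7–2.0); it is a generic calculation, not the trial's analysis code.

import math

def odds_ratio_ci(a, n1, b, n2, z=1.96):
    # Woolf (log) confidence interval for the odds ratio of a/n1 events vs b/n2 events
    c, d = n1 - a, n2 - b
    or_ = (a / c) / (b / d)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo, hi = (math.exp(math.log(or_) + sign * z * se) for sign in (-1, 1))
    return or_, lo, hi

# CRNM vaginal bleeding: 28/1122 on apixaban vs 24/1106 on enoxaparin/warfarin
print(odds_ratio_ci(28, 1122, 24, 1106))   # ~ (1.15, 0.66, 2.00)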


Author(s):  
Karen M. Jennings ◽  
Lindsay P. Bodell ◽  
Ross D. Crosby ◽  
Ann F. Haynos ◽  
Jennifer E. Wildes

BACKGROUND: Efforts to examine alternative classifications (e.g., personality) of anorexia nervosa (AN) using empirical techniques are crucial to elucidate diverse symptom presentations, personality traits, and psychiatric comorbidities. AIM: The purpose of this study was to use an empirical approach (mixture modeling) to test an alternative classification of AN as a categorical, dimensional, or hybrid categorical–dimensional construct based on the co-occurrence of personality psychopathology and eating disorder clinical presentation. METHOD: Patients with AN (N = 194) completed interviews and questionnaires at treatment admission and 3-month follow-up. Mixture modeling was used to test whether indicators best classified AN as categorical, dimensional, or hybrid. RESULTS: A four-latent-class, one-latent-dimension mixture model that was variant across groups provided the best fit to the data. Results suggest that all classes were characterized by low self-esteem and self-harming and suicidality tendencies. Individuals assigned to Latent Class 2 (LC2; n = 21) had a greater tendency toward being impulsive and easily angered and having difficulties controlling anger compared with those in LC1 (n = 84) and LC3 (n = 66). Moreover, individuals assigned to LC1 and LC3 were more likely to have a poor outcome from intensive treatment compared with those in LC4 (n = 21). Findings indicate that the dimensional aspect within each class measured the frequency of specific eating disorder behaviors but did not predict treatment outcomes. CONCLUSION: These results emphasize the complexity of AN and the importance of considering how facets of clinical presentation beyond eating disorder behaviors may have different treatment and prognostic implications.
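A simplified analogue of the model-comparison step is sketched below: fit mixture models with an increasing number of latent classes and compare an information criterion such as the BIC, with a one-class solution standing in for a purely dimensional model. The sketch uses scikit-learn Gaussian mixtures on simulated data as a stand-in for the hybrid factor mixture models reported in the study; the indicator matrix and the range of class counts are assumptions made for illustration.

import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical indicator matrix: rows = patients, columns = standardized
# personality and eating-disorder indicators (simulated here)
rng = np.random.default_rng(0)
X = rng.normal(size=(194, 6))

# Compare a purely dimensional one-class solution against 2-5 latent classes
for k in range(1, 6):
    gm = GaussianMixture(n_components=k, covariance_type="full",
                         n_init=5, random_state=0).fit(X)
    print(k, "classes, BIC =", round(gm.bic(X), 1))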


2019 ◽  
Vol 125 ◽  
pp. 14-23
Author(s):  
Paulo Martins Soares Filho ◽  
Alberto Knust Ramalho ◽  
André de Moura Silva ◽  
Mikael Arrais Hodon ◽  
Marina de Azevedo Issa ◽  
...  

2017 ◽  
Vol 11 (4) ◽  
pp. 345
Author(s):  
Tiziana M. Attardo ◽  
Elena Magnani ◽  
Carlotta Casati ◽  
Danilo Cavalieri ◽  
Pietro Crispino ◽  
...  

Celiac disease (CD) is a complex polygenic disorder that involves genetic factors (human leukocyte antigen [HLA] and non-HLA genes), environmental factors, innate and adaptive immunity, and a robust chronic T-cell-mediated autoimmune component. The main goal of the present monograph is to define a methodological approach to the disease, which is often diagnosed late, so that the physician becomes aware of its management and of the diversity of the clinical presentation, both within and across patients. Particular attention is paid to the specific diagnostic tests, in order to define their correct and accurate application, as well as to disease follow-up and possible complications. Moreover, a dedicated space is assigned to refractory CD, potential CD, and non-celiac gluten sensitivity. Legislative aspects of celiac disease in Italy are also addressed. The celiac disease guidelines and their evaluation by means of the Appraisal of Guidelines for Research and Evaluation (AGREE) II instrument allow us to classify the different recommendations and to apply them according to stakeholder involvement, pertinence, methodological accuracy, clarity, and publishing independence. Finally, the most current scientific evidence is taken into account to create a complete, updated monograph.

