Prevalence and genetic characterization of Cryptosporidium in pre-weaned cattle in Urmia (Northwestern Iran)

2021 ◽  
Vol 15 (03) ◽  
pp. 422-427
Author(s):  
Mahmoud Mahmoudi ◽  
Khosrow Hazrati Tapeh ◽  
Esmaeil Abasi ◽  
Hojjat Sayyadi ◽  
Arash Aminpour

Introduction: Cryptosporidiosis is a zoonotic disease that causes digestive problems in pre-weaned calves. Given the zoonotic potential of the parasite and its importance in veterinary medicine, we evaluated the prevalence and genotypes of Cryptosporidium spp. in diarrheic pre-weaned calves in northwestern Iran. Methodology: A total of 100 stool samples from pre-weaned calves with diarrhea were collected from industrial and conventional livestock farms in Urmia City. All samples were tested by acid-fast staining, ELISA, and PCR, and PCR-positive samples were sequenced to determine the Cryptosporidium species. The methods were compared in terms of sensitivity, specificity, positive and negative predictive values, testing time, and cost. Results: The prevalence of Cryptosporidium spp. in diarrheic pre-weaned calves in Urmia City was 5%, and C. parvum was detected in all sequenced samples. ELISA, which had higher sensitivity and predictive values than acid-fast staining, was the most appropriate method for detecting the parasite and should be used in veterinary laboratories. Conclusions: C. parvum was identified as the only infectious agent in the region and could be the main cause of human infection. Further studies are needed to identify the source of infection and establish control measures.
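As a hedged illustration of the method comparison described above, the sketch below derives sensitivity, specificity, and positive and negative predictive values for a single assay (e.g., ELISA) against PCR as the reference standard; the 2×2 counts are invented placeholders, not the study's data.

```python
# Minimal sketch: deriving sensitivity, specificity, PPV, and NPV for one
# assay (e.g., ELISA) against PCR as the reference standard.
# The counts below are placeholders for illustration, not the study's data.

def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Return the four standard agreement measures for a 2x2 table."""
    return {
        "sensitivity": tp / (tp + fn),  # true positives detected
        "specificity": tn / (tn + fp),  # true negatives detected
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

if __name__ == "__main__":
    # Hypothetical counts: ELISA result cross-tabulated against PCR.
    metrics = diagnostic_metrics(tp=4, fp=1, fn=1, tn=94)
    for name, value in metrics.items():
        print(f"{name}: {value:.1%}")
```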

2019 ◽  
Vol 13 (10) ◽  
pp. 914-919
Author(s):  
Özlem Kirişci ◽  
Ahmet Calıskan

Introduction: In the diagnosis of hepatitis C virus (HCV) infection, the first step is screening for anti-HCV antibodies, and positive results are generally confirmed with nucleic acid amplification tests. Recent studies have reported that anti-HCV signal-to-cut-off (S/Co) thresholds higher than 1, the routine reactivity threshold of the anti-HCV enzyme immunoassay (EIA), yield results more concordant with the HCV RNA test. The aim of this study was to determine the most appropriate anti-HCV S/Co value for predicting HCV infection. Methodology: The results of 559 patients who underwent anti-HCV testing by the electrochemiluminescence immunoassay (ECLIA) method and HCV RNA testing by real-time polymerase chain reaction (PCR) were compared. Taking the HCV RNA test as the gold standard for HCV infection, the sensitivity, specificity, and predictive values of the ECLIA test were determined, and receiver operating characteristic (ROC) analysis was applied to determine the most appropriate threshold. Results: Between January 2013 and April 2018, a total of 81,203 serum samples were examined. Of 559 anti-HCV-positive patients, HCV RNA positivity was determined in 214 (38.2%). According to the ROC analysis, the most appropriate S/Co value was 12.27, at which sensitivity was 94.4% and specificity 97.4%. The positive and negative predictive values were high, at 95.7% and 96.6%, respectively. Conclusions: This study of anti-HCV reactivity values that could be used in the diagnosis of HCV infection determined the most appropriate S/Co value to be 12.27.
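The sketch below illustrates, under stated assumptions, how a ROC-derived S/Co cut-off such as 12.27 can be obtained: synthetic anti-HCV S/Co values are compared against HCV RNA status, and the threshold maximizing Youden's J is selected. The abstract does not state which ROC criterion the authors used, and the simulated data are placeholders.

```python
# Sketch of the ROC step: choosing the anti-HCV S/Co cut-off that best
# predicts HCV RNA positivity. Data here are synthetic placeholders; the
# study's 12.27 cut-off came from its own 559 patients. Youden's J is one
# common optimality criterion; the paper does not specify its criterion.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
# 1 = HCV RNA positive (gold standard), 0 = negative.
rna_positive = rng.integers(0, 2, size=559)
# Simulated S/Co values: RNA-positive sera tend to have higher ratios.
s_co = np.where(rna_positive == 1,
                rng.normal(15.0, 3.0, 559),
                rng.normal(4.0, 3.0, 559)).clip(min=1.0)

fpr, tpr, thresholds = roc_curve(rna_positive, s_co)
youden_j = tpr - fpr                      # Youden index at each threshold
best = np.argmax(youden_j)
print(f"optimal S/Co cut-off ~ {thresholds[best]:.2f} "
      f"(sensitivity {tpr[best]:.1%}, specificity {1 - fpr[best]:.1%})")
```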


2005 ◽  
Vol 134 (2) ◽  
pp. 421-423 ◽  
Author(s):  
O. O. EJIDOKUN ◽  
A. WALSH ◽  
J. BARNETT ◽  
Y. HOPE ◽  
S. ELLIS ◽  
...  

Vero cytotoxin-producing Escherichia coli O157 (VTEC O157) infections are a threat to public health. VTEC O157 has been isolated from gulls, but evidence of transmission to humans from birds has not been reported. We recount an incident of VTEC O157 infection affecting two sibling children who had no direct contact with farm animals. An outbreak control team was convened to investigate the source of infection and its likely mode of transmission, and to advise on control measures. Human and veterinary samples were examined, and the human isolates were found to be identical to an isolate from a sample of bird (rook) faeces. Cattle, rabbit and environmental samples were negative. This report provides evidence that birds may act as intermediaries for human infection with VTEC O157.


2019 ◽  
Vol 58 (3) ◽  
Author(s):  
John P. Bomkamp ◽  
Rand Sulaiman ◽  
Jennifer L. Hartwell ◽  
Armisha Desai ◽  
Vera C. Winn ◽  
...  

ABSTRACT This study was conducted to assess the utility of the T2Candida panel across an academic health center and identify potential areas for diagnostic optimization. A retrospective chart review was conducted on patients with a T2Candida panel and mycolytic/fungal (myco/f lytic) blood culture collected simultaneously during hospitalizations from February 2017 to March 2018. The primary outcome was the sensitivity, specificity, and positive and negative predictive values of the panel compared to myco/f lytic blood culture. Secondary outcomes included Candida species isolated from culture or detected on the panel, source of infection, days of therapy (DOT) of antifungals in patients with discordant results, and overall antifungal DOT/1,000 patient days. A total of 433 paired T2Candida panels and myco/f lytic blood cultures were identified. The pretest likelihood of candidemia was 4.4%. The sensitivity and specificity were 64.7% and 95.6%, respectively. The positive and negative predictive values were 40.7% and 98.5%, respectively. Sixteen patients had T2Candida panel-positive and myco/f lytic blood culture-negative results, while 6 patients had T2Candida panel-negative and myco/f lytic blood culture-positive results. Overall antifungal DOT/1,000 patient days improved after implementation of the T2Candida panel; however, micafungin use continued to decline even after the panel was removed. We found that the T2Candida panel is a highly specific diagnostic tool; however, its sensitivity and positive predictive value may be lower than previously reported when it is employed in clinical practice. Clinicians should use this panel as an adjunct to blood cultures when making a definitive diagnosis of candidemia.
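The low positive predictive value despite high specificity follows directly from the 4.4% pretest likelihood. The worked check below applies Bayes' rule to the sensitivity, specificity, and prevalence reported in the abstract; small deviations from the reported 40.7% and 98.5% reflect rounding of the inputs.

```python
# Worked check: PPV and NPV follow from sensitivity, specificity, and
# prevalence via Bayes' rule. Inputs are taken from the abstract; the small
# differences from the reported 40.7%/98.5% are due to rounding.

def predictive_values(sensitivity: float, specificity: float, prevalence: float):
    ppv = (sensitivity * prevalence) / (
        sensitivity * prevalence + (1 - specificity) * (1 - prevalence))
    npv = (specificity * (1 - prevalence)) / (
        specificity * (1 - prevalence) + (1 - sensitivity) * prevalence)
    return ppv, npv

ppv, npv = predictive_values(sensitivity=0.647, specificity=0.956, prevalence=0.044)
print(f"PPV ~ {ppv:.1%}, NPV ~ {npv:.1%}")   # roughly 40% and 98%
```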


2020 ◽  
Vol 163 (6) ◽  
pp. 1156-1165
Author(s):  
Juan Xiao ◽  
Qiang Xiao ◽  
Wei Cong ◽  
Ting Li ◽  
Shouluan Ding ◽  
...  

Objective To develop an easy-to-use nomogram for discrimination of malignant thyroid nodules and to compare its diagnostic efficiency with the Kwak and American College of Radiology (ACR) Thyroid Imaging, Reporting and Data System (TI-RADS). Study Design Retrospective diagnostic study. Setting The Second Hospital of Shandong University. Subjects and Methods From March 2017 to April 2019, 792 patients with 1940 thyroid nodules were included in the training set; from May 2019 to December 2019, 174 patients with 389 nodules were included in the validation set. A multivariable logistic regression model was used to develop a nomogram for discriminating malignant nodules. To compare the diagnostic performance of the nomogram with the Kwak and ACR TI-RADS, the area under the receiver operating characteristic curve, sensitivity, specificity, and positive and negative predictive values were calculated. Results The nomogram consisted of 7 factors: composition, orientation, echogenicity, border, margin, extrathyroidal extension, and calcification. In the training set, for all nodules, the area under the curve (AUC) for the nomogram was 0.844, which was higher than that of the Kwak TI-RADS (0.826, P = .008) and the ACR TI-RADS (0.810, P < .001). For the 822 nodules >1 cm, the AUC of the nomogram was 0.891, which was higher than that of the Kwak TI-RADS (0.852, P < .001) and the ACR TI-RADS (0.853, P < .001). In the validation set, the AUC of the nomogram was also higher than those of the Kwak and ACR TI-RADS (P < .05), both in the whole series and separately for nodules >1 cm or ≤1 cm. Conclusions Compared with the Kwak and ACR TI-RADS, the nomogram performed better in discriminating malignant thyroid nodules.
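A minimal sketch of the modelling step is given below: a multivariable logistic regression on the seven ultrasound features, with the AUC as the performance measure. The data frame is a synthetic stand-in for the 1940 training nodules, and the binary feature coding is an assumption for illustration, not taken from the paper.

```python
# Minimal sketch, assuming binary-coded ultrasound features: fit a
# multivariable logistic regression and compute the training AUC.
# The data are synthetic placeholders, not the study's dataset.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
features = ["composition", "orientation", "echogenicity", "border",
            "margin", "extrathyroidal_extension", "calcification"]
# 1940 hypothetical nodules with binary-coded features.
X = pd.DataFrame(rng.integers(0, 2, size=(1940, len(features))), columns=features)
y = rng.integers(0, 2, size=1940)          # 1 = malignant on pathology

model = LogisticRegression().fit(X, y)
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
print(f"training AUC = {auc:.3f}")
# In the paper, the fitted coefficients are rendered as a nomogram so the
# malignancy probability can be read off without software.
```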


Author(s):  
Carmelo Saraniti ◽  
Enzo Chianetta ◽  
Giuseppe Greco ◽  
Norhafiza Mat Lazim ◽  
Barbara Verro

Introduction Narrow-band imaging is an endoscopic diagnostic tool that, by highlighting superficial vascular changes, is useful for detecting suspicious laryngeal lesions and enables their complete excision with safe, tailored resection margins. Objectives To analyze the applications and benefits of narrow-band imaging in detecting premalignant and malignant laryngeal lesions through a comparison with white-light endoscopy. Data Synthesis A literature search was performed in the PubMed, Scopus, and Web of Science databases using strict keywords. Two authors then independently screened the articles, reading titles and abstracts and reading in full only the studies relevant under the eligibility criteria. In total, 14 articles were included in the present review; the sensitivity, specificity, positive and negative predictive values, and accuracy of pre- and/or intraoperative narrow-band imaging were analyzed. The analysis showed that narrow-band imaging outperforms white-light endoscopy in sensitivity, specificity, positive and negative predictive values, and accuracy in identifying cancerous and/or precancerous laryngeal lesions. Moreover, intraoperative narrow-band imaging proved more effective than in-office use. Conclusion Narrow-band imaging is an effective diagnostic tool for detecting premalignant and malignant laryngeal lesions and for defining proper resection margins. It is also useful in cases of leukoplakia, which may cover a possible malignant lesion and cannot be easily assessed with white-light endoscopy. Finally, a shared, simple, and practical classification of laryngeal lesions, such as that of the European Laryngological Society, is required to establish a shared lesion management strategy.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Nita Vangeepuram ◽  
Bian Liu ◽  
Po-hsiang Chiu ◽  
Linhua Wang ◽  
Gaurav Pandey

Abstract Prediabetes and diabetes mellitus (preDM/DM) have become alarmingly prevalent among youth in recent years. However, simple questionnaire-based screening tools to reliably assess diabetes risk are only available for adults, not youth. As a first step in developing such a tool, we used a large-scale dataset from the National Health and Nutrition Examination Survey (NHANES) to examine the performance of a published pediatric clinical screening guideline in identifying youth with preDM/DM based on American Diabetes Association diagnostic biomarkers. We assessed the agreement between the clinical guideline and biomarker criteria using established evaluation measures (sensitivity, specificity, positive/negative predictive value, F-measure for the positive/negative preDM/DM classes, and Kappa). We also compared the performance of the guideline to those of machine learning (ML) based preDM/DM classifiers derived from the NHANES dataset. Approximately 29% of the 2858 youth in our study population had preDM/DM based on biomarker criteria. The clinical guideline had a sensitivity of 43.1% and specificity of 67.6%, positive/negative predictive values of 35.2%/74.5%, positive/negative F-measures of 38.8%/70.9%, and Kappa of 0.1 (95% CI: 0.06–0.14). The performance of the guideline varied across demographic subgroups. Some ML-based classifiers performed comparably to or better than the screening guideline, especially in identifying preDM/DM youth (p = 5.23 × 10⁻⁵). We demonstrated that a recommended pediatric clinical screening guideline did not perform well in identifying preDM/DM status among youth. Additional work is needed to develop a simple yet accurate screener for youth diabetes risk, potentially by using advanced ML methods and a wider range of clinical and behavioral health data.
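The agreement measures named above (sensitivity, specificity, positive/negative F-measure, Cohen's kappa) can be computed as in the sketch below; the guideline and biomarker labels are synthetic placeholders, not NHANES records.

```python
# Sketch of the agreement measures used to compare the clinical screening
# guideline with the biomarker-based preDM/DM labels. Labels here are
# synthetic placeholders for 2858 hypothetical youth.
import numpy as np
from sklearn.metrics import cohen_kappa_score, confusion_matrix, f1_score

rng = np.random.default_rng(2)
biomarker = rng.integers(0, 2, size=2858)   # 1 = preDM/DM by ADA biomarkers
guideline = rng.integers(0, 2, size=2858)   # 1 = flagged by screening guideline

tn, fp, fn, tp = confusion_matrix(biomarker, guideline).ravel()
print(f"sensitivity {tp / (tp + fn):.1%}, specificity {tn / (tn + fp):.1%}")
print(f"F-measure (positive class) {f1_score(biomarker, guideline):.3f}")
print(f"F-measure (negative class) {f1_score(biomarker, guideline, pos_label=0):.3f}")
print(f"Cohen's kappa {cohen_kappa_score(biomarker, guideline):.3f}")
```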


Author(s):  
Giuseppe Vetrugno ◽  
Daniele Ignazio La Milia ◽  
Floriana D’Ambrosio ◽  
Marcello Di Pumpo ◽  
Roberta Pastorino ◽  
...  

Healthcare workers are at the forefront of the fight against COVID-19 worldwide. Since Fondazione Policlinico Universitario A. Gemelli (FPG) IRCCS was designated a COVID-19 hospital, the healthcare workers deployed to COVID-19 wards were separated from those with limited/no exposure, whereas the administrative staff were assigned to work from home. Between 4 June and 3 July 2020, an investigation was conducted to evaluate the seroprevalence of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) immunoglobulin G (IgG) antibodies among the employees of the FPG using point-of-care (POC) and venous blood tests. Sensitivity, specificity, and predictive values were determined with reverse-transcription polymerase chain reaction on nasal/oropharyngeal swabs as the diagnostic gold standard. A total of 4777 participants were enrolled. Seroprevalence was 3.66% using the POC test and 1.19% using the venous blood test, a significant difference (p < 0.05). The POC test's sensitivity and specificity were, respectively, 63.64% (95% confidence interval (CI): 62.20% to 65.04%) and 96.64% (95% CI: 96.05% to 97.13%), while those of the venous blood test were, respectively, 78.79% (95% CI: 77.58% to 79.94%) and 99.36% (95% CI: 99.07% to 99.55%). Among the low-risk populations, the POC test's predictive values were 58.33% (positive) and 98.23% (negative), whereas those of the venous blood test were 92.86% (positive) and 98.53% (negative). According to our study, these serological tests are not a valid alternative for diagnosing ongoing COVID-19 infection.
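As a hedged sketch of how such seroprevalence estimates can be reported with uncertainty, the snippet below back-calculates approximate positive counts from the stated percentages and computes Wilson 95% confidence intervals; the exact counts and interval method used by the authors are not given in the abstract.

```python
# Approximate seroprevalence estimates with 95% confidence intervals.
# Positive counts are back-calculated from the reported percentages
# (3.66% and 1.19% of 4777 employees) and are therefore approximations;
# the Wilson method is an assumption, not stated in the abstract.
from statsmodels.stats.proportion import proportion_confint

n = 4777
for label, positives in [("POC test", 175), ("venous blood test", 57)]:
    low, high = proportion_confint(positives, n, alpha=0.05, method="wilson")
    print(f"{label}: {positives / n:.2%} (95% CI {low:.2%} to {high:.2%})")
```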


Author(s):  
W. Leontiev ◽  
E. Magni ◽  
C. Dettwiler ◽  
C. Meller ◽  
R. Weiger ◽  
...  

Abstract Objectives The aim of the present study was to compare the accuracy of the conventional illumination method (CONV) and the fluorescence-aided identification technique (FIT) for distinguishing between composite restorations and intact teeth using different fluorescence-inducing devices commonly used for FIT. Materials and methods Six groups of six dentists, each group equipped with one of six different FIT systems, independently attempted to identify composite restorations and intact teeth on a full-mouth model with 22 composite restorations using CONV and, 1 h later, FIT. The entire procedure was repeated 1 week later. Sensitivity, specificity, and positive (PPV) and negative (NPV) predictive values, including 95% confidence intervals (CI), were calculated for CONV and FIT overall and for each device. The influence of examiner age, method, and device on each parameter was assessed by multivariate analysis of variance. Results The sensitivity (84%, CI 81–86%), specificity (94%, CI 93–96%), PPV (92%, CI 90–94%), and NPV (90%, CI 88–91%) of FIT were significantly higher than those of CONV (47%, CI 44–50%; 82%, CI 79–84%; 66%, CI 62–69%; and 69%, CI 68–71%, respectively; p < 0.001). The differences between CONV and FIT were significant for all parameters and FIT systems except VistaCam, for which the difference in specificity was not significant. Examiners younger than 40 years attained significantly higher sensitivity and negative predictive values than older examiners. Conclusions FIT is more reliable for detecting composite restorations than the conventional illumination method. Clinical relevance FIT can be considered an additional or alternative tool for improving the detection of composite restorations.


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Mitnala Sasikala ◽  
Yelamanchili Sadhana ◽  
Ketavarapu Vijayasarathy ◽  
Anand Gupta ◽  
Sarala Kumari Daram ◽  
...  

Abstract Background A considerable amount of evidence demonstrates the potential of saliva in the diagnosis of COVID-19. Our aim was to determine the sensitivity of saliva versus swabs collected by healthcare workers (HCWs) and by patients themselves, to assess whether saliva testing can be offered as a cost-effective, risk-free method of SARS-CoV-2 detection. Methods This study was conducted in a hospital and involved outpatients and hospitalized patients. A total of 3018 outpatients were tested. Of these, 200 qRT-PCR-confirmed SARS-CoV-2-positive patients were recruited for further study. In addition, 101 SARS-CoV-2-positive hospitalized patients with symptoms were also enrolled. From outpatients, HCW-collected nasopharyngeal swabs (NPS) and saliva were obtained; from inpatients, HCW-collected swabs, patient-collected swabs, and saliva were obtained. qRT-PCR was performed to detect SARS-CoV-2 with the TaqPath assay to determine the sensitivity of saliva-based detection. Sensitivity, specificity, and positive/negative predictive values (PPV, NPV) for detecting SARS-CoV-2 were calculated using MedCalc. Results Of 3018 outpatients (asymptomatic: 2683, symptomatic: 335) tested by qRT-PCR, 200 were positive (males: 140, females: 60; aged 37.9 ± 12.8 years; 81 asymptomatic, 119 symptomatic). Of these, saliva was positive in 128 (64%): 39 of 81 asymptomatic (47%) and 89 of 119 symptomatic patients (74.8%). Sensitivity of detection was 60.9% (95% CI: 55.4–66.3%), with a negative predictive value of 36% (95% CI: 32.9–39.2%). Among 101 hospitalized patients (males: 65, females: 36; aged 53.48 ± 15.6 years), with HCW-collected NPS as the comparator, the sensitivity of saliva was 56.1% (95% CI: 47.5–64.5%) and specificity 63.5% (95% CI: 50.4–75.3%), with a PPV of 77.2% and an NPV of 39.6%; for self-collected swabs, sensitivity was 52.3% (95% CI: 44–60.5%) and specificity 56.6% (95% CI: 42.3–70.2%), with a PPV of 77.2% and an NPV of 29.7%. Comparison of positivity with the onset of symptoms revealed the highest detection in saliva on day 3 after symptom onset. Additionally, only saliva was positive in 13 (12.8%) hospitalized patients. Conclusion Saliva, which is easier to collect than a nasopharyngeal swab, is a viable alternative for detecting SARS-CoV-2 in symptomatic patients early after symptom onset. Although saliva is currently not recommended for screening asymptomatic patients, optimization of collection and uniform timing of sampling might improve sensitivity, enabling its use as a screening tool at the community level.


Author(s):  
Kazutaka Uchida ◽  
Junichi Kouno ◽  
Shinichi Yoshimura ◽  
Norito Kinjo ◽  
Fumihiro Sakakibara ◽  
...  

Abstract In conjunction with recent advancements in machine learning (ML), such technologies have been applied in various fields owing to their high predictive performance. We aimed to develop a prehospital stroke scale with ML. We conducted a multi-center retrospective and prospective cohort study. The training cohort comprised eight centers in Japan from June 2015 to March 2018, and the test cohort comprised 13 centers from April 2019 to March 2020. We used three different ML algorithms (logistic regression, random forests, XGBoost) to develop the models. The main outcomes were large vessel occlusion (LVO), intracranial hemorrhage (ICH), subarachnoid hemorrhage (SAH), and cerebral infarction (CI) other than LVO. Predictive ability was validated in the test cohort with accuracy, positive predictive value, sensitivity, specificity, area under the receiver operating characteristic curve (AUC), and F score. The training cohort included 3178 patients with 337 LVO, 487 ICH, 131 SAH, and 676 CI cases, and the test cohort included 3127 patients with 183 LVO, 372 ICH, 90 SAH, and 577 CI cases. The overall accuracies were 0.65, and the positive predictive values, sensitivities, specificities, AUCs, and F scores were stable in the test cohort. The classification abilities were also fair for all ML models. The AUCs for LVO of logistic regression, random forests, and XGBoost were 0.89, 0.89, and 0.88, respectively, in the test cohort; these values were higher than those of previously reported prediction models for LVO. The ML models developed to predict the probability and type of stroke at the prehospital stage had superior predictive ability.
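A minimal sketch of the model comparison is shown below: the three algorithms named in the abstract are trained on synthetic stand-in data and evaluated with a one-vs-rest AUC for the LVO class. The feature set, class balance, and preprocessing are assumptions for illustration, not the registry's.

```python
# Sketch of the modelling comparison: logistic regression, random forest,
# and XGBoost trained on prehospital items, with a one-vs-rest AUC for the
# LVO class. Features and labels are synthetic placeholders, not the
# registry data; xgboost must be installed separately.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(3178, 10))              # hypothetical prehospital items
y = rng.integers(0, 4, size=3178)            # 0=LVO, 1=ICH, 2=SAH, 3=CI

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)
models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "XGBoost": XGBClassifier(eval_metric="mlogloss"),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    prob_lvo = model.predict_proba(X_te)[:, 0]   # probability of class 0 (LVO)
    auc = roc_auc_score((y_te == 0).astype(int), prob_lvo)
    print(f"{name}: LVO AUC = {auc:.2f}")
```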

