Diagnosis of subclinical ketosis in dairy cows

2019, Vol 35 (2), pp. 111-125
Author(s): Radojica Djokovic, Zoran Ilic, Vladimir Kurcubic, Milos Petrovic, Marko Cincovic, et al.

Ketosis is a common disease in high-producing dairy cows during the early lactation period. Subclinical ketosis (SCK) and periparturient diseases account for considerable economic and welfare losses in dairy cows. Subclinical ketosis poses an increased risk of production-related diseases such as clinical ketosis, displaced abomasum, retained placenta, lameness, mastitis and metritis. Production efficiency decreases (lower milk production, poor fertility, and increased culling rates), which results in economic losses. Increased concentrations of circulating ketone bodies, predominantly β-hydroxybutyrate (BHB), without clinical signs of ketosis define SCK. It is characterized by increased levels of ketone bodies in the blood, urine and milk. The gold standard test for ketosis is blood BHB, as this ketone body is more stable in blood than acetone or acetoacetate. The most commonly used cut-points for subclinical ketosis are 1.2 mmol/L or 1.4 mmol/L of BHB in the blood. Clinical ketosis generally involves much higher levels of BHB, about 3.0 mmol/L or more. Detection of SCK is usually carried out by testing ketone body concentrations in blood, urine and milk. A variety of laboratory and cowside tests are available for monitoring ketosis in dairy herds, but no cowside test has perfect sensitivity and specificity compared with blood BHB as the gold standard. The aim of this review is to provide an overview of diagnostic tests for SCK in dairy cows, including laboratory and cowside tests.
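To illustrate how these cut-points and accuracy measures fit together, here is a minimal Python sketch (not from the review; the sample data, function names, and test results are hypothetical) that classifies cows by blood BHB and scores a cowside test against blood BHB as the gold standard:

```python
# Illustrative sketch: classifying cows by blood BHB using the cut-points
# quoted in the abstract, then scoring a hypothetical cowside test against
# blood BHB as the gold standard.

SCK_CUTOFF = 1.2       # mmol/L; 1.4 mmol/L is the alternative cut-point
CLINICAL_CUTOFF = 3.0  # mmol/L

def classify(bhb_mmol_per_l: float) -> str:
    """Classify ketosis status from blood BHB (mmol/L)."""
    if bhb_mmol_per_l >= CLINICAL_CUTOFF:
        return "clinical ketosis"
    if bhb_mmol_per_l >= SCK_CUTOFF:
        return "subclinical ketosis"
    return "negative"

def sensitivity_specificity(gold: list[bool], test: list[bool]) -> tuple[float, float]:
    """Sensitivity and specificity of a cowside test vs the gold standard."""
    tp = sum(g and t for g, t in zip(gold, test))
    tn = sum(not g and not t for g, t in zip(gold, test))
    fn = sum(g and not t for g, t in zip(gold, test))
    fp = sum(not g and t for g, t in zip(gold, test))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical blood BHB values and cowside test results for five cows
bhb = [0.6, 1.3, 3.2, 0.9, 1.5]
print([classify(v) for v in bhb])
gold = [v >= SCK_CUTOFF for v in bhb]
cowside = [False, True, True, False, False]        # one false negative
print(sensitivity_specificity(gold, cowside))      # (0.67, 1.0)
```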

2017, Vol 34, pp. 101-106
Author(s): A.K. Sah, R. Bastola, Y.R. Pandeya, L. Pathak, M.P. Acharya, et al.

The present study was carried out to assess the accuracy of a commercially available progesterone ELISA kit at the NCRP farm in the fiscal year 2015/16. Twenty crossbred Jersey and Holstein dairy cows were sampled at different time points post-insemination. Blood serum was collected from these animals and progesterone was quantified with the commercially available progesterone ELISA kit. Pregnancy diagnosis was performed by rectal palpation and ultrasonography (USG) as the gold standard, against which the accuracy of the ELISA kit was compared. The ELISA kit achieved an accuracy of only 80%, with high sensitivity (92%) but very low specificity (57%) at the 95% confidence level. Of the twenty artificially inseminated cows, thirteen were pregnant and seven were non-pregnant by the gold standard test, with significantly different mean progesterone concentrations (P < 0.05) of 8.93±1.10 ng/ml and 4.36±1.21 ng/ml, respectively. Hence, the kit can be used for early pregnancy diagnosis from 24 days after insemination; however, progesterone quantification by ELISA is not a confirmatory test for pregnancy diagnosis, given its accuracy of only 80%.
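The reported figures are internally consistent; the following sketch (illustrative only, not from the paper) reconstructs the 2x2 table implied by the abstract:

```python
# Reconstructing the 2x2 table implied by the abstract: 13 pregnant and
# 7 non-pregnant cows by the gold standard (rectal palpation + USG).
pregnant, open_ = 13, 7
tp = round(0.92 * pregnant)   # sensitivity 92% -> 12 true positives
tn = round(0.57 * open_)      # specificity 57% -> 4 true negatives
fn, fp = pregnant - tp, open_ - tn

accuracy = (tp + tn) / (pregnant + open_)
print(tp, tn, fn, fp)   # 12 4 1 3
print(accuracy)         # 0.8, matching the reported 80%
```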


2020, Vol 18 (1)
Author(s): Tess Baker, David B. Wolfson

Background: Shortly after the introduction of the first licensed vaccine against dengue fever (Dengvaxia), a serious outcome was attributed to the vaccine: vaccinated individuals without a previous dengue infection were at increased risk of developing severe dengue if subsequently infected by a heterologous serotype. In response, the World Health Organization recommended vaccination in regions where the seroprevalence of dengue is at least 50% and, ideally, greater than 70%. Hence, accurate estimates of regional seroprevalence are crucial for both population vaccination strategies and test-then-vaccinate decisions at the individual level. Currently, estimates of seroprevalence are based on surveys that use screening tests for previous dengue exposure. These estimates must account for the sensitivity and specificity of the screening tests, which can only be measured against a test regarded as the gold standard for identifying those who have been exposed. There is, however, no easily accessible gold standard test for dengue. Methods: We propose an approach to estimate the seroprevalence of dengue that does not require a gold standard test by modeling: (i) the uncertainty in the sensitivity and specificity, and (ii) the uncertainty in the “true” disease prevalence. Results: Through simulations, we demonstrate the effect of these extra sources of uncertainty on post-test estimates of dengue seroprevalence. Our simulations show, for example, that in a population of 1 million it is possible to overestimate or underestimate the number who are truly seropositive by as much as 76,000. Conclusions: Current estimates can substantially overestimate or underestimate the true probability of previous exposure when these extra sources of variability are not accounted for.
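The following is not the authors' model, but a generic illustration of the mechanism: propagating uncertainty in sensitivity and specificity through the classical Rogan-Gladen prevalence correction, using hypothetical Beta priors and a hypothetical observed positive fraction:

```python
# Generic illustration: uncertainty in Se and Sp propagated into a
# seroprevalence estimate via the Rogan-Gladen correction,
#   pi = (p_apparent + Sp - 1) / (Se + Sp - 1)
import numpy as np

rng = np.random.default_rng(0)
n_sims = 100_000
p_apparent = 0.55                 # hypothetical observed positive fraction

# Hypothetical Beta priors for test sensitivity and specificity
se = rng.beta(90, 10, n_sims)     # mean ~0.90
sp = rng.beta(95, 5, n_sims)      # mean ~0.95

pi = (p_apparent + sp - 1) / (se + sp - 1)
pi = np.clip(pi, 0, 1)            # the estimator can fall outside [0, 1]

# Spread of plausible true seroprevalence once Se/Sp uncertainty is included
print(np.quantile(pi, [0.025, 0.5, 0.975]))
```

Even with fairly tight priors, the resulting interval for the true seroprevalence is noticeably wider than a naive point estimate would suggest, which is the qualitative point the paper's simulations make at population scale.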


2021, jim-2021-001962
Author(s): James H Clark, Sharon Pang, Robert M Naclerio, Matthew Kashima

Transnasal swab testing for the detection of SARS-CoV-2 is well established. The Centers for Disease Control and Prevention advocates swabbing the anterior nares, middle turbinate, or nasopharynx for specimen collection, depending on available local resources. The purpose of this review is to investigate complications related to transnasal SARS-CoV-2 testing, with specific attention to the specimen collection site and swab approach. The literature demonstrates that while nasopharyngeal swabbing is associated with an increased risk of complications, it should remain the gold-standard test because of its greater diagnostic accuracy relative to anterior nasal and middle turbinate swabs.


2021, Vol 1 (1), pp. 145-151

The article presents the results of a study of the sensitivity and specificity of two types of ketone test strips for detecting latent ketosis in dairy cows. During the experiment, KetoPHAN and Keto-Test strips, which indicate the concentration of ketone bodies in milk and urine, were used in 108 Holstein-Friesian cows 2-15 days after calving. Blood, urine and milk samples were taken simultaneously from the same cows. Subclinical ketosis was determined from the level of β-hydroxybutyrate in blood plasma measured by an enzymatic method (the "gold standard"). The sensitivity and specificity of the test strips were evaluated at various levels of β-hydroxybutyrate. The threshold for subclinical ketosis was set at a β-hydroxybutyrate concentration of 1.2 mmol/L or higher. When evaluating the test strips on urine and milk samples, the best results were obtained at β-hydroxybutyrate levels of 1.4 mmol/L and above. In this range, the sensitivity of the urine test strips was high (95%) and the specificity moderate (70%); the sensitivity and specificity of the milk tests were both high (90% and 96%, respectively). The prevalence of subclinical ketosis was also assessed with an electronic device (FreeStyle Optium). The prevalence of latent ketosis differed across methods: FreeStyle 25.0%, KetoPHAN 48.1%, Keto-Test 13.1%. The figure from the electronic device is close to that obtained with the "gold standard" (22.2%). Although milk and urine test strips perform best at a plasma β-hydroxybutyrate threshold of 1.4 mmol/L, they can be used to predict and control ketosis in dairy cows because of their simplicity and availability.
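A quick piece of arithmetic (not from the article) shows how imperfect specificity alone can push apparent prevalence well above the true value, which is the direction of the gap between the strip-based and gold-standard prevalence figures:

```python
# Illustrative arithmetic: with true prevalence pi, a test with sensitivity
# Se and specificity Sp flags a fraction
#   p_apparent = Se * pi + (1 - Sp) * (1 - pi)
def apparent_prevalence(pi: float, se: float, sp: float) -> float:
    return se * pi + (1 - sp) * (1 - pi)

pi = 0.222                         # "gold standard" prevalence from the abstract
se_urine, sp_urine = 0.95, 0.70    # urine-strip figures from the abstract
print(apparent_prevalence(pi, se_urine, sp_urine))  # ~0.44

# The moderate specificity (70%) generates many false positives; assuming the
# urine figures correspond to the KetoPHAN strip, this is consistent in
# direction with its high apparent prevalence (48.1%) versus 22.2% by the
# gold standard.
```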


2009, Vol 11 (10), pp. 881-884
Author(s): Annamaria Pratelli, Kadir Yesilbag, Marcello Siniscalchi, Ebru Yalçın, Zeki Yilmaz

Feline sera from Bursa province (Turkey) were assayed for feline coronavirus (FCoV) antibody using an enzyme-linked immunosorbent assay (ELISA). The study was performed on 100 sera collected from cats belonging to catteries or community shelters and to households. The serum samples were initially tested with the virus neutralisation (VN) test and the results were then compared with the ELISA. The VN yielded 79 negative and 21 positive sera, but the ELISA confirmed only 74 as negative. The ELISA-negative sera were also found to be free of feline coronavirus-specific antibodies by Western blotting. Using the VN as the gold standard test, the ELISA had a sensitivity of 100% and a specificity of 93.6%, with an overall agreement of 95%. The Kappa (κ) test indicated high association between the two tests (κ=0.86, 95% confidence interval (CI) 0.743–0.980). The positive predictive value (PPV) was 0.8, and the negative predictive value (NPV) was 0.93. The prevalence of FCoV II antibodies in the sampled population based on the gold standard was 62% (95% CI 44–77%) among multi-cat environments, and 4% (95% CI 1–11%) among single-cat households.
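The reported agreement and kappa can be reproduced from the 2x2 table implied by the abstract; a minimal sketch (not from the paper) of Cohen's kappa:

```python
# Cohen's kappa for the agreement table implied by the abstract:
# VN: 21 positive / 79 negative; ELISA agrees on all 21 positives and on
# 74 of the 79 negatives (5 discordant sera).
tp, fp, fn, tn = 21, 5, 0, 74
n = tp + fp + fn + tn

po = (tp + tn) / n                                           # observed agreement
pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2  # chance agreement
kappa = (po - pe) / (1 - pe)
print(round(po, 2), round(kappa, 2))   # 0.95 0.86, as reported
```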


2014, Vol 9 (2), pp. 45-53
Author(s): S Hossain, A Ghosh, A Chatterjee, G Sarkar, SS Mondal

Objective: This study was done to evaluate the diagnostic value of the protein:creatinine ratio in a single voided urine sample for quantitation of proteinuria, compared with a 24-hour urine sample, in patients with preeclampsia. Methods: A prospective simple random sample study was done on hypertensive pregnant women attending the antenatal clinic or admitted to the Department of Obstetrics and Gynaecology. It included all women being evaluated for preeclampsia, regardless of the alerting sign or symptom, suspected severity or co-morbid conditions. The main measures were the urinary protein to urinary creatinine ratio in a random (spot) sample and the 24-hour urinary protein excretion from a 24-hour urine collection. The data obtained were statistically analyzed. Results: Of the 78 patients with gestational hypertension included in our study, 48 had significant proteinuria (≥300 mg/day). Only 2 patients had proteinuria greater than 3500 mg/day. Among the patients, 50 had a positive protein:creatinine ratio (≥0.3) while 28 had a negative protein:creatinine ratio (<0.3). The P:C ratio correctly identified 44 of the 48 patients with significant proteinuria (when compared with the gold-standard test, i.e., 24-hour urine protein). It also identified 24 of the 30 patients without significant proteinuria. In this study, the protein:creatinine ratio had a sensitivity of 91.67%, a specificity of 80%, a positive predictive value of 88% and a negative predictive value of 85.71%. Conclusions: Our data suggest that the protein:creatinine ratio in a single voided urine sample is a highly accurate test (p < 0.0000001) for discriminating between insignificant and significant proteinuria. Based on these findings, we conclude that random urine protein excretion predicts the amount of 24-hour urine protein excretion with high accuracy. This could be a reasonable alternative to the 24-hour urine collection for detection of significant proteinuria in hospitalised pregnant women with suspected preeclampsia. Journal of College of Medical Sciences-Nepal, 2013, Vol 9, No 2, 45-53. DOI: http://dx.doi.org/10.3126/jcmsn.v9i2.9687
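The four reported accuracy measures follow directly from the counts in the abstract; a short verification sketch (illustrative only, not from the paper):

```python
# The 2x2 table implied by the abstract: 48 patients with significant
# proteinuria (44 detected by the P:C ratio) and 30 without (24 correctly
# negative), giving 50 positive and 28 negative P:C results in total.
tp, fn = 44, 4    # of 48 with significant proteinuria
tn, fp = 24, 6    # of 30 without

sensitivity = tp / (tp + fn)   # 44/48 = 0.9167
specificity = tn / (tn + fp)   # 24/30 = 0.80
ppv = tp / (tp + fp)           # 44/50 = 0.88
npv = tn / (tn + fn)           # 24/28 = 0.8571
print(sensitivity, specificity, ppv, npv)  # matches the reported figures
```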


2013, Vol 31 (31_suppl), pp. 89-89
Author(s): Yvonne Sada, Eric David, Hashem El-Serag, Hardeep Singh, Jessica Davila

Background: The incidence of hepatocellular cancer (HCC) is rising. Practice guidelines provide the recommended approach for HCC diagnosis, but adherence to diagnostic guidelines is unknown. Methods: In a national sample of veterans with confirmed HCC, we performed a retrospective chart review of patients with cirrhosis and a new liver mass on imaging between 2005 and 2011. Clinical data were used to assess adherence to American Association for the Study of Liver Diseases guidelines. Patients with inadequate data to assess guideline adherence (missing liver mass size, imaging technique, or diagnostic report) were excluded. We identified factors that contributed to guideline non-adherence. The initial liver mass date was the first date a liver mass was reported on imaging (CT, MRI, or ultrasound). The gold standard test date was the date a diagnosis of HCC could have been made by guideline-recommended testing and criteria. The diagnosis date was the date a provider documented the diagnosis. Results: We reviewed charts for 380 patients. Overutilization of diagnostic tests after a gold standard test occurred in 112 patients (31%), and 17 (4%) had insufficient tests. Guidelines were not followed in 124 (33%). Of these 124, 68 (55%) had liver masses that increased in size during the diagnostic work-up. The most common factors associated with guideline non-adherence were unnecessary testing, such as biopsy after a gold standard image (43%), and the presence of a contraindication to a guideline-recommended image or biopsy (12%). Patient factors (missed appointments, declining work-up) accounted for only 3% of cases. The median time between the initial liver mass and the gold standard test was 15 days (IQR: 0-99); the median time between the initial liver mass and diagnosis was 50 days (IQR: 12-191). Most diagnoses were made by gastroenterology (51%), followed by primary care (19%) and oncology (10%). Conclusions: One-third of patients with HCC were not diagnosed according to guidelines. Contributing concerns include lack of diagnostic confidence (failure to recognize HCC despite gold-standard evidence) and overtesting, both of which lead to diagnostic delay. Our findings warrant further evaluation of contributory factors to develop interventions that improve the diagnostic process for HCC.

