Human chorionic gonadotropin cutoff value determined by receiver operating characteristic curve analysis is useful but not absolute for determining pregnancy outcomes

2016 ◽  
Vol 21 (2) ◽  
pp. 120-124
Author(s):  
Maysa M. Khadra ◽  
Mazen A. Freij ◽  
Muataz Q. Al-Ramahi ◽  
Abdullah Y. Al-jamal ◽  
Fida M. Thekrallah ◽  
...  
2019 ◽  
Vol 30 (7-8) ◽  
pp. 221-228
Author(s):  
Shahab Hajibandeh ◽  
Shahin Hajibandeh ◽  
Nicholas Hobbs ◽  
Jigar Shah ◽  
Matthew Harris ◽  
...  

Aims To investigate whether an intraperitoneal contamination index (ICI) derived from combined preoperative levels of C-reactive protein, lactate, neutrophils, lymphocytes and albumin could predict the extent of intraperitoneal contamination in patients with acute abdominal pathology. Methods Patients aged over 18 who underwent emergency laparotomy for acute abdominal pathology between January 2014 and October 2018 were randomly divided into primary and validation cohorts. The proposed ICI was calculated for each patient in each cohort. Receiver operating characteristic curve analysis was performed to assess the discrimination of the index and to identify preoperative ICI cut-off values that could predict the extent of intraperitoneal contamination. Results Overall, 468 patients were included in this study: 234 in the primary cohort and 234 in the validation cohort. The analyses identified ICI values of 24.77 and 24.32 as cut-offs for purulent contamination in the primary cohort (area under the curve (AUC): 0.73, P < 0.0001; sensitivity: 84%, specificity: 60%) and validation cohort (AUC: 0.83, P < 0.0001; sensitivity: 91%, specificity: 69%), respectively. Receiver operating characteristic curve analysis also identified ICI values of 33.70 and 33.41 as cut-offs for feculent contamination in the primary cohort (AUC: 0.78, P < 0.0001; sensitivity: 87%, specificity: 64%) and validation cohort (AUC: 0.79, P < 0.0001; sensitivity: 86%, specificity: 73%), respectively. Conclusions As a predictive measure derived purely from biomarkers, the ICI may be accurate enough to predict the extent of intraperitoneal contamination in patients with acute abdominal pathology and, together with clinical and radiological findings, to facilitate decision-making.
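As a point of reference for how cut-offs of this kind are typically derived, the sketch below computes an ROC curve and the Youden-optimal threshold with scikit-learn. The simulated index scores, group sizes, and distribution parameters are illustrative assumptions, not values taken from the study.

```python
# A minimal sketch of ROC-based cut-off selection via Youden's index,
# on hypothetical data (not the study's patients).
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
ici_clean = rng.normal(20, 6, size=120)        # hypothetical: no purulent contamination
ici_purulent = rng.normal(30, 6, size=114)     # hypothetical: purulent contamination
scores = np.concatenate([ici_clean, ici_purulent])
labels = np.concatenate([np.zeros(120), np.ones(114)])

fpr, tpr, thresholds = roc_curve(labels, scores)
auc = roc_auc_score(labels, scores)

# Youden's J = sensitivity + specificity - 1 = TPR - FPR;
# the threshold maximising J is the usual "optimal" cut-off.
j = tpr - fpr
best = np.argmax(j)
print(f"AUC = {auc:.2f}, cut-off = {thresholds[best]:.2f}, "
      f"sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")
```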


2021 ◽  
Vol 9 (B) ◽  
pp. 1561-1564
Author(s):  
Ngakan Ketut Wira Suastika ◽  
Ketut Suega

Introduction: Coronavirus disease 2019 (Covid-19) can cause abnormalities in coagulation parameters, such as elevated D-dimer levels, especially in severe cases. The purpose of this study was to determine the difference in D-dimer levels between severe Covid-19 cases who survived and those who did not, and to determine the optimal cut-off value of D-dimer levels for predicting in-hospital mortality. Methods: Data were obtained from confirmed Covid-19 patients treated from June to September 2020. The Mann-Whitney U test was used to compare D-dimer levels between surviving and non-surviving patients. The optimal cut-off value and area under the curve (AUC) of the D-dimer level for predicting mortality were obtained by receiver operating characteristic (ROC) curve analysis. Results: A total of 80 patients were included in this study. D-dimer levels were significantly higher in non-surviving patients (median 3.346 mg/ml; range 0.939 – 50.000 mg/ml) than in surviving patients (median 1.201 mg/ml; range 0.302 – 29.425 mg/ml), p = 0.012. A D-dimer level above 1.500 mg/ml was the optimal cut-off value for predicting mortality in severe cases of Covid-19, with a sensitivity of 80.0%, a specificity of 64.3%, and an AUC of 0.754 (95% CI 0.586 – 0.921; p = 0.010). Conclusions: D-dimer levels can be used as a predictor of mortality in severe cases of Covid-19.
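The two analysis steps named above, a Mann-Whitney U comparison and an ROC AUC with a confidence interval, can be sketched as follows on hypothetical D-dimer values. The study's own data are not reproduced; the lognormal parameters are assumptions, and the bootstrap CI stands in for whatever interval method the authors used.

```python
# A sketch of a Mann-Whitney U test plus a bootstrap 95% CI for the AUC,
# on simulated (hypothetical) D-dimer values.
import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
ddimer_survived = rng.lognormal(0.2, 0.9, size=56)   # hypothetical survivors
ddimer_died = rng.lognormal(1.1, 0.9, size=24)       # hypothetical non-survivors

stat, p = mannwhitneyu(ddimer_died, ddimer_survived, alternative="greater")
print(f"Mann-Whitney U p = {p:.4f}")

scores = np.concatenate([ddimer_survived, ddimer_died])
labels = np.concatenate([np.zeros(56), np.ones(24)])
auc = roc_auc_score(labels, scores)

# Percentile bootstrap for the AUC's 95% CI.
boot = []
for _ in range(2000):
    idx = rng.integers(0, len(labels), len(labels))
    if labels[idx].min() == labels[idx].max():   # resample must contain both classes
        continue
    boot.append(roc_auc_score(labels[idx], scores[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"AUC = {auc:.3f} (bootstrap 95% CI {lo:.3f} – {hi:.3f})")
```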


2020 ◽  
pp. 263208432097225
Author(s):  
Ruwanthi Kolamunnage-Dona ◽  
Adina Najwa Kamarudin

The performance of a biomarker is defined by how well it distinguishes between healthy and diseased individuals. This assessment is usually based on the baseline value of the biomarker, i.e. the value at the earliest time point of patient follow-up, and is quantified by receiver operating characteristic (ROC) curve analysis. However, the observed baseline value is often subject to measurement error due to imperfect laboratory conditions and limited machine precision. Failing to adjust for measurement error may underestimate the true performance of the biomarker, and in a direct comparison useful biomarkers could be overlooked. We develop a novel approach to account for measurement error when evaluating the performance of the baseline biomarker value for future survival outcomes. We adopt a joint longitudinal and survival data modelling formulation and use the available longitudinally repeated values of the biomarker to adjust for measurement error in time-dependent ROC curve analysis. Our simulation study shows that the proposed measurement error-adjusted estimator is more efficient for evaluating the performance of the biomarker than estimators ignoring the measurement error. The proposed method is illustrated using the Mayo Clinic primary biliary cirrhosis (PBC) study.
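The authors' estimator couples a longitudinal biomarker model with a survival model, which is beyond a short example. As a much simpler point of reference, the sketch below simulates the phenomenon the paper addresses: an error-prone baseline measurement yields a lower (attenuated) time-dependent AUC than the underlying true biomarker. All quantities are simulated assumptions, censoring is ignored, and this is not the authors' method.

```python
# A sketch of a naive cumulative/dynamic time-dependent AUC at horizon t,
# showing attenuation from measurement error (simulated data, no censoring).
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n = 300
true_marker = rng.normal(0, 1, n)
observed_baseline = true_marker + rng.normal(0, 0.5, n)   # add measurement error
event_time = rng.exponential(np.exp(-true_marker))        # higher marker -> earlier event

t = 1.0                          # evaluation horizon
cases = event_time <= t          # "cases" = event occurred by time t
auc_true = roc_auc_score(cases, true_marker)
auc_obs = roc_auc_score(cases, observed_baseline)
print(f"AUC(t={t}) with true marker:          {auc_true:.3f}")
print(f"AUC(t={t}) with error-prone baseline: {auc_obs:.3f}  (attenuated)")
```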


2005 ◽  
Vol 95 (6) ◽  
pp. 679-691 ◽  
Author(s):  
William W. Turechek ◽  
Wayne F. Wilcox

Apple scab (Venturia inaequalis) is a perennial threat to apple production in temperate climates throughout the world. In the eastern United States, apple scab is managed almost exclusively through the regular application of fungicides. Management of the primary phase of disease is focused on preventing infection by ascospores. Management of secondary cycles of infection is largely dependent on how well primary infections were controlled. In this study, we used receiver operating characteristic curve analysis to evaluate how well mid-season assessments of the incidence of apple scab on cluster leaves, clusters (i.e., the whorl of cluster leaves), or immature fruit serve as predictors of apple scab on harvested fruit (harvest scab), and whether these mid-season assessments could be used reliably to manage scab under various damage thresholds. Results showed that assessment of scab on immature fruit was superior to assessments made on clusters or cluster leaves for predicting harvest scab at all damage thresholds evaluated. A management action threshold of 7% scab incidence on immature fruit was identified by Youden's index as the optimal action threshold to prevent harvest scab incidence from exceeding 5%. Action thresholds could be higher or lower than 7% when economic assumptions were factored into the decision process. The utility of such a predictor is discussed.
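A weighted variant of Youden's index illustrates how economic assumptions can move the action threshold up or down, as the abstract notes. The incidence distributions, group sizes, and weights below are hypothetical, not the study's; a weight of 0.5 recovers the standard Youden criterion.

```python
# A sketch of cost-weighted threshold selection on hypothetical
# mid-season scab-incidence data.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(3)
# Hypothetical mid-season scab incidence (%) on immature fruit per orchard block
midseason_ok = rng.beta(1.5, 20, 150) * 100    # harvest scab stayed <= 5%
midseason_bad = rng.beta(4, 12, 60) * 100      # harvest scab exceeded 5%
scores = np.concatenate([midseason_ok, midseason_bad])
labels = np.concatenate([np.zeros(150), np.ones(60)])

fpr, tpr, thresholds = roc_curve(labels, scores)
# Weighted Youden criterion: maximise w*TPR - (1-w)*FPR. w = 0.5 is standard
# Youden's J; w > 0.5 penalises missed control (false negatives) more,
# pushing the action threshold down, and vice versa.
for w in (0.3, 0.5, 0.7):
    jw = w * tpr - (1 - w) * fpr
    best = np.argmax(jw)
    print(f"w = {w:.1f}: act when mid-season incidence >= {thresholds[best]:.1f}%")
```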


1995 ◽  
Vol 7 (4) ◽  
pp. 488-493 ◽  
Author(s):  
Raymond W. Sweeney ◽  
Robert H. Whitlock ◽  
Carol L. Buckley ◽  
Pam A. Spencer

The performance of a commercially available ELISA for detection of antibodies to Mycobacterium paratuberculosis was evaluated using sera from 1,146 cows. Samples were from uninfected cattle, subclinically infected cattle shedding low numbers of organisms in feces, subclinical heavy shedders, clinical cases, and randomly selected cattle in a slaughterhouse survey for paratuberculosis. The overall sensitivity of the test, using the manufacturer's recommended cutoff, was 45% ± 4.8%, and the specificity was 99% ± 0.9%. The ELISA result was significantly correlated with the number of colonies of M. paratuberculosis detected by fecal culturing. The sensitivity of the test was highest for clinical cases of paratuberculosis (87% ± 8.4%) and lowest for subclinical, light-shedding cattle (15% ± 6.6%). Changing the cutoff point did not improve the performance of the test. Evaluating ELISA results with a kinetics-based method reduced plate-to-plate variation but did not improve the performance of the test as judged by receiver operating characteristic curve analysis.
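Estimates such as "45% ± 4.8%" follow from binomial proportions in a 2×2 table. The sketch below uses hypothetical counts chosen only to roughly reproduce the reported magnitudes, and assumes the ± denotes a normal-approximation standard error (the abstract does not say which interval is meant).

```python
# A sketch of sensitivity and specificity with normal-approximation
# standard errors, from hypothetical 2x2 counts.
import math

def proportion_with_se(successes: int, n: int) -> tuple[float, float]:
    """Return (proportion, standard error) for a binomial proportion."""
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)
    return p, se

tp, fn = 48, 59   # hypothetical: ELISA-positive / -negative infected cows
tn, fp = 121, 1   # hypothetical: ELISA-negative / -positive uninfected cows

sens, sens_se = proportion_with_se(tp, tp + fn)
spec, spec_se = proportion_with_se(tn, tn + fp)
print(f"sensitivity = {sens:.0%} ± {sens_se:.1%}")
print(f"specificity = {spec:.0%} ± {spec_se:.1%}")
```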

