Anti-melanoma differentiation-associated gene 5 (MDA5) antibody-positive dermatomyositis (DM)-associated interstitial lung disease (ILD) may progress rapidly and carries high mortality within 6 to 12 months. Beyond the prognostic factors already reported, simple but powerful prognostic biomarkers are still needed in practice. In this study, we focused on circulating monocyte and lymphocyte counts and their variation over the early stage of ILD. A total of 351 patients from two inception anti-MDA5 antibody-positive cohorts, with various treatment choices, were included. Lymphocyte count remained lower throughout the first month after admission in non-survivors. Although baseline monocyte counts showed no significant differences, the average monocyte count over the following 4 weeks was also lower in the non-survivor group. Based on the C-index and analysis with the "survminer" R package in the discovery cohort, we chose cutoff values of 0.24 × 10⁹/L for the average monocyte count in weeks 0–2 (Mono W0-2), 0.61 × 10⁹/L for the average lymphocyte count in weeks 0–2 (Lymph W0-2), and 0.78 × 10⁹/L for the average peripheral blood mononuclear cell count in weeks 0–2 (PBMC W0-2) to predict 6-month all-cause mortality. Kaplan–Meier survival curves and hazard ratios adjusted for age, gender, and the number of immunosuppressants used all confirmed that patients with a lower average monocyte, lymphocyte, or PBMC count in the first 2 weeks after admission had a higher 6-month death risk, both in the validation cohort and in the pooled data. Furthermore, flow cytometry showed that non-classical monocytes were significantly lower in patients with anti-MDA5 antibody-positive DM than in healthy controls and in DM patients without anti-MDA5 antibodies. In conclusion, this study elucidated the predictive value of early-stage monocyte and lymphocyte counts and may help rheumatologists understand the possible pathogenesis of this challenging disease.
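The survival comparisons above rest on the Kaplan–Meier estimator. As a purely illustrative sketch (the study itself used R and the "survminer" package, not this code), the estimator can be written in a few lines of Python; the patient data below are invented for demonstration.

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimator.
    times: follow-up time per patient; events: 1 = death, 0 = censored.
    Returns the survival curve as a list of (event time, survival prob) steps."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    surv = 1.0
    curve = []
    i = 0
    while i < len(order):
        t = times[order[i]]
        deaths = 0
        n = at_risk  # number at risk just before time t
        # group all patients sharing this follow-up time
        while i < len(order) and times[order[i]] == t:
            deaths += events[order[i]]
            at_risk -= 1
            i += 1
        if deaths:
            surv *= 1.0 - deaths / n
            curve.append((t, surv))
    return curve

# Toy data: deaths at months 1 and 3, censoring at months 2 and 4.
print(kaplan_meier([1, 2, 3, 4], [1, 0, 1, 0]))  # [(1, 0.75), (3, 0.375)]
```

Each death multiplies the running survival probability by (1 − deaths / at-risk), which is exactly the step pattern seen in published KM curves.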
To explore the relationship between the strength of posterior cervical extensors (PCEs) and cervical sagittal alignment in Hirayama disease (HD) patients.
We analyzed the magnetic resonance imaging (MRI) T2-weighted images and X-rays of 60 HD patients who visited Huashan Hospital from June 2017 to February 2020. These patients presented with adolescent onset and unilateral upper limb weakness with muscle atrophy of the forearm and hand. MRI images were used to measure the cross-sectional area (CSA) of the cervical PCEs. The ratio of muscle CSA to vertebral body area at the same level was defined as R-CSA. Cervical sagittal alignment parameters included the C2–7 Cobb angle, T1 slope, and C2–7 sagittal vertical axis (SVA). The geometric center of each C3–6 vertebral body was assessed relative to the line connecting the C2 inferior endplate and the C7 upper endplate; when the geometric center lay behind this line, a "local kyphotic deformity" was recorded. The extent of the kyphotic deformity across the involved vertebral bodies was quantified by measuring the local kyphosis angle (LKA). Spearman correlation analysis (α = 0.05) was used to determine the relationship between R-CSA and the sagittal parameters, and ROC curves were used to analyze the sensitivity and specificity of the relevant variables.
The Spearman correlation test revealed that R-CSA correlated negatively with the T1 slope (S = 0.34, r = 0.34, p = 0.01) and LKA (S = 0.44, r = 0.5, p = 0.01), but did not correlate with the C2–7 Cobb angle (S = 0.20, p = 0.12) or C2–7 SVA (S = −0.17, p = 0.46). ROC curve analysis showed that the areas under the curve (AUCs) of the T1 slope and LKA were 0.6696 and 0.7646, respectively (T1 slope: cutoff value 17.2°, sensitivity 0.5806, specificity 0.7241, p < 0.05; LKA: cutoff value −14°, sensitivity 1, specificity 0.5333, p < 0.05).
In patients with Hirayama disease, the strength of the posterior cervical extensors and cervical sagittal alignment are closely related, and the local kyphosis angle can serve as a reference for PCE strength. These results indicate weakness of the PCEs, which may leave the cervical spine of HD patients less stable. Patients with Hirayama disease should therefore perform strengthening exercises for the PCEs.
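The Spearman correlation used above is simply Pearson's correlation computed on ranks. A minimal pure-Python sketch (illustrative only; the study's statistical software is not specified in the abstract):

```python
def _ranks(xs):
    """Average ranks (1-based); tied values share the mean of their positions."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of positions i..j, converted to 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the rank vectors."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

print(spearman_rho([1, 2, 3, 4], [10, 20, 30, 40]))  # 1.0 (perfect monotone)
```

Because only ranks enter the formula, ρ is insensitive to the units of R-CSA or the angle measurements, which is why it suits mixed radiographic parameters.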
Objective: To explore inter-observer agreement in the evaluation of the automated breast volume scanner (ABVS) for breast masses. Methods: A total of 846 breast masses in 630 patients underwent ABVS examination. The imaging data were independently interpreted by a senior and a junior radiologist according to mass size ([Formula: see text][Formula: see text]cm, [Formula: see text][Formula: see text]cm and total). We assessed inter-observer agreement on BI-RADS lexicons and on the unique descriptors of ABVS coronal planes. Using BI-RADS 3 or 4a as a cutoff value, the diagnostic performances for 331 masses with pathological results in 253 patients were assessed. Results: The overall agreements were substantial for BI-RADS lexicons ([Formula: see text]–0.779) and for the characteristics on the coronal plane of ABVS ([Formula: see text]), except for associated features ([Formula: see text]). However, the overall agreement was moderate for orientation ([Formula: see text]) for masses [Formula: see text][Formula: see text]cm. The agreements were substantial to almost perfect for categories 2, 3, 4, 5 and overall ([Formula: see text]–0.918), but moderate to substantial for categories 4a ([Formula: see text]), 4b ([Formula: see text]), and 4c ([Formula: see text]), except for category 4b of the masses [Formula: see text][Formula: see text]cm ([Formula: see text]). Moreover, for radiologists 1 and 2, there were no significant differences in sensitivity, specificity, accuracy, or positive and negative predictive values with BI-RADS 3 or 4a as a cutoff value ([Formula: see text] for all). Conclusion: ABVS is a reliable imaging modality for the assessment of breast masses, with good inter-observer agreement.
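Inter-observer agreement of the kind reported above is conventionally quantified with Cohen's kappa (the values hidden behind the "[Formula: see text]" placeholders). A minimal illustrative sketch, not the study's actual computation:

```python
def cohens_kappa(ratings1, ratings2):
    """Cohen's kappa: chance-corrected agreement between two raters
    who assign each item a categorical label (e.g. a BI-RADS category)."""
    assert len(ratings1) == len(ratings2)
    n = len(ratings1)
    categories = set(ratings1) | set(ratings2)
    # observed proportion of items where the two raters agree
    po = sum(a == b for a, b in zip(ratings1, ratings2)) / n
    # agreement expected by chance, from each rater's marginal frequencies
    pe = sum((ratings1.count(c) / n) * (ratings2.count(c) / n)
             for c in categories)
    return (po - pe) / (1 - pe)

# Toy example: two raters on four masses, categories "3" and "4a".
print(cohens_kappa(["3", "3", "4a", "4a"], ["3", "3", "4a", "4a"]))  # 1.0
```

By convention, kappa values of 0.41–0.60 are read as "moderate", 0.61–0.80 as "substantial", and above 0.80 as "almost perfect" agreement, which is the vocabulary the abstract uses.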
The nutritional risk index (NRI) is an index, based on ideal body weight, that combines body weight and serum albumin levels. It has been used to identify patients at risk of postoperative complications and to predict the outcome of major surgeries; however, data remain limited for breast cancer patients treated with neoadjuvant chemotherapy (NACT). This research explores the clinical and prognostic significance of the NRI in breast cancer patients. A total of 785 breast cancer patients (477 who received NACT and 308 who did not) were enrolled in this retrospective study. The optimal NRI cutoff value was determined by receiver operating characteristic (ROC) curve analysis, and patients were then classified into a low-NRI group (<112) and a high-NRI group (≥112). Univariate and multivariate Cox regression survival analyses demonstrated that the NRI independently predicted both disease-free survival (DFS) and overall survival (OS) [P = 0.019, hazard ratio (HR): 1.521, 95% CI: 1.071–2.161 and P = 0.004, HR: 1.415, 95% CI: 1.119–1.789; and P = 0.026, HR: 1.500, 95% CI: 1.051–2.143 and P < 0.001, HR: 1.547, 95% CI: 1.221–1.959]. According to the optimal NRI cutoff value, patients with a high NRI had longer mean DFS and OS than those with a low NRI (63.47 vs. 40.50 months; 71.50 vs. 56.39 months). High-NRI patients also had significantly longer mean DFS and OS than low-NRI patients in both early-stage breast cancer (χ2 = 9.0510, P = 0.0026 and χ2 = 9.2140, P = 0.0024) and advanced breast cancer (χ2 = 6.2500, P = 0.0124 and χ2 = 5.8880, P = 0.0152), and mean DFS and OS were significantly longer in high-NRI than in low-NRI patients across the different molecular subtypes.
The common toxicities after NACT were hematologic and gastrointestinal reactions, and the NRI had no statistically significant effect on toxicities except nausea (χ2 = 9.2413, P = 0.0024), mouth ulcers (χ2 = 4.8133, P = 0.0282), anemia (χ2 = 8.5441, P = 0.0140), and leukopenia (χ2 = 11.0951, P = 0.0039). The NRI serves as a minimally invasive, easily accessible, and convenient prognostic tool for evaluating breast cancer prognosis and treatment efficacy, and may help doctors select more efficient or appropriate measures to better treat breast cancer.
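The abstract does not spell out the NRI formula. The widely used Buzby definition, NRI = 1.519 × serum albumin (g/L) + 41.7 × (present weight / usual weight), matches the description of an albumin-and-body-weight index and is sketched below with the study's ROC-derived cutoff of 112; treat the formula choice as an assumption, not a statement of this paper's method.

```python
def nutritional_risk_index(albumin_g_per_l, present_weight_kg, usual_weight_kg):
    """Buzby-style NRI (assumed formula; the abstract does not state it):
    1.519 * serum albumin (g/L) + 41.7 * (present / usual body weight)."""
    return 1.519 * albumin_g_per_l + 41.7 * (present_weight_kg / usual_weight_kg)

def nri_group(nri, cutoff=112.0):
    """Dichotomize by the study's ROC-derived cutoff of 112."""
    return "high" if nri >= cutoff else "low"

# Albumin 45 g/L at stable weight -> NRI ~110.1, i.e. the low-NRI group here.
value = nutritional_risk_index(45.0, 60.0, 60.0)
print(round(value, 3), nri_group(value))
```

Note that 112 is far above the classical NRI malnutrition thresholds (~97.5/100); the study derived its own cutoff from ROC analysis of survival, so the two scales should not be conflated.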
The combined hemoglobin, albumin, lymphocyte, and platelet (HALP) score has been confirmed as an important risk biomarker in several cancers. Hence, we aimed to evaluate the prognostic value of the HALP score in patients with non-metastatic upper tract urothelial carcinoma (UTUC). We retrospectively enrolled 533 of the 640 patients from two centers (315 and 325 patients, respectively) who underwent radical nephroureterectomy (RNU) for UTUC. The cutoff value of the HALP score was determined with the Youden index from receiver operating characteristic (ROC) curve analysis. The relationship between postoperative survival outcomes and the preoperative HALP level was assessed using Kaplan-Meier analysis and Cox regression analysis. The resulting cutoff value was 28.67, and patients were divided into HALP < 28.67 and HALP ≥ 28.67 groups. Kaplan-Meier analysis and the log-rank test revealed that HALP was significantly associated with overall survival (OS) (P < 0.001) and progression-free survival (PFS) (P < 0.001). Multivariate analysis demonstrated that a lower HALP score was an independent risk factor for OS (HR = 1.54, 95% CI 1.14-2.01, P = 0.006) and PFS (HR = 1.44, 95% CI 1.07-1.93, P = 0.020). Nomograms incorporating the HALP score predicted OS and PFS more accurately than those without it. In subgroup analysis, the HALP score could also stratify patients' survival across different pathologic T stages. Therefore, the pretreatment HALP score was an independent prognostic factor for OS and PFS in UTUC patients undergoing RNU.
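Cutoff selection via the Youden index, as described above, amounts to scanning candidate thresholds and keeping the one that maximizes sensitivity + specificity − 1. An illustrative sketch on toy data (for HALP, low scores carry risk, so "positive" here means a score below the threshold):

```python
def youden_cutoff(scores, labels):
    """Pick the threshold maximizing Youden's J = sensitivity + specificity - 1.
    labels: 1 = event (e.g. death), 0 = no event. Low scores are assumed
    riskier, so a score below the threshold counts as test-positive."""
    positives = sum(labels)
    negatives = len(labels) - positives
    best_t, best_j = None, -1.0
    for t in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s < t and y == 1)
        tn = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        j = tp / positives + tn / negatives - 1
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

# Toy cohort: the two patients with events have the two lowest scores,
# so the threshold separating them is chosen with J = 1.
print(youden_cutoff([10, 20, 30, 40], [1, 1, 0, 0]))  # (30, 1.0)
```

ROC software reports the same quantity by locating the point on the curve farthest (vertically) from the chance diagonal.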
Background: Serum carcinoembryonic antigen (CEA) is an important biomarker for diagnosis, prognosis, recurrence and metastasis monitoring, and evaluation of the effect of chemotherapy in colorectal cancer (CRC). However, few studies have focused on the role of early postoperative CEA in the prognosis of stage II CRC. Methods: Patients with stage II CRC diagnosed between January 2007 and December 2015 were included. Receiver operating characteristic (ROC) curves were used to obtain the cutoff values of early postoperative CEA, the CEA ratio, and the CEA absolute value. The areas under the curves (AUCs) were used to estimate the predictive abilities of CEA and T stage. Stepwise regression was used to screen the factors included in the Cox regression analysis. Cox regression before and after propensity score (PS) adjustment, together with sensitivity analysis, was used to identify the relationship between early postoperative CEA and prognosis, and a meta-analysis was performed to verify the results. Kaplan-Meier survival curves were used to estimate the effects of CEA on prognosis. Results: We included 1081 eligible patients. ROC curves suggested a cutoff value for early postoperative CEA of 3.66 ng/ml (P < 0.001), and the AUC showed that early postoperative CEA was the most significant prognostic marker in stage II CRC (P = 0.0189). Cox regression and sensitivity analyses, both before and after PS adjustment, revealed that elevated early postoperative CEA was the strongest independent prognostic factor for OS, DFS, and CSS (P < 0.001). Survival analysis revealed that patients with elevated early postoperative CEA had lower OS (53.62% vs. 84.16%), DFS (50.03% vs. 86.75%), and CSS (61.77% vs. 90.30%) than patients with normal early postoperative CEA (P < 0.001). When the postoperative CEA was positive, the preoperative CEA level had no significant effect on prognosis (all P > 0.05).
Patients with a CEA ratio ≤ 0.55 or a CEA absolute value ≤ −0.98 had a worse prognosis (all P < 0.001). Survival analysis suggested that adjuvant chemotherapy may improve CSS in stage II CRC patients with elevated early postoperative CEA (P = 0.040). Conclusions: Early postoperative CEA was a better prognostic biomarker than T stage or preoperative CEA in stage II CRC patients, and has the potential to serve as a high-risk factor guiding the prognosis and treatment of stage II CRC.
To determine the role of tear lymphotoxin-α (LT-α) in chronic ocular graft-versus-host disease (oGVHD).
Twenty-two chronic oGVHD and 17 control tear samples were collected, and commercial test strips were used to measure LT-α concentrations. Concentration differences between patients with and without oGVHD were assessed with the Mann-Whitney U test, and the correlation between LT-α levels and ophthalmic parameters was analyzed using Spearman's test.
The concentration of LT-α was significantly lower in oGVHD patients than in controls. LT-α levels were significantly correlated with the OSDI, NIH eye score, T-BUT, and CFS among all participants. ROC analysis revealed an area under the curve for LT-α of 0.847, with a cutoff value of 0.203 ng/mL for chronic oGVHD diagnosis.
Our study revealed a significant decrease of tear LT-α in oGVHD, suggesting LT-α as a promising marker for the diagnosis of chronic oGVHD.
Background: Patients under maintenance hemodialysis are at increased risk of malnutrition, arising from a multitude of factors. The present study aims to assess the prevalence of malnutrition among maintenance hemodialysis patients using both the modified subjective global assessment (SGA) score and body mass index (BMI), compare the two, assess the sensitivity and specificity of BMI for detecting malnutrition, and determine a new BMI cutoff value that better represents the nutritional status of maintenance hemodialysis patients.
Methods: This cross-sectional study was conducted in the hemodialysis units of Bangabandhu Sheikh Mujib Medical University, Sir Salimullah Medical College Mitford Hospital, BIRDEM General Hospital, and the National Institute of Kidney Diseases & Urology among 80 adult CKD patients who had been on regular (≥2 sessions per week) maintenance hemodialysis for more than 3 months without any acute infection, during the period of July 2016 to June 2017. Nutritional assessment was done for each patient using the modified SGA score along with BMI. Sensitivity analysis of the WHO-recommended BMI cutoff value was performed using the modified SGA score as the gold-standard test for detection of malnutrition among the respondents. An ROC curve was used to estimate the best-fitting BMI cutoff value, i.e., the one showing the highest sensitivity, specificity, and accuracy for detecting malnutrition among maintenance hemodialysis patients.
Results: The study participants were predominantly male (66.3%) and from the 45-59-year age group (36.3%). The modified SGA score identified 90.0% of the study population as malnourished. The WHO-recommended cutoff value of 18.5 kg/m2 was also used to detect malnutrition, identifying 13.8% as malnourished, with a sensitivity of 12.5%, a specificity of 75.0%, and an accuracy of 18.8%. Using the ROC curve, 23.1 kg/m2 was found to be the best-fitting BMI cutoff value for detecting malnutrition in this population, with a sensitivity of 47.2%, specificity of 37.5%, and accuracy of 46.3%.
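The figures reported for the 18.5 kg/m2 cutoff are internally consistent: with 80 patients, 90% (72) malnourished by modified SGA, a 12.5% sensitivity implies 9 true positives and a 75% specificity implies 6 true negatives. The 2×2 table below is reconstructed from those percentages (the counts are inferred, not taken from the paper's tables):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity and accuracy from a 2x2 confusion table."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return sensitivity, specificity, accuracy

# Counts inferred from the reported figures: 80 patients, 72 malnourished
# by modified SGA; BMI < 18.5 flags 11 patients (9 true, 2 false positives).
sens, spec, acc = diagnostic_metrics(tp=9, fp=2, fn=63, tn=6)
print(sens, spec, acc)  # 0.125 0.75 0.1875
```

This reproduces the reported 12.5% sensitivity, 75.0% specificity, and 18.8% (18.75%) accuracy, and makes the source of the low accuracy obvious: 63 malnourished patients fall above the 18.5 kg/m2 threshold.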
Conclusion: BMI showed low sensitivity for detecting malnutrition among patients under maintenance hemodialysis compared with the modified SGA score and should be avoided as a screening tool; however, the 23.1 kg/m2 BMI cutoff value showed potential as an easy-to-use and quick tool for detecting malnutrition in such patients. Further studies with larger sample sizes could shed more light on this.
JOPSOM 2021; 40(1): 14-21
Background: Hematuria is one of the most common and earliest signs of diseases of the genitourinary system and can be classified as either glomerular or non-glomerular in origin. The percentage of dysmorphic RBCs (%dRBC) in urine has long been used as a diagnostic tool for differentiating glomerular from non-glomerular hematuria. Recent studies indicate that the urinary albumin-total protein ratio (uAPR) can also be used as a diagnostic tool in this regard. This study aimed to evaluate the sensitivity and specificity of uAPR as a diagnostic tool for detecting glomerular hematuria in comparison with the %dRBC in urine.
Methods: This cross-sectional study enrolled 96 patients with hematuria. Fresh urine samples were collected from each subject to determine the %dRBC and to calculate uAPR. Receiver operating characteristic curve analysis was done on these results to evaluate the sensitivity and specificity of uAPR and %dRBC in differentiating glomerular from non-glomerular hematuria.
Results: uAPR and %dRBC were significantly (p < 0.001) higher among patients with glomerular hematuria than among those with non-glomerular hematuria. At a cutoff value of 0.57 mg/mg, uAPR showed a sensitivity of 81.8% and a specificity of 95.5%; at a cutoff value of 22.5%, %dRBC showed a sensitivity of 54.5% and a specificity of 86.4%.
Conclusion: uAPR has higher sensitivity and specificity than %dRBC in differentiating glomerular from non-glomerular hematuria and can be used as a diagnostic tool.
BIRDEM Med J 2022; 12(1): 51-56
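The uAPR studied above is simply urinary albumin divided by urinary total protein (mg/mg). An illustrative sketch applying the reported 0.57 mg/mg cutoff (the function names are ours, not from the paper, and real use would of course involve the assay workflow the study describes):

```python
def uapr(urinary_albumin_mg, urinary_total_protein_mg):
    """Urinary albumin-to-total-protein ratio (mg/mg)."""
    return urinary_albumin_mg / urinary_total_protein_mg

def likely_glomerular(albumin_mg, total_protein_mg, cutoff=0.57):
    """Classify hematuria as likely glomerular when uAPR meets the cutoff,
    since the study found uAPR higher in glomerular hematuria."""
    return uapr(albumin_mg, total_protein_mg) >= cutoff

# Albumin-dominant proteinuria (ratio 0.60) falls above the 0.57 cutoff.
print(likely_glomerular(60, 100))  # True
```

The appeal of the ratio is that it needs only two routine quantitative assays, whereas %dRBC requires phase-contrast microscopy and an experienced examiner.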