LENT score in prognostic assessment of malignant pleural effusion – the impact of target therapy

Author(s):  
Sofia Sousa ◽  
João Caldeira ◽  
Ana Figueiredo ◽  
Fernando Barata ◽  
...  

Introduction: Malignant pleural effusion (MPE) is a common manifestation in patients with advanced lung cancer. The LENT score was developed as a risk stratification system to predict the survival of these patients. However, with the discovery of molecular markers and a new era of personalized therapy, prognostic estimation has become a challenging exercise. The aim of this study was to evaluate the performance of the LENT score in predicting MPE survival in EGFR- and ALK-mutated lung adenocarcinoma. Methods: A retrospective single-center study of patients with MPE from lung adenocarcinoma followed between January 2008 and December 2018. Results: Forty-two patients were included in the study (mean age 76.4 ± 12.6 years, 52% female). Of these, 29% had an EGFR gene mutation or ALK gene translocation and received tyrosine kinase inhibitor (TKI) therapy, while the remaining 71% had no identified mutation and received conventional chemotherapy. Based on the LENT score, in the subgroup treated with conventional chemotherapy, 67% were in the moderate-risk category and 33% in the high-risk category, with a median overall survival (OS) of 109 (31-406) and 36 (11-77) days, respectively. In the subgroup treated with targeted therapy, 75% were in the moderate-risk category and 25% in the high-risk category, with a median OS of 1033 (245-1710) and 238 (27-not available) days, respectively. Patients receiving targeted therapy had longer survival than patients receiving conventional chemotherapy in all LENT score risk categories (p<0.05). Conclusions: OS in patients with MPE due to lung adenocarcinoma was similar to that predicted by the LENT score, except in patients with an EGFR mutation or ALK translocation, in whom the LENT score seems to underestimate survival.
Although this study is limited by its sample size, it reveals a present-day inaccuracy of the LENT score, which needs to be reviewed and revalidated in light of recent therapeutic advances.
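The abstract does not restate how the LENT score is computed. As a minimal sketch, assuming the point assignments published by Clive et al. (pleural fluid LDH, ECOG performance status, neutrophil-to-lymphocyte ratio, tumour type; function and group names are illustrative), the scoring and risk banding might look like this:

```python
def lent_score(pleural_ldh: float, ecog: int, nlr: float, tumour_group: str) -> int:
    """Compute a LENT score (0-7); point assignments assumed from Clive et al.

    pleural_ldh  -- pleural fluid LDH in IU/L (>=1500 scores 1 point)
    ecog         -- ECOG performance status 0-4 (0->0, 1->1, 2->2, 3-4->3)
    nlr          -- serum neutrophil-to-lymphocyte ratio (>=9 scores 1 point)
    tumour_group -- illustrative labels for the three published tumour tiers
    """
    tumour_points = {
        "mesothelioma_haematological": 0,
        "breast_gynaecological_renal": 1,
        "lung_other": 2,
    }
    pts = 0
    pts += 1 if pleural_ldh >= 1500 else 0
    pts += min(ecog, 3)
    pts += 1 if nlr >= 9 else 0
    pts += tumour_points[tumour_group]
    return pts

def lent_category(score: int) -> str:
    """Published risk bands: 0-1 low, 2-4 moderate, 5-7 high."""
    return "low" if score <= 1 else "moderate" if score <= 4 else "high"
```

A lung adenocarcinoma patient with pleural LDH 1600 IU/L, ECOG 2 and NLR 10 would score 6 and fall in the high-risk band, the group the study found the score to misestimate under targeted therapy.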

Author(s):  
Ming-Fang Wu ◽  
Chih-An Lin ◽  
Tzu-Hang Yuan ◽  
Hsiang-Yuan Yeh ◽  
Sheng-Fang Su ◽  
...  

Abstract Background Malignant pleural effusion (MPE)-macrophages (Mφ) of lung cancer patients lie on a unique M1/M2 spectrum and show plasticity in the M1–M2 transition. The M1/M2 features of MPE-Mφ and their significance for patient outcomes need to be clarified; furthermore, whether M1 repolarization could benefit treatment remains unclear. Methods A total of 147 stage-IV lung adenocarcinoma patients undergoing MPE drainage were enrolled for profiling and validation of their M1/M2 spectrum. In addition, the impact of the MPE-Mφ signature on overall patient survival was analyzed. The effect of an M1-polarization strategy on the anti-cancer activity of patient-derived MPE-Mφ was examined. Results We found that MPE-Mφ expressed both traditional M1 (HLA-DRA) and M2 (CD163) markers and covered a wide M1/M2 spectrum. Most MPE-Mφ displayed diverse PD-L1 expression patterns, and the low PD-L1 expression group was correlated with higher levels of IL-10. Among these markers, we identified a novel two-gene MPE-Mφ signature, IL-1β and TGF-β1, representing the M1/M2 tendency, which showed strong predictive power for patient outcomes in our MPE-Mφ patient cohort (N = 60, p = 0.013) and The Cancer Genome Atlas Lung Adenocarcinoma dataset (N = 478, p < 0.0001). Significantly, β-glucan worked synergistically with IFN-γ to reverse the risk signature by repolarizing the MPE-Mφ toward the M1 pattern, enhancing anti-cancer activity. Conclusions We placed MPE-Mφ on the M1/M2 spectrum, characterized their plasticity, and described a two-gene M1/M2 signature that could predict the outcome of late-stage lung cancer patients. In addition, we found that "re-education" of these MPE-Mφ toward anti-cancer M1 macrophages using clinically applicable strategies may overcome tumor immune escape and benefit anti-cancer therapies.


2021 ◽  
pp. 097275312110000
Author(s):  
Kanupriya Sharma ◽  
Priya Battu ◽  
Akshay Anand ◽  
Raghuram Nagarathna ◽  
Navneet Kaur ◽  
...  

Background: The Indian Diabetes Risk Score (IDRS) is a screening tool for quantifying the risk of developing diabetes mellitus (DM) in the Indian population. The present study evaluated the level of risk of developing DM in Chandigarh and Panchkula based on the IDRS. Methods: As part of a national diabetes control trial funded by the Ministry of Health and Family Welfare (MoHFW) and the Ministry of AYUSH, Government of India, 1,916 participants from the Chandigarh and Panchkula regions were assessed for the risk of developing DM. Risk assessment was based on the IDRS, whose contributing factors are age, family history, waist circumference, and physical activity. Participants with an IDRS score <30 were in the low-risk category, those with 30–50 in the moderate-risk category, and those with ≥60 in the high-risk category for DM. Results: Of the 1,916 screened respondents (59.86% female and 40.14% male), 894 (46.65%) were at high risk for DM (IDRS ≥60), 764 (39.87%) at moderate risk (IDRS 30–50), and 258 (13.46%) at low risk (IDRS <30). Waist circumference contributed to 35.90% of the high-risk category, followed by age (19.67%) and physical activity (11.67%). Age and waist circumference also showed a strong correlation with the total IDRS score. Conclusion: The Chandigarh and Panchkula population showed a high tendency to develop DM based on the IDRS. Modifiable risk factors such as waist circumference and physical activity were the major contributors; among non-modifiable factors, age was another major contributor. Based on these outcomes, lifestyle modifications such as yoga and exercise can be proposed for this population as a preventive approach to reduce the risk of developing DM and associated cerebrovascular complications.
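The IDRS banding used for this stratification can be written as a small classifier (function name is illustrative; thresholds are those stated above, and since IDRS components move in steps of 10, no score falls between 50 and 60):

```python
def idrs_category(score: int) -> str:
    """Map an Indian Diabetes Risk Score to its risk band:
    <30 low, 30-50 moderate, >=60 high."""
    if score >= 60:
        return "high"
    if score >= 30:
        return "moderate"
    return "low"
```

For example, a participant scoring 60 or above would join the 894 respondents (46.65%) flagged as high risk in this study.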


Blood ◽  
2014 ◽  
Vol 124 (21) ◽  
pp. 4545-4545
Author(s):  
Massimo Breccia ◽  
Matteo Molica ◽  
Irene Zacheo ◽  
Giuliana Alimena

Abstract Nilotinib is currently approved for the treatment of chronic myeloid leukemia (CML) in chronic phase (CP) and accelerated phase (AP) after failure of imatinib and in newly diagnosed patients. Atherosclerotic events have been retrospectively reported in patients with baseline cardiovascular risk factors during nilotinib treatment. We estimated the risk of developing atherosclerotic events in patients treated with second- or first-line nilotinib, with a median follow-up of 48 months, by retrospectively applying the SCORE chart proposed by the European Society of Cardiology (ESC) and evaluating risk factors at baseline (diabetes, obesity, smoking and hypertension). Overall, we enrolled 82 CP patients treated frontline (42 patients, at a dose of 300 mg BID) or after failure of other tyrosine kinase inhibitors (40 patients, treated with 400 mg BID). The SCORE chart stratifies by sex (male vs female), age (40 to 65 years), smoking status, systolic pressure (120 to 180 mm Hg) and total cholesterol (150 to 300 mg/dL, approximately 4 to 8 mmol/L). For statistical purposes we subdivided patients into low, moderate, high and very high risk. There were 48 males and 34 females, with a median age of 51 years (range 22-84). According to the WHO classification, 42 patients were of normal weight (BMI < 25), 26 were overweight (BMI 25–<30) and 14 were obese (BMI > 30). Retrospective classification according to the SCORE chart revealed that 27 patients (33%) were in the low-risk category, 30 (36%) in the moderate-risk category and 24 (29%) in the high-risk category. As regards risk factors, 17 patients (20.7%) had concomitant controlled type II diabetes (without organ damage), 23 (28%) were smokers, 29 (35%) were receiving concomitant drugs for hypertension, and 15 (18%) had concomitant dyslipidaemia.
Overall, the cumulative incidence of atherosclerotic events at 48 months was 8.5% (95% CI: 4.55-14.07): none of the low-risk patients according to the SCORE chart experienced atherosclerotic events, compared with 10% in the moderate-risk and 29% in the high-risk category (p=0.002). Atherosclerosis-free survival was 100%, 89% and 69% in the low-, moderate- and high-risk populations, respectively (p=0.001). SCORE chart evaluation at disease baseline could be a valid tool to identify patients at high risk of atherosclerotic events during nilotinib treatment. Disclosures Breccia: Novartis: Consultancy; BMS: Consultancy; Celgene: Consultancy.
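The abstract does not restate the cut-points separating the four SCORE categories; a sketch of the grouping, assuming the conventional ESC thresholds for SCORE-estimated 10-year cardiovascular mortality (an assumption, not taken from this study), could be:

```python
def score_risk_category(ten_year_cv_death_risk_pct: float) -> str:
    """Bucket a SCORE-estimated 10-year cardiovascular mortality risk (in
    percent) into the four categories used for the analysis above.
    Thresholds assumed from ESC guidance: <1% low, 1-<5% moderate,
    5-<10% high, >=10% very high."""
    if ten_year_cv_death_risk_pct >= 10:
        return "very high"
    if ten_year_cv_death_risk_pct >= 5:
        return "high"
    if ten_year_cv_death_risk_pct >= 1:
        return "moderate"
    return "low"
```

Under this banding, the study's finding reads as a monotonic trend: event-free survival falls from 100% to 89% to 69% as the baseline category rises.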


10.2196/16069 ◽  
2020 ◽  
Vol 8 (4) ◽  
pp. e16069
Author(s):  
Kenneth B Chapman ◽  
Martijn M Pas ◽  
Diana Abrar ◽  
Wesley Day ◽  
Kris C Vissers ◽  
...  

Background Several pain management guidelines recommend regular urine drug testing (UDT) in patients who are being treated with chronic opioid analgesic therapy (COAT) to monitor compliance and improve safety. Guidelines also recommend more frequent testing in patients who are at high risk of adverse events related to COAT; however, there is no consensus on how to identify high-risk patients or on the testing frequency that should be used. Using previously described clinical risk factors for UDT results that are inconsistent with the prescribed COAT, we developed a web-based tool to adjust drug testing frequency in patients treated with COAT. Objective The objective of this study was to evaluate a risk stratification tool, the UDT Randomizer, to adjust UDT frequency in patients treated with COAT. Methods Patients were stratified using an algorithm based on readily available clinical risk factors into categories of presumed low, moderate, high, and high+ risk of presenting with UDT results inconsistent with the prescribed COAT. The algorithm was integrated in a website to facilitate adoption across practice sites. To test the performance of this algorithm, we performed a retrospective analysis of patients treated with COAT between June 2016 and June 2017. The primary outcome was compliance with the prescribed COAT as defined by UDT results consistent with the prescribed COAT. Results In total, 979 drug tests (867 UDT, 88.6%; 112 oral fluid tests, 11.4%) were performed in 320 patients. An inconsistent drug test result was registered in 76/979 tests (7.8%). The incidence of inconsistent test results across the risk tool categories was 7/160 (4.4%) in the low-risk category, 32/349 (9.2%) in the moderate-risk category, 28/338 (8.3%) in the high-risk category, and 9/132 (6.8%) in the high+ risk category.
Generalized estimating equation analysis demonstrated that the moderate-risk (odds ratio (OR) 2.1, 95% CI 0.9-5.0; P=.10), high-risk (OR 2.0, 95% CI 0.8-5.0; P=.14), and high+ risk (OR 2.0, 95% CI 0.7-5.6; P=.20) categories were associated with a nonsignificantly increased risk of inconsistency versus the low-risk category. Conclusions The developed tool stratified patients at individual visits into risk categories for presenting with drug testing results inconsistent with the prescribed COAT; the higher risk categories showed nonsignificantly higher risk compared with the low-risk category. Further development of the tool with additional risk factors in a larger cohort may further clarify and enhance its performance.
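For context, a crude (unadjusted) odds ratio can be recomputed directly from the counts reported above; it will differ slightly from the GEE estimates, which account for repeated tests within the same patient:

```python
def crude_odds_ratio(events_a: int, n_a: int, events_b: int, n_b: int) -> float:
    """Crude odds ratio of an event in group A versus group B,
    computed from event counts and group sizes."""
    odds_a = events_a / (n_a - events_a)
    odds_b = events_b / (n_b - events_b)
    return odds_a / odds_b

# Inconsistent tests: moderate-risk 32/349 vs low-risk 7/160
or_moderate_vs_low = crude_odds_ratio(32, 349, 7, 160)  # ~2.2, near the reported GEE OR of 2.1
```

The closeness of the crude and GEE estimates here suggests the within-patient correlation had only a modest effect on the point estimate, though not on its wide confidence interval.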




Oncogene ◽  
2006 ◽  
Vol 25 (31) ◽  
pp. 4300-4309 ◽  
Author(s):  
H-H Yeh ◽  
W-W Lai ◽  
H H W Chen ◽  
H-S Liu ◽  
W-C Su

CHEST Journal ◽  
2018 ◽  
Vol 154 (4) ◽  
pp. 623A
Author(s):  
MARIANA CAMPELLO DE OLIVEIRA ◽  
FERNANDO ABRAO ◽  
GEISA VIANA ◽  
IGOR RENATO LOURO BRUNO DE ABREU ◽  
ANTONIO FLÁVIO BINA BIAZZOTTO

2017 ◽  
Vol 12 (11) ◽  
pp. S2161-S2162
Author(s):  
F. Abrão ◽  
M. Oliveira ◽  
G. Viana ◽  
I. Abreu ◽  
R. Younes ◽  
...  

BMJ Open ◽  
2019 ◽  
Vol 9 (8) ◽  
pp. e030922 ◽  
Author(s):  
Narani Sivayoham ◽  
Lesley A Blake ◽  
Shafi E Tharimoopantavida ◽  
Saad Chughtai ◽  
Adil N Hussain ◽  
...  

Objective To derive and validate a new clinical prediction rule to risk-stratify emergency department (ED) patients admitted with suspected sepsis. Design Retrospective prognostic study of prospectively collected data. Setting ED. Participants Patients aged ≥18 years who met two Systemic Inflammatory Response Syndrome criteria or one Red Flag sepsis criterion on arrival, received intravenous antibiotics for a suspected infection and were admitted. Primary outcome measure In-hospital all-cause mortality. Method The data were divided into derivation and validation cohorts. The simplified Mortality in Severe Sepsis in the ED score, the quick-SOFA score, refractory hypotension and lactate were collectively termed 'component scores' and cumulatively termed the 'Risk-stratification of ED suspected Sepsis (REDS) score'. Each patient in the derivation cohort received a score (0–3) for each component score. The REDS score ranged from 0 to 12. The component scores were subjected to univariate and multivariate logistic regression analyses. The receiver operating characteristic (ROC) curves for the REDS score and the component scores were constructed and their cut-off points identified. Scores above the cut-off points were deemed high risk. The area under the ROC (AUROC) curves and the sensitivity for mortality of the high-risk category of the REDS score and the component scores were compared. The REDS score was internally validated. Results 2115 patients were studied, of whom 282 (13.3%) died in hospital. Derivation cohort: 1078 patients with 140 deaths (13%). The AUROC curve (95% CI), cut-off point and sensitivity for mortality (95% CI) of the high-risk category of the REDS score were: derivation, 0.78 (0.75 to 0.80), ≥3, 85.0 (78 to 90.5); validation, 0.74 (0.71 to 0.76), ≥3, 84.5 (77.5 to 90.0). The AUROC curve and the sensitivity for mortality of the REDS score were better than those of the component scores.
Specificity and mortality rates for REDS scores of ≥3, ≥5 and ≥7 were 54.8%, 88.8% and 96.9% and 21.8%, 36.0% and 49.1%, respectively. Conclusion The REDS score is a simple and objective score to risk-stratify ED patients with suspected sepsis.
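The additive structure described above (four component scores of 0–3 summed to a 0–12 total, with ≥3 deemed high risk in both cohorts) can be sketched as follows; how each component maps to its 0–3 points is not given in the abstract, so only the aggregation step is shown and the parameter names are illustrative:

```python
def reds_total(sms_pts: int, qsofa_pts: int,
               refractory_hypotension_pts: int, lactate_pts: int) -> int:
    """Sum the four REDS component scores (each 0-3) into a 0-12 total."""
    components = (sms_pts, qsofa_pts, refractory_hypotension_pts, lactate_pts)
    if not all(0 <= c <= 3 for c in components):
        raise ValueError("each component score must be in the range 0-3")
    return sum(components)

def reds_high_risk(total: int) -> bool:
    """Cut-off point identified in both the derivation and validation cohorts."""
    return total >= 3
```

At the ≥3 cut-off, the study reports sensitivity of roughly 85% for in-hospital mortality at a specificity of 54.8%; raising the threshold to ≥5 or ≥7 trades sensitivity for specificity.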


Author(s):  
Nazia N. Shaik ◽  
Swapna M. Jaswanth ◽  
Shashikala Manjunatha

Background: Diabetes is one of the largest global health emergencies of the 21st century. As per the International Diabetes Federation, some 425 million people worldwide are estimated to have diabetes, with a higher prevalence in urban than in rural areas (10.2% vs 6.9%). India had 72.9 million people living with diabetes as per 2017 data, of whom 57.9% remained undiagnosed. The objective of the present study was to identify subjects at risk of developing diabetes using the Indian Diabetes Risk Score (IDRS) in the urban field practice area of Rajarajeswari Medical College and Hospital (RRMCH). Methods: A cross-sectional study was conducted using the standard IDRS questionnaire on 150 individuals aged ≥20 years residing in the urban field practice area of RRMCH. Subjects with scores <30, 30–50 and ≥60 were categorized as at low, moderate and high risk of developing type 2 diabetes, respectively. Results: Of the 150 participants, 36 (24%) were in the high-risk category (IDRS ≥60), the majority, 61 (41%), were in the moderate-risk category (IDRS 30–50), and 53 (35%) were at low risk (IDRS <30). A statistically significant association was found between IDRS and gender, literacy status and body mass index (p<0.00001). Conclusions: It is essential to implement the IDRS, a simple tool for identifying subjects at risk of developing diabetes, so that proper intervention can be carried out at the earliest opportunity to reduce the burden of diabetes.

