Classification models based on the level of metals in hair and nails of laryngeal cancer patients: diagnosis support or rather speculation?

Metallomics ◽  
2015 ◽  
Vol 7 (3) ◽  
pp. 455-465 ◽  
Author(s):  
Magdalena Golasik ◽  
Wojciech Jawień ◽  
Agnieszka Przybyłowicz ◽  
Witold Szyfter ◽  
Małgorzata Herman ◽  
...  

Several larynx cancer prediction models were built, and each was weighted according to its performance.
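The full abstract is not shown here, but the sentence above describes combining several prediction models weighted by their performance. A minimal, hypothetical sketch of one such scheme, a soft-voting ensemble whose weights are the members' cross-validated accuracies, using scikit-learn (the models and data are illustrative, not those of the study):

```python
# Minimal sketch: weight several classifiers by cross-validated accuracy
# and combine them in a soft-voting ensemble (illustrative only, not the
# authors' exact procedure).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=20, random_state=0)

models = [
    ("logreg", LogisticRegression(max_iter=1000)),
    ("rf", RandomForestClassifier(random_state=0)),
    ("svm", SVC(probability=True, random_state=0)),
]

# Weight each model by its mean cross-validated accuracy.
weights = [cross_val_score(m, X, y, cv=5).mean() for _, m in models]

ensemble = VotingClassifier(estimators=models, voting="soft", weights=weights)
ensemble.fit(X, y)
print(dict(zip([name for name, _ in models], np.round(weights, 3))))
```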

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Zhao Ding ◽  
Deshun Yu ◽  
Hefeng Li ◽  
Yueming Ding

Abstract: Marital status has long been recognized as an important prognostic factor for many cancers; however, its prognostic effect in patients with laryngeal cancer has not been fully examined. We retrospectively analyzed 8834 laryngeal cancer patients in the Surveillance, Epidemiology, and End Results database from 2004 to 2010. Patients were divided into four groups: married, widowed, single, and divorced/separated. Differences in overall survival (OS) and cancer-specific survival (CSS) among the marital subgroups were calculated using Kaplan–Meier curves. Multivariate Cox regression analysis screened for independent prognostic factors, and propensity score matching (PSM) was conducted to minimize selection bias. We included 8834 eligible patients (4817 married, 894 widowed, 1732 single and 1391 divorced/separated) with laryngeal cancer. The 5-year OS and CSS of married, widowed, single, and separated/divorced patients were examined. Univariate and multivariate analyses found marital status to be an independent predictor of survival. Subgroup survival analysis showed that the OS and CSS rates of widowed patients were consistently the lowest across American Joint Committee on Cancer stages, irrespective of sex. Widowed patients also demonstrated worse OS and CSS in the 1:1 matched group analysis. Among patients with laryngeal cancer, widowed patients represented the highest-risk group, with the lowest OS and CSS.
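A minimal sketch of the survival workflow described in this abstract (Kaplan–Meier curves per marital subgroup, a log-rank comparison, and a multivariate Cox model), using the lifelines library on synthetic stand-in data; the column names are illustrative assumptions, not the actual SEER fields:

```python
# Sketch of the survival comparison described above using lifelines.
# The data frame is synthetic; column names are illustrative placeholders.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter
from lifelines.statistics import multivariate_logrank_test

rng = np.random.default_rng(0)
n = 800
df = pd.DataFrame({
    "months": rng.exponential(60, n).round(1),
    "death": rng.integers(0, 2, n),
    "marital_status": rng.choice(["married", "widowed", "single", "divorced"], n),
    "age": rng.integers(40, 90, n),
})

# Kaplan-Meier overall-survival curves for each marital subgroup.
kmf = KaplanMeierFitter()
for group, sub in df.groupby("marital_status"):
    kmf.fit(sub["months"], event_observed=sub["death"], label=group)
    kmf.plot_survival_function()

# Log-rank test across the four subgroups.
print(multivariate_logrank_test(df["months"], df["marital_status"], df["death"]).summary)

# Multivariate Cox model: is marital status an independent predictor?
cox_df = pd.get_dummies(df, columns=["marital_status"], drop_first=True)
cph = CoxPHFitter()
cph.fit(cox_df, duration_col="months", event_col="death")
cph.print_summary()
```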


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Sanna Iivanainen ◽  
Jussi Ekstrom ◽  
Henri Virtanen ◽  
Vesa V. Kataja ◽  
Jussi P. Koivunen

Abstract
Background: Immune-checkpoint inhibitors (ICIs) have introduced novel immune-related adverse events (irAEs), arising from various organ systems without a strong temporal dependency on therapy dosing. Early detection of irAEs could improve the toxicity profile and quality of life. Symptom data collected by electronic patient-reported outcomes (ePRO) could be used as input for machine learning (ML) based prediction models for the early detection of irAEs.
Methods: The dataset consisted of two sources. The first comprised 820 completed symptom questionnaires from 34 ICI-treated advanced cancer patients, covering 18 monitored symptoms collected with the Kaiku Health digital platform. The second included prospectively collected irAE data: the Common Terminology Criteria for Adverse Events (CTCAE) class and the severity of 26 irAEs. The ML models were built using extreme gradient boosting algorithms; the first model was trained to detect the presence of irAEs and the second their onset.
Results: The model trained to predict the presence of irAEs performed excellently on four metrics: accuracy 0.97, area under the curve (AUC) 0.99, F1-score 0.94 and Matthews correlation coefficient (MCC) 0.92. Predicting irAE onset was more difficult (accuracy 0.96, AUC 0.93, F1-score 0.66, MCC 0.64), but model performance remained good.
Conclusion: The study suggests that ML-based prediction models using ePRO data as input can predict the presence and onset of irAEs with high accuracy, indicating that ePRO follow-up combined with ML algorithms could facilitate the detection of irAEs in ICI-treated cancer patients. The results should be validated with a larger dataset.
Trial registration: Clinical Trials Register (NCT3928938), registered 26 April 2019.
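A minimal sketch of the first model described above, a gradient-boosted classifier for irAE presence evaluated with the four reported metrics, using xgboost and scikit-learn; the synthetic symptom features stand in for the Kaiku Health ePRO data and are assumptions, not the study dataset:

```python
# Sketch of the irAE-presence classifier: extreme gradient boosting on
# ePRO symptom scores, evaluated with the four metrics from the abstract.
# The data here are synthetic stand-ins, not the Kaiku Health dataset.
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.metrics import (accuracy_score, f1_score, matthews_corrcoef,
                             roc_auc_score)
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Synthetic stand-in for 18 monitored symptom scores per questionnaire.
X, y = make_classification(n_samples=820, n_features=18, weights=[0.7],
                           random_state=0)
X = pd.DataFrame(X, columns=[f"symptom_{i + 1}" for i in range(18)])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

model = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.1,
                      eval_metric="logloss")
model.fit(X_tr, y_tr)

pred = model.predict(X_te)
proba = model.predict_proba(X_te)[:, 1]
print("accuracy:", round(accuracy_score(y_te, pred), 3))
print("AUC:     ", round(roc_auc_score(y_te, proba), 3))
print("F1:      ", round(f1_score(y_te, pred), 3))
print("MCC:     ", round(matthews_corrcoef(y_te, pred), 3))
```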


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Sanghee Lee ◽  
Yoon Jung Chang ◽  
Hyunsoon Cho

Abstract
Background: Cancer patients' prognoses are complicated by comorbidities, and prognostic prediction models with inappropriate comorbidity adjustments yield biased survival estimates. However, an appropriate claims-based comorbidity risk assessment method remains unclear. This study aimed to compare methods of capturing comorbidities from claims data and predicting non-cancer mortality risks among cancer patients.
Methods: Data were obtained from the National Health Insurance Service-National Sample Cohort database in Korea; 2979 cancer patients diagnosed in 2006 were included. The claims-based Charlson Comorbidity Index (CCI) was evaluated under various assessment methods: different washout windows, lookback periods, and claim types. The prevalence of comorbidities and the associated non-cancer mortality risks were compared. Cox proportional hazards models accounting for left truncation were used to estimate the non-cancer mortality risks.
Results: The prevalence of peptic ulcer, the most common comorbidity, ranged from 1.5 to 31.0%, and the proportion of patients with at least one comorbidity ranged from 4.5 to 58.4%, depending on the assessment method. Outpatient claims captured 96.9% of patients with chronic obstructive pulmonary disease but only 65.2% of patients with myocardial infarction. The assessment methods also affected non-cancer mortality risks; for example, the hazard ratio for patients with moderate comorbidity (CCI 3–4) varied from 1.0 (95% CI: 0.6–1.6) to 5.0 (95% CI: 2.7–9.3). Inpatient claims yielded relatively higher estimates reflective of disease severity.
Conclusions: The prevalence of comorbidities and the associated non-cancer mortality risks varied considerably by assessment method. Researchers should understand the complexity of claims-based comorbidity assessment and select an optimal approach.
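A minimal sketch of two elements of this method: flagging comorbidities from claims within a configurable lookback window and claim-type filter, and fitting a left-truncated Cox model for non-cancer mortality via lifelines' entry_col. All field names, weights, and data are illustrative assumptions, not the NHIS-NSC variables:

```python
# Sketch: (1) sum toy comorbidity weights for claims inside a lookback
# window, (2) fit a left-truncated Cox model for non-cancer mortality,
# and (3) compare how the lookback window changes the estimate.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 500
cohort = pd.DataFrame({
    "patient_id": range(n),
    "entry": rng.uniform(0, 12, n),                 # delayed entry (months) -> left truncation
    "duration": rng.uniform(12, 120, n),
    "noncancer_death": rng.integers(0, 2, n),
})
claims = pd.DataFrame({
    "patient_id": rng.integers(0, n, 3000),
    "days_before_dx": rng.integers(0, 1095, 3000),  # days before cancer diagnosis
    "claim_type": rng.choice(["inpatient", "outpatient"], 3000),
    "condition_weight": rng.choice([1, 1, 2, 3], 3000),  # toy Charlson-style weights
})

def cci(claims, lookback_days=365, claim_types=("inpatient", "outpatient")):
    """Sum comorbidity weights for claims inside the lookback window."""
    keep = (claims["days_before_dx"] <= lookback_days) & claims["claim_type"].isin(claim_types)
    return claims.loc[keep].groupby("patient_id")["condition_weight"].sum().rename("cci")

# Compare how the lookback window changes the captured comorbidity burden.
for lb in (365, 730, 1095):
    df = cohort.merge(cci(claims, lookback_days=lb), on="patient_id", how="left").fillna({"cci": 0})
    cph = CoxPHFitter().fit(df[["duration", "noncancer_death", "cci", "entry"]],
                            duration_col="duration", event_col="noncancer_death",
                            entry_col="entry")
    print(lb, "days lookback: HR per CCI point =",
          round(float(np.exp(cph.params_["cci"])), 3))
```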


BMC Cancer ◽  
2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Zhihao Lv ◽  
Yuqi Liang ◽  
Huaxi Liu ◽  
Delong Mo

Abstract
Background: It remains controversial whether patients with stage II colon cancer benefit from chemotherapy after radical surgery. This study aimed to assess the real-world effectiveness of chemotherapy in patients with stage II colon cancer undergoing radical surgery and to construct survival prediction models that predict the survival benefit of chemotherapy.
Methods: Data for stage II colon cancer patients treated with radical surgery were retrieved from the Surveillance, Epidemiology, and End Results (SEER) database. Propensity score matching (1:1) was performed according to whether patients received chemotherapy. Competing-risk regression models were used to assess colon cancer cause-specific death (CSD) and non-colon cancer cause-specific death (NCSD). Survival prediction nomograms were constructed to predict overall survival (OS) and colon cancer cause-specific survival (CSS). The predictive ability of the constructed models was evaluated with concordance indexes (C-indexes) and calibration curves.
Results: A total of 25,110 patients were identified; 21.7% received chemotherapy and 78.3% did not. After propensity score matching, 10,916 patients were retained. The estimated 3-year overall survival rate with chemotherapy was 0.7% higher than without chemotherapy, whereas the estimated 5-year and 10-year overall survival rates without chemotherapy were 1.3 and 2.1% higher than with chemotherapy, respectively. The survival prediction models showed good discrimination (C-indexes between 0.582 and 0.757) and excellent calibration.
Conclusions: Chemotherapy improves short-term (43 months) survival in stage II colon cancer patients who underwent radical surgery. The survival prediction models can be used to predict OS and CSS for patients receiving or not receiving chemotherapy and to support individualized treatment recommendations for stage II colon cancer patients after radical surgery.
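A minimal sketch of the 1:1 propensity score matching step, using a logistic-regression propensity model and greedy nearest-neighbour matching without replacement; the covariates and data are illustrative assumptions, and the competing-risk models and nomograms from the abstract are not reproduced:

```python
# Sketch of 1:1 propensity score matching on receipt of chemotherapy:
# logistic-regression propensity model + greedy nearest-neighbour matching
# without replacement. Columns and data are illustrative, not SEER fields.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 2000
df = pd.DataFrame({
    "age": rng.integers(40, 90, n),
    "tumor_size_mm": rng.uniform(5, 80, n),
    "nodes_examined": rng.integers(1, 40, n),
})
# Treatment assignment loosely depends on age (younger patients treated more).
df["chemo"] = (rng.uniform(0, 1, n) < (90 - df["age"]) / 80).astype(int)

covariates = ["age", "tumor_size_mm", "nodes_examined"]
ps_model = LogisticRegression(max_iter=1000).fit(df[covariates], df["chemo"])
df["ps"] = ps_model.predict_proba(df[covariates])[:, 1]

treated = df[df["chemo"] == 1]
control = df[df["chemo"] == 0].copy()

pairs = []
for idx, row in treated.iterrows():
    if control.empty:
        break
    # Greedy nearest neighbour on the propensity score, without replacement.
    j = (control["ps"] - row["ps"]).abs().idxmin()
    pairs.append((idx, j))
    control = control.drop(index=j)

matched = df.loc[[i for pair in pairs for i in pair]]
print("matched pairs:", len(pairs))
print(matched.groupby("chemo")[covariates + ["ps"]].mean().round(2))
```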


2010 ◽  
Vol 2010 ◽  
pp. 1-4 ◽  
Author(s):  
Çiğdem Tepe Karaca ◽  
Erdoğan Gültekin ◽  
M. Kürşat Yelken ◽  
Ayşenur Akyıldız İğdem ◽  
Mehmet Külekçi

Objective: To determine the long-term histopathologic changes in the nasal mucosa and the relationship between the progression of those changes and the duration without air-current stimulation.
Material and Method: Biopsies were taken from the inferior turbinates of 11 laryngeal cancer patients after total laryngectomy. Specimens were stained with hematoxylin-eosin, and several histopathologic parameters were examined under light microscopy.
Results: All of the patients demonstrated at least one histopathologic abnormality (100%, n = 11). Goblet cell destruction and stromal fibrosis were the most common findings (81%, n = 9), followed by focal epithelial atrophy and subepithelial seromucinous gland destruction (45%, n = 5), neovascularization and congestion (36%, n = 4), and complete epithelial atrophy and myxoid degeneration (27%, n = 3). Patients were grouped into three according to the duration between laryngectomy and biopsy: group 1, less than 12 months (36%, n = 4); group 2, 12–36 months (18%, n = 2); and group 3, more than 36 months (45%, n = 5). Only congestion was found to decrease as the duration increased (P < .005).
Conclusion: In laryngeal cancer patients, histopathologic changes in the nasal mucosa eventuate due to the cessation of air-current stimulation; however, there was no relation between the progression of the histopathologic findings and the duration of cessation.


2015 ◽  
Vol 36 (6) ◽  
pp. 755-762 ◽  
Author(s):  
Nadim Khoueir ◽  
Nayla Matar ◽  
Chadi Farah ◽  
Evana Francis ◽  
Bassam Tabchy ◽  
...  

2015 ◽  
Vol 54 (06) ◽  
pp. 560-567 ◽  
Author(s):  
K. Zhu ◽  
Z. Lou ◽  
J. Zhou ◽  
N. Ballester ◽  
P. Parikh ◽  
...  

Summary
Introduction: This article is part of the Focus Theme of Methods of Information in Medicine on "Big Data and Analytics in Healthcare".
Background: Hospital readmissions raise healthcare costs and cause significant distress to providers and patients. It is, therefore, of great interest to healthcare organizations to predict which patients are at risk of being readmitted to their hospitals. However, current logistic regression based risk prediction models have limited predictive power when applied to hospital administrative data. Meanwhile, although decision trees and random forests have been applied, they tend to be too complex for hospital practitioners to interpret.
Objectives: Explore the use of conditional logistic regression to increase prediction accuracy.
Methods: We analyzed an HCUP statewide inpatient discharge record dataset, which includes patient demographics and clinical and care-utilization data from California. We extracted records of heart failure Medicare beneficiaries with an inpatient stay during an 11-month period and corrected the data imbalance issue with under-sampling. We first applied standard logistic regression and a decision tree to obtain influential variables and derive practically meaningful decision rules. We then stratified the original dataset accordingly and applied logistic regression to each data stratum, further exploring the effect of interacting variables in the logistic regression modeling. We conducted cross-validation to assess the overall prediction performance of conditional logistic regression (CLR) and compared it with standard classification models.
Results: The developed CLR models outperformed several standard classification models (e.g., straightforward logistic regression, stepwise logistic regression, random forest, support vector machine). For example, the best CLR model improved classification accuracy by nearly 20% over the straightforward logistic regression model. Furthermore, the developed CLR models tended to achieve better sensitivity, by more than 10%, than the standard classification models, which translates to correctly labeling an additional 400–500 readmissions of heart failure patients in the state of California over a year. Key predictors identified from the HCUP data include the disposition location at discharge, the number of chronic conditions, and the number of acute procedures.
Conclusions: It is beneficial to apply simple decision rules obtained from the decision tree in an ad hoc manner to guide cohort stratification, and potentially beneficial to explore pairwise interactions between influential predictors when building the logistic regression models for different data strata. Judicious use of the ad hoc CLR models developed here offers insights for future readmission prediction models, which can lead to better intuition in identifying high-risk patients and developing effective post-discharge care strategies. This paper is also expected to raise awareness of collecting data on additional markers and developing the database infrastructure needed for larger-scale exploratory studies of readmission risk prediction.
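A minimal sketch of the stratify-then-fit idea behind the CLR approach described above: a shallow decision tree supplies one interpretable splitting rule, and a logistic regression with pairwise interaction terms is then fit and cross-validated within each stratum. The variables and data are illustrative assumptions, not the actual HCUP fields:

```python
# Sketch of "conditional" logistic regression via stratification: a depth-1
# decision tree yields an interpretable split, then a logistic regression
# with pairwise interaction terms is fit inside each stratum.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)
n = 3000
df = pd.DataFrame({
    "n_chronic_conditions": rng.integers(0, 10, n),
    "n_acute_procedures": rng.integers(0, 6, n),
    "age": rng.integers(65, 95, n),
    "length_of_stay": rng.integers(1, 20, n),
})
logit = -3 + 0.4 * df["n_chronic_conditions"] + 0.3 * df["n_acute_procedures"]
df["readmit_30d"] = (rng.uniform(0, 1, n) < 1 / (1 + np.exp(-logit))).astype(int)

features = ["n_chronic_conditions", "n_acute_procedures", "age", "length_of_stay"]

# Step 1: a depth-1 tree yields one interpretable stratification rule.
tree = DecisionTreeClassifier(max_depth=1).fit(df[features], df["readmit_30d"])
split_var = features[tree.tree_.feature[0]]
threshold = tree.tree_.threshold[0]
print(f"stratify on: {split_var} <= {threshold:.1f}")

# Step 2: logistic regression with pairwise interaction terms per stratum.
for in_stratum, stratum in df.groupby(df[split_var] <= threshold):
    model = make_pipeline(
        PolynomialFeatures(degree=2, interaction_only=True, include_bias=False),
        LogisticRegression(max_iter=2000),
    )
    auc = cross_val_score(model, stratum[features], stratum["readmit_30d"],
                          cv=5, scoring="roc_auc").mean()
    print(f"  stratum ({split_var} <= {threshold:.1f}) == {in_stratum}: CV AUC = {auc:.3f}")
```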

