A new colorectal cancer risk prediction model incorporating family history, personal and environmental factors

2019 ◽  
Author(s):  
Yingye Zheng ◽  
Xinwei Hua ◽  
Aung K. Win ◽  
Robert J. MacInnis ◽  
Steven Gallinger ◽  
...  

Abstract Purpose Reducing colorectal cancer (CRC) incidence and mortality through early detection would be more efficient if screening were targeted by risk. A CRC risk-prediction model incorporating personal, family, genetic, and environmental risk factors could enhance prediction. Methods We developed risk-prediction models using population-based CRC cases (N=4,445) and controls (N=3,967) recruited by the Colon Cancer Family Registry Cohort (CCFRC). A familial risk profile (FRP) was calculated to summarize an individual's risk based on CRC family history, family structure, probability of carrying a germline mutation in major susceptibility genes, and a polygenic component. Using logistic regression, we developed risk models including either the FRP or a binary CRC family history (FH), together with risk factors collected at recruitment. Model validation used follow-up data for population-based (N=12,052) and clinic-based (N=5,584) relatives with no cancer history at recruitment, assessing calibration (E/O) and discrimination (AUC). Results The E/O (95% confidence interval [CI]) for the FRP models was 1.04 (0.74-1.45) for men and 0.86 (0.64-1.20) for women among population-based relatives, and 1.15 (0.87-1.58) and 1.04 (0.76-1.45) among clinic-based relatives. The age-adjusted AUC (95% CI) for the FRP models was 0.69 (0.60-0.78) and 0.70 (0.62-0.77) in population-based relatives, and 0.77 (0.69-0.84) and 0.68 (0.60-0.76) in clinic-based relatives. The incremental AUC (95% CI) of the FRP models over the FH models was 0.08 (0.01-0.15) and 0.10 (0.04-0.16) for population-based relatives, and 0.11 (0.05-0.17) and 0.11 (0.06-0.17) for clinic-based relatives. Conclusion Both the FRP-based and FH-based models calibrate well in both settings. The FRP-based model provided better risk prediction and discrimination than the FH-based model. A detailed family history may be useful for targeted risk-based screening and clinical management.
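The two validation metrics reported above, calibration as expected-over-observed (E/O) and discrimination as the AUC, can be sketched in a few lines. This is an illustrative stand-alone computation on toy data, not the study's code; the risk values and outcomes below are made up.

```python
# Sketch of the two standard validation metrics for a binary risk model:
# E/O calibration and Mann-Whitney AUC. Toy data, not study data.

def expected_over_observed(risks, outcomes):
    """E/O ratio: sum of predicted risks over the count of observed events."""
    return sum(risks) / sum(outcomes)

def auc(risks, outcomes):
    """Mann-Whitney AUC: probability that a case outranks a control (ties = 0.5)."""
    cases = [r for r, y in zip(risks, outcomes) if y == 1]
    controls = [r for r, y in zip(risks, outcomes) if y == 0]
    wins = sum(1.0 if c > k else 0.5 if c == k else 0.0
               for c in cases for k in controls)
    return wins / (len(cases) * len(controls))

# Hypothetical predicted risks and observed outcomes for six relatives.
risks = [0.05, 0.10, 0.20, 0.40, 0.70, 0.90]
outcomes = [0, 0, 0, 1, 1, 1]
print(round(expected_over_observed(risks, outcomes), 2))  # 0.78
print(auc(risks, outcomes))  # 1.0
```

An E/O near 1 means the model predicts about as many cases as occur; an AUC of 0.5 is chance-level discrimination and 1.0 is perfect separation.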

BMC Cancer ◽  
2022 ◽  
Vol 22 (1) ◽  
Author(s):  
Michele Sassano ◽  
Marco Mariani ◽  
Gianluigi Quaranta ◽  
Roberta Pastorino ◽  
Stefania Boccia

Abstract Background Risk prediction models incorporating single nucleotide polymorphisms (SNPs) could enable individualized prevention of colorectal cancer (CRC). However, the added value of incorporating SNPs into models with only traditional risk factors is still unclear. Hence, our primary aim was to summarize the literature on CRC risk prediction models that include genetic variants, and our secondary aim was to evaluate the improvement in discriminatory accuracy when SNPs are added to a prediction model with only traditional risk factors. Methods We conducted a systematic review of prediction models incorporating multiple SNPs for CRC risk prediction. We tested for a significant trend of increasing area under the curve (AUC) with the number of SNPs, and estimated the correlation between AUC improvement and number of SNPs. We estimated the pooled AUC improvement of SNP-enhanced over non-SNP-enhanced models using random-effects meta-analysis, and conducted meta-regression to investigate the association of specific factors with AUC improvement. Results We included 33 studies, 78.79% of which used genetic risk scores to combine genetic data. We found no significant trend in AUC improvement according to the number of SNPs (p for trend = 0.774) and no correlation between the number of SNPs and AUC improvement (p = 0.695). Pooled AUC improvement was 0.040 (95% CI: 0.035, 0.045); the number of cases in the study and the AUC of the starting model were inversely associated with the AUC improvement obtained when adding SNPs to a prediction model. In addition, models constructed in Asian individuals achieved a larger AUC improvement with the incorporation of SNPs than those developed in individuals of European ancestry. Conclusions Though not conclusive, our results provide insights into the factors influencing the discriminatory accuracy of SNP-enhanced models. Genetic variants might be useful to inform stratified CRC screening in the future, but further research is needed.
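The random-effects pooling step can be sketched with the common DerSimonian-Laird estimator: inverse-variance weights give a fixed-effect mean, Cochran's Q yields a between-study variance tau^2, and re-weighting gives the random-effects pooled estimate. This is a generic illustration with made-up per-study AUC gains and variances, not the review's data or code.

```python
# Sketch of DerSimonian-Laird random-effects pooling of per-study
# AUC improvements. Effects and variances below are hypothetical.

def pooled_effect(effects, variances):
    # Fixed-effect (inverse-variance) weights and pooled mean
    w = [1.0 / v for v in variances]
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    # Cochran's Q heterogeneity statistic
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    # Between-study variance (truncated at zero)
    tau2 = max(0.0, (q - df) / c)
    # Random-effects weights and pooled estimate
    w_re = [1.0 / (v + tau2) for v in variances]
    return sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)

delta_auc = [0.03, 0.05, 0.04]    # hypothetical per-study AUC gains
var = [0.0001, 0.0002, 0.00015]   # hypothetical variances
print(round(pooled_effect(delta_auc, var), 3))  # 0.038
```

When Q does not exceed its degrees of freedom, tau^2 truncates to zero and the random-effects estimate coincides with the fixed-effect one, as in this toy example.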


2020 ◽  
Vol 4 (5) ◽  
Author(s):  
Sibel Saya ◽  
Jon D Emery ◽  
James G Dowty ◽  
Jennifer G McIntosh ◽  
Ingrid M Winship ◽  
...  

Abstract Background In many countries, population colorectal cancer (CRC) screening is based on age and family history, though more precise risk prediction could better target screening. We examined the impact of a CRC risk prediction model (incorporating age, sex, lifestyle, genomic, and family history factors) to target screening under several feasible screening scenarios. Methods We estimated the model’s predicted CRC risk distribution in the Australian population. Predicted CRC risks were categorized into screening recommendations under 3 proposed scenarios to compare with current recommendations: 1) highly tailored, 2) 3 risk categories, and 3) 4 sex-specific risk categories. Under each scenario, for 35- to 74-year-olds, we calculated the number of CRC screens by immunochemical fecal occult blood testing (iFOBT) and colonoscopy and the proportion of predicted CRCs over 10 years in each screening group. Results Currently, 1.1% of 35- to 74-year-olds are recommended screening colonoscopy and 56.2% iFOBT, and 5.7% and 83.2% of CRCs over 10 years were predicted to occur in these groups, respectively. For the scenarios, 1) colonoscopy was recommended to 8.1% and iFOBT to 37.5%, with 36.1% and 50.1% of CRCs in each group; 2) colonoscopy was recommended to 2.4% and iFOBT to 56.0%, with 13.2% and 76.9% of cancers in each group; and 3) colonoscopy was recommended to 5.0% and iFOBT to 54.2%, with 24.5% and 66.5% of cancers in each group. Conclusions A highly tailored CRC screening scenario results in many fewer screens but more cancers in those unscreened. Category-based scenarios may provide a good balance between number of screens and cancers detected and are simpler to implement.
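A category-based scenario like scenario 2 amounts to mapping each person's predicted risk onto a screening recommendation via cut points. The sketch below is purely illustrative: the thresholds are hypothetical, not the cut points used in the study.

```python
# Hypothetical sketch of a "3 risk categories" scheme: predicted
# 10-year CRC risk -> screening recommendation. Cut points invented.

def screening_recommendation(risk_10yr):
    if risk_10yr >= 0.05:      # high risk -> colonoscopy
        return "colonoscopy"
    elif risk_10yr >= 0.01:    # average risk -> iFOBT
        return "iFOBT"
    return "no screening"      # low risk

print(screening_recommendation(0.07))   # colonoscopy
print(screening_recommendation(0.02))   # iFOBT
print(screening_recommendation(0.005))  # no screening
```

Moving the cut points trades off the number of colonoscopies against the share of future cancers arising in each screening group, which is exactly the comparison the three scenarios make.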


2018 ◽  
Vol 36 (7_suppl) ◽  
pp. 120-120
Author(s):  
Mia Hashibe ◽  
Brenna Blackburn ◽  
Jihye Park ◽  
Kerry G. Rowe ◽  
John Snyder ◽  
...  

Background: There are an estimated 760,000 endometrial cancer survivors alive in the US today. We previously reported increased heart disease (HD) risk among endometrial cancer survivors in our population-based cohort study. Although there are many models predicting the risk of developing endometrial cancer, to our knowledge there are none for endometrial cancer survivors. Methods: We identified 2,994 endometrial cancer patients in the Utah Population Database, which links data from multiple statewide sources. We estimated hazard ratios with the Cox proportional hazards model for predictors of five-, ten-, and fifteen-year risks. Harrell's C statistic was used to evaluate model performance. We used a randomly selected 70% of the data to develop the model and the remaining data to validate it. Results: A total of 1,591 patients were diagnosed with HD. Increased risks of HD among endometrial cancer patients were observed for older age, obesity at baseline, family history of HD, previous disease diagnoses (hypertension, diabetes, high cholesterol, COPD), distant stage, grade, histology, chemotherapy, and radiation therapy. The C-statistics for the risk prediction model were 0.69 for the hypothesized HD risk factors, 0.56 for clinical factors, and 0.71 when statistically significant risk factors were included. With the final model, as one example, the absolute risks of HD were 17.6% at 5 years, 24.0% at 10 years, and 32.0% at 15 years for a white woman in her fifties diagnosed with regional-stage, grade I endometrial cancer who was obese at cancer diagnosis, had a family history of HD but no previous HD herself, had hypertension but no history of diabetes, high cholesterol, or COPD, and received radiation therapy but no chemotherapy. The AUCs were 0.79 for the 5-year, 0.78 for the 10-year, and 0.78 for the 15-year predictions.
Conclusions: We developed the first risk prediction model for HD among endometrial cancer survivors within a population-based cohort study. Risk prediction models for cancer survivors are important in understanding long-term disease risks after cancer treatment is complete. Such models may contribute to management plans for treatment and individualized prevention efforts.
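The worked example above (absolute risks at 5, 10, and 15 years from a Cox model) follows the standard construction: risk(t) = 1 - S0(t)^exp(lp - mean_lp), where S0 is the baseline survival and lp the patient's linear predictor. The sketch below illustrates the arithmetic only; the baseline survival values, coefficients, and cohort mean are invented, not the study's.

```python
# Sketch of converting a Cox linear predictor into t-year absolute risk:
# risk(t) = 1 - S0(t)**exp(lp - mean_lp). All numbers are hypothetical.
import math

BASELINE_SURVIVAL = {5: 0.90, 10: 0.82, 15: 0.74}  # hypothetical S0(t)
COEFS = {"age_per_decade": 0.40, "obese": 0.35, "hypertension": 0.50}
MEAN_LP = 2.1  # hypothetical cohort-mean linear predictor

def absolute_risk(covariates, years):
    lp = sum(COEFS[name] * value for name, value in covariates.items())
    return 1.0 - BASELINE_SURVIVAL[years] ** math.exp(lp - MEAN_LP)

patient = {"age_per_decade": 5, "obese": 1, "hypertension": 1}
for t in (5, 10, 15):
    print(t, round(absolute_risk(patient, t), 3))
```

As in the abstract's example, the risk for a fixed covariate pattern grows with the horizon because the baseline survival S0(t) declines over time.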


Author(s):  
Julie R. Palmer ◽  
Gary Zirpoli ◽  
Kimberly A. Bertrand ◽  
Tracy Battaglia ◽  
Leslie Bernstein ◽  
...  

PURPOSE Breast cancer risk prediction models are used to identify high-risk women for early detection, targeted interventions, and enrollment into prevention trials. We sought to develop and evaluate a risk prediction model for breast cancer in US Black women, suitable for use in primary care settings. METHODS Breast cancer relative risks and attributable risks were estimated using data from Black women in three US population-based case-control studies (3,468 breast cancer cases; 3,578 controls age 30-69 years) and combined with SEER age- and race-specific incidence rates, with incorporation of competing mortality, to develop an absolute risk model. The model was validated in prospective data among 51,798 participants of the Black Women's Health Study, including 1,515 who developed invasive breast cancer. A second risk prediction model was developed on the basis of estrogen receptor (ER)–specific relative risks and attributable risks. Model performance was assessed by calibration (expected/observed cases) and discriminatory accuracy (C-statistic). RESULTS The expected/observed ratio was 1.01 (95% CI, 0.95 to 1.07). Age-adjusted C-statistics were 0.58 (95% CI, 0.56 to 0.59) overall and 0.63 (95% CI, 0.58 to 0.68) among women younger than 40 years. These measures were almost identical in the model based on estrogen receptor–specific relative risks and attributable risks. CONCLUSION Discriminatory accuracy of the new model was similar to that of the most frequently used questionnaire-based breast cancer risk prediction models in White women, suggesting that effective risk stratification for Black women is now possible. This model may be especially valuable for risk stratification of young Black women, who are below the ages at which breast cancer screening is typically begun.
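The construction described above (relative and attributable risks combined with age-specific incidence rates and competing mortality to yield absolute risk) is the Gail-style approach. A minimal sketch, assuming constant hazards over a short age interval, is below; the hazard values are invented for illustration and are not the SEER rates or the model's estimates.

```python
# Gail-style sketch: absolute risk over an interval from a relative risk,
# a baseline incidence hazard, and a competing mortality hazard.
# With constant competing hazards h1 (cancer) and h2 (death), the
# probability that cancer occurs first within `years` is
#   (h1 / (h1 + h2)) * (1 - exp(-(h1 + h2) * years)).
import math

H1 = 0.0018  # hypothetical baseline breast cancer hazard per year
H2 = 0.0030  # hypothetical competing mortality hazard per year

def absolute_risk(rr, years):
    h1 = H1 * rr                  # scale baseline hazard by relative risk
    total = h1 + H2
    return (h1 / total) * (1.0 - math.exp(-total * years))

print(round(absolute_risk(rr=1.5, years=5), 4))  # 0.0133
```

In practice the hazards vary by single year of age, so the model chains this calculation across age bands rather than using one constant rate.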


2019 ◽  
Vol 2019 ◽  
pp. 1-11 ◽  
Author(s):  
Xiaona Jia ◽  
Mirza Mansoor Baig ◽  
Farhaan Mirza ◽  
Hamid GholamHosseini

Background and Objective. Current cardiovascular disease (CVD) risk models are typically based on traditional laboratory-based predictors. The objective of this research was to identify key risk factors that affect CVD risk prediction and to develop a 10-year CVD risk prediction model using the identified risk factors. Methods. A Cox proportional hazards regression method was applied to generate the proposed risk model. We used the dataset from the Framingham Original Cohort of 5,079 men and women aged 30-62 years who had no overt symptoms of CVD at baseline; among the selected cohort, 3,189 had a CVD event. Results. A 10-year CVD risk model based on multiple risk factors (age, sex, body mass index (BMI), hypertension, systolic blood pressure (SBP), cigarettes per day, pulse rate, and diabetes) was developed, in which heart rate was identified as a novel risk factor. The proposed model achieved good discrimination and calibration, with a C-index (area under the receiver operating characteristic (ROC) curve) of 0.71 in the validation dataset. We validated the model via statistical and empirical validation. Conclusion. The proposed CVD risk prediction model is based on standard risk factors, which could help reduce the cost and time required for clinical/laboratory tests. Healthcare providers, clinicians, and patients can use this tool to estimate an individual's 10-year CVD risk. Heart rate was incorporated as a novel predictor, which extends the predictive ability of existing risk equations.
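The C-index reported above generalizes the AUC to censored survival data: among pairs where the earlier time is an observed event, it counts how often the model assigns the higher risk to the person who failed earlier. The sketch below is a generic Harrell's C computation on toy data, not the study's implementation.

```python
# Sketch of Harrell's C-index for right-censored survival data.
# A pair (i, j) is usable when times[i] < times[j] and subject i had
# an observed event; it is concordant when risks[i] > risks[j].

def harrell_c(times, events, risks):
    concordant, usable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i] == 1:
                usable += 1
                if risks[i] > risks[j]:
                    concordant += 1.0
                elif risks[i] == risks[j]:
                    concordant += 0.5   # ties count half
    return concordant / usable

times  = [2, 5, 7, 9]      # toy follow-up times (years)
events = [1, 1, 0, 1]      # 1 = CVD event observed, 0 = censored
risks  = [0.9, 0.6, 0.6, 0.2]  # toy predicted risks
print(harrell_c(times, events, risks))  # 0.9
```

Censored subjects (events[i] == 0) never anchor a usable pair as the earlier member, since their true event time is unknown; this is what distinguishes the C-index from the plain AUC.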


2018 ◽  
Author(s):  
Anabela Correia Martins ◽  
Juliana Moreira ◽  
Catarina Silva ◽  
Joana Silva ◽  
Cláudia Tonelo ◽  
...  

BACKGROUND Falls are a major health problem among older adults. The risk of falling can be increased by polypharmacy, vision impairment, high blood pressure, environmental home hazards, fear of falling, and age-related changes in the musculoskeletal and sensory systems. Moreover, individuals who have experienced previous falls are at higher risk. Nevertheless, falls can be prevented by screening for known risk factors. OBJECTIVE The objective of our study was to develop a multifactorial, instrumented screening tool for fall risk, based on the key risk factors for falls, among Portuguese community-dwelling adults aged 50 years or over, and to prospectively validate a risk prediction model for falling. METHODS This prospective study, following a convenience sampling method, will recruit community-dwelling adults aged 50 years or over who stand and walk independently, with or without walking aids, in parish councils, physical therapy clinics, seniors' universities, and other facilities in different regions of continental Portugal. The FallSensing screening tool is a technological solution for fall risk screening that includes software, a pressure platform, and 2 inertial sensors. The screening includes questions about demographic and anthropometric data and health and lifestyle behaviors; a detailed explanation of the procedures for 6 functional tests (grip strength, Timed Up and Go, 30 seconds sit to stand, step test, 4-Stage Balance test "modified," and 10-meter walking speed); 3 questionnaires concerning environmental home hazards; and an activity and participation profile related to mobility and self-efficacy for exercise. RESULTS Enrollment began in June 2016 and we anticipate study completion by the end of 2018. CONCLUSIONS The FallSensing screening tool is a multifactorial, evidence-based assessment that identifies factors contributing to fall risk. Establishing a risk prediction model will allow preventive strategies to be implemented, potentially decreasing the fall rate. REGISTERED REPORT IDENTIFIER RR1-10.2196/10304


Author(s):  
Masaru Samura ◽  
Naoki Hirose ◽  
Takenori Kurata ◽  
Keisuke Takada ◽  
Fumio Nagumo ◽  
...  

Abstract Background In this study, we investigated the risk factors for daptomycin-associated creatine phosphokinase (CPK) elevation and established a risk score for CPK elevation. Methods Patients who received daptomycin at our hospital were classified into the normal or elevated CPK group based on their peak CPK levels during daptomycin therapy. Univariable and multivariable analyses were performed, and a risk score and prediction model for the incidence probability of CPK elevation were calculated based on logistic regression analysis. Results The normal and elevated CPK groups included 181 and 17 patients, respectively. Logistic regression analysis revealed that concomitant statin use (odds ratio [OR] 4.45, 95% confidence interval [CI] 1.40–14.47, risk score 4), concomitant antihistamine use (OR 5.66, 95% CI 1.58–20.75, risk score 4), and trough concentration (Cmin) between 20 and <30 µg/mL (OR 14.48, 95% CI 2.90–87.13, risk score 5) or ≥30.0 µg/mL (OR 24.64, 95% CI 3.21–204.53, risk score 5) were risk factors for daptomycin-associated CPK elevation. The predicted incidence probabilities of CPK elevation were <10% (low risk), 10%–<25% (moderate risk), and ≥25% (high risk) for total risk scores of ≤4, 5–6, and ≥8, respectively. The risk prediction model exhibited a good fit (area under the receiver operating characteristic curve 0.85, 95% CI 0.74–0.95). Conclusions These results suggested that concomitant use of statins or antihistamines and Cmin ≥20 µg/mL were risk factors for daptomycin-associated CPK elevation. Our prediction model might aid in reducing the incidence of daptomycin-associated CPK elevation.
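The point-score system above can be applied mechanically: sum the points for each risk factor present, then map the total to a predicted risk band. The point values and band cut points below are taken from the abstract; the mapping function itself is an illustrative sketch, not the authors' code.

```python
# Sketch of applying the reported daptomycin CPK-elevation risk score:
# sum points for present risk factors, then map to the risk bands
# (<=4 low, 5-6 moderate, >=8 high) given in the abstract.

SCORES = {
    "statin": 4,          # concomitant statin use
    "antihistamine": 4,   # concomitant antihistamine use
    "cmin_20_to_30": 5,   # trough 20 to <30 ug/mL
    "cmin_over_30": 5,    # trough >= 30 ug/mL
}

def risk_band(factors):
    total = sum(SCORES[f] for f in factors)
    if total >= 8:
        return "high (>=25%)"
    elif total >= 5:
        return "moderate (10-<25%)"
    return "low (<10%)"

print(risk_band(["statin"]))                   # low (<10%)
print(risk_band(["cmin_20_to_30"]))            # moderate (10-<25%)
print(risk_band(["statin", "antihistamine"]))  # high (>=25%)
```

Such integer scores are typically derived by scaling the logistic regression coefficients, which is why factors with similar odds ratios receive the same points.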

