Validation of the NCI colorectal cancer risk assessment tool in the CSP 380 veterans cohort.

2017 ◽  
Vol 35 (15_suppl) ◽  
pp. e15135-e15135
Author(s):  
Laura W. Musselwhite ◽  
Thomas S. Redding ◽  
Kellie J. Sims ◽  
Meghan O'Leary ◽  
Elizabeth R. Hauser ◽  
...  

e15135 Background: Refining screening to colorectal cancer (CRC) risk may promote screening effectiveness. We applied the National Cancer Institute (NCI) CRC Risk Assessment Tool to estimate 5- and 10-year CRC risk in an average-risk Veterans cohort undergoing screening colonoscopy with follow-up. Methods: This was a prospective evaluation of predicted versus actual CRC risk using the NCI CRC Risk Assessment Tool in male Veterans undergoing screening colonoscopy, with a median follow-up of 10 years. Family, medical, dietary, and physical activity histories were collected at enrollment and used to calculate absolute 5- and 10-year CRC risk and to compare tertiles of expected to observed CRC risk. Sensitivity analyses were performed. Results: For 2,934 male Veterans with complete data (average age 62.4 years, 15% minorities), 1.3% (N=30) and 1.7% (N=50) were diagnosed with CRC within 5 and 10 years of survey completion, respectively. The area under the curve for predicting CRC was 0.69 (95% CI, 0.61-0.78) at 5 years and 0.67 (95% CI, 0.59-0.75) at 10 years. We calculated the sensitivity (0.60; 95% CI, 0.45-0.73), specificity (0.67; 95% CI, 0.65-0.69), positive predictive value (0.031; 95% CI, 0.02-0.04), and negative predictive value (0.99; 95% CI, 0.98-0.99). Conclusions: The NCI CRC Risk Assessment Tool was well calibrated at 5 years, overestimated CRC risk at 10 years, had modest discriminatory function, and had a high NPV in a cohort of ethnically diverse male Veterans. This tool reliably excludes 10-year CRC risk in low-scoring individuals and may inform patient-provider decision making when the benefit of screening is uncertain. [Table: see text]
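The sensitivity, specificity, PPV, and NPV reported above follow directly from a 2×2 confusion matrix. A minimal sketch, assuming hypothetical counts chosen only to be consistent with the reported metrics (the abstract does not give the underlying counts):

```python
# Hypothetical confusion-matrix counts (tp, fp, fn, tn) -- assumptions
# chosen to be consistent with the abstract's reported metrics and
# cohort size (N=2,934), not figures taken from the study.
def diagnostic_metrics(tp, fp, fn, tn):
    """Return sensitivity, specificity, PPV, and NPV as a dict."""
    return {
        "sensitivity": tp / (tp + fn),   # true-positive rate
        "specificity": tn / (tn + fp),   # true-negative rate
        "ppv": tp / (tp + fp),           # precision among flagged
        "npv": tn / (tn + fn),           # reassurance among unflagged
    }

m = diagnostic_metrics(tp=30, fp=940, fn=20, tn=1944)
print({k: round(v, 2) for k, v in m.items()})
```

With these assumed counts the function reproduces the reported values (sensitivity 0.60, specificity 0.67, PPV ≈0.03, NPV 0.99), illustrating why a low score carries a strong negative predictive value when disease prevalence is low.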

2019 ◽  
Vol 37 (4_suppl) ◽  
pp. 521-521
Author(s):  
Laura W. Musselwhite ◽  
Thomas S. Redding ◽  
Elizabeth R. Hauser ◽  
David A. Lieberman ◽  
Dawn T. Provenzale

521 Background: Tailoring screening strategy to colorectal cancer (CRC) risk may improve efficiency for all stakeholders. We applied the National Cancer Institute (NCI) CRC Risk Assessment Tool, which calculates 5-, 10-, and 20-year absolute risk of colorectal cancer, to determine whether it could be used to predict baseline risk of colorectal cancer precursors in a Veterans cohort undergoing first screening colonoscopy. Methods: This was a prospective evaluation of whether the NCI CRC Risk Assessment Tool, which offers an absolute risk over time, could be used to estimate baseline cancerous precursors (advanced neoplasia, AN) in Veterans undergoing first screening colonoscopy. Family, medical, dietary, and physical activity histories were collected at the time of screening colonoscopy and used to calculate absolute 5-, 10-, and 20-year CRC risk and to compare estimated CRC risk to observed AN. Sensitivity analyses were performed. Results: Of 3,121 Veterans undergoing screening colonoscopy, 94% had complete data available to calculate risk (N = 2,934, median age 63 years, 100% men, and 15% minorities). 11% (N = 313) were diagnosed with AN on baseline screening colonoscopy. The area under the curve for predicting AN was 0.60 (95% CI, 0.57-0.63; p < 0.0001) at 5 years, 0.60 (95% CI, 0.57-0.63; p < 0.0001) at 10 years, and 0.58 (95% CI, 0.54-0.61; p < 0.0001) at 20 years. At 5 years, we calculated the sensitivity (0.18; 95% CI, 0.14-0.22), specificity (0.91; 95% CI, 0.90-0.92), positive predictive value (0.19; 95% CI, 0.15-0.24), and negative predictive value (0.90; 95% CI, 0.89-0.91), considering the top 10th percentile of risk tool scores as a positive result. Conclusions: The NCI CRC Risk Assessment Tool had modest discriminatory function for predicting AN risk at 5, 10, and 20 years. The Tool's specificity and negative predictive value were quite good, highlighting its usefulness in risk prediction.
This tool may be used to inform the benefit-risk assessment of screening colonoscopy for patients with competing comorbidities.
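The analysis above dichotomizes a continuous risk score by treating the top 10th percentile as a positive result. A minimal sketch of that percentile-cutoff step, using simulated scores (the actual score distribution is not given in the abstract):

```python
import random

# Simulated risk scores for a cohort the size of the study's (N=2,934);
# only the percentile-cutoff logic mirrors the described analysis.
random.seed(0)
scores = [random.random() for _ in range(2934)]

# 90th-percentile cutoff: everyone at or above it is flagged "positive".
cutoff = sorted(scores)[int(0.9 * len(scores))]
flagged = [s >= cutoff for s in scores]
print(sum(flagged))  # roughly 10% of the cohort
```

Flagged patients would then be cross-tabulated against observed advanced neoplasia to produce the sensitivity, specificity, PPV, and NPV the abstract reports.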


Author(s):  
Thomas F Imperiale ◽  
Menggang Yu ◽  
Patrick O Monahan ◽  
Timothy E Stump ◽  
Rebeka Tabbey ◽  
...  

Background: There is no validated, discriminating, and easy-to-apply tool for estimating risk of colorectal neoplasia. We studied whether the National Cancer Institute’s (NCI’s) Colorectal Cancer (CRC) Risk Assessment Tool, which estimates future CRC risk, could estimate current risk for advanced colorectal neoplasia among average-risk persons. Methods: This cross-sectional study involved individuals age 50 to 80 years undergoing first-time screening colonoscopy. We measured medical and family history, lifestyle information, and physical measures and calculated each person’s future CRC risk using the NCI tool’s logistic regression equation. We related quintiles of future CRC risk to the current risk of advanced neoplasia (sessile serrated polyp or tubular adenoma ≥ 1 cm, a polyp with villous histology or high-grade dysplasia, or CRC). All statistical tests were two-sided. Results: For 4457 (98.5%) with complete data (mean age = 57.2 years, SD = 6.6 years, 51.7% women), advanced neoplasia prevalence was 8.26%. Based on quintiles of five-year estimated absolute CRC risk, current risks of advanced neoplasia were 2.1% (95% confidence interval [CI] = 1.3% to 3.3%), 4.8% (95% CI = 3.5% to 6.4%), 6.4% (95% CI = 4.9% to 8.2%), 10.0% (95% CI = 8.1% to 12.1%), and 17.6% (95% CI = 15.5% to 20.6%; P < .001). For quintiles of estimated 10-year CRC risk, corresponding current risks for advanced neoplasia were 2.2% (95% CI = 1.4% to 3.5%), 4.8% (95% CI = 3.5% to 6.4%), 6.5% (95% CI = 5.0% to 8.3%), 9.3% (95% CI = 7.5% to 11.4%), and 18.4% (95% CI = 15.9% to 21.1%; P < .001). Among persons with an estimated five-year CRC risk above the median, current risk for advanced neoplasia was 12.8%, compared with 3.7% among those below the median (relative risk = 3.4, 95% CI = 2.7 to 4.4).
Conclusions: The NCI’s Risk Assessment Tool, which estimates future CRC risk, may be used to estimate current risk for advanced neoplasia, making it potentially useful for tailoring and improving CRC screening efficiency among average-risk persons.
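The headline median-split comparison is simple arithmetic: the relative risk is the ratio of the two absolute risks. A quick check using the rounded percentages from the abstract:

```python
# Risk of advanced neoplasia above vs. below the median estimated
# five-year CRC risk, as reported in the abstract (rounded values).
risk_above, risk_below = 0.128, 0.037
relative_risk = risk_above / risk_below
print(round(relative_risk, 2))
```

The ratio of the rounded inputs is about 3.46; the abstract reports 3.4, presumably computed from unrounded data.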


Author(s):  
James B O'Keefe ◽  
Elizabeth J Tong ◽  
Thomas H Taylor ◽  
Ghazala D Datoo O'Keefe ◽  
David C Tong

Objective: To determine whether a risk prediction tool developed and implemented in March 2020 accurately predicts subsequent hospitalizations. Design: Retrospective cohort study; enrollment from March 24 to May 26, 2020, with follow-up calls until hospitalization or clinical improvement (final calls until June 19, 2020). Setting: Single-center telemedicine program managing outpatients from a large medical system in Atlanta, Georgia. Participants: 496 patients with laboratory-confirmed COVID-19 in isolation at home. Exclusion criteria included: (1) hospitalization prior to telemedicine program enrollment, (2) immediate discharge with no follow-up calls due to resolution. Exposure: Acute COVID-19 illness. Main Outcome and Measures: The outcome was hospitalization, measured as days to hospitalization. Survival analysis using Cox regression was used to determine factors associated with hospitalization. Results: The risk-assessment rubric assigned 496 outpatients to risk tiers as follows: Tier 1, 237 (47.8%); Tier 2, 185 (37.3%); Tier 3, 74 (14.9%). Subsequent hospitalizations numbered 3 (1%), 15 (7%), and 17 (23%) for Tiers 1-3, respectively. From a Cox regression model with age ≥ 60, gender, and self-reported obesity as covariates, the adjusted hazard ratios using Tier 1 as reference were: Tier 2 HR=3.74 (95% CI, 1.06-13.27; P=0.041); Tier 3 HR=10.87 (95% CI, 3.09-38.27; P<0.001). Tier was the strongest predictor of time to hospitalization. Conclusions and Relevance: A telemedicine risk assessment tool prospectively applied to an outpatient population with COVID-19 identified both low-risk and high-risk patients with better performance than individual risk factors alone. This approach may be appropriate for optimal allocation of resources.
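Before the Cox adjustment, the tier gradient is visible in the crude event proportions. A minimal sketch using the counts from the abstract (the adjusted hazard ratios additionally account for follow-up time and the listed covariates):

```python
# Crude hospitalization proportions per risk tier, from the counts in
# the abstract: (hospitalizations, tier size).
tiers = {"Tier 1": (3, 237), "Tier 2": (15, 185), "Tier 3": (17, 74)}
rates = {t: round(events / n, 2) for t, (events, n) in tiers.items()}
print(rates)  # Tier 3 is ~23% vs ~1% in Tier 1
```

These unadjusted proportions track the same ordering as the adjusted hazard ratios (Tier 2 HR 3.74, Tier 3 HR 10.87 vs. Tier 1).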


2020 ◽  
Author(s):  
Samaneh Asgari ◽  
Fatemeh Moosaie ◽  
Davood Khalili ◽  
Fereidoun Azizi ◽  
Farzad Hadaegh

Abstract Background: A high burden of chronic cardio-metabolic disease (CCD), including type 2 diabetes mellitus (T2DM), chronic kidney disease (CKD), and cardiovascular disease (CVD), has been reported in the Middle East and North Africa region. We aimed to externally validate a Europoid risk assessment tool designed by Alssema et al., including non-laboratory measures, for the prediction of CCD in the Iranian population. Methods: The predictors included age, body mass index, waist circumference, use of antihypertensive medication, current smoking, and family history of cardiovascular disease and/or diabetes. For external validation of the model in the Tehran Lipid and Glucose Study (TLGS), the area under the curve (AUC) and the Hosmer-Lemeshow (HL) goodness-of-fit test were performed for discrimination and calibration, respectively. Results: Among 1310 men and 1960 women aged 28-85 years, 29.5% and 47.4% experienced CCD during the 6- and 9-year follow-up, respectively. The model showed acceptable discrimination, with an AUC of 0.72 (95% CI: 0.69-0.75) for men and 0.73 (95% CI: 0.71-0.76) for women. The calibration of the model was good for both genders (min HL P=0.5). Considering separate outcomes, AUC was highest for CKD (0.76; 95% CI: 0.72-0.79) and lowest for T2DM (0.65; 95% CI: 0.61-0.69) in men. As for women, AUC was highest for CVD (0.82; 95% CI: 0.78-0.86) and lowest for T2DM (0.69; 95% CI: 0.66-0.73). The 9-year follow-up demonstrated almost similar performance compared to the 6-year follow-up. Conclusion: This model showed acceptable discrimination and good calibration for risk prediction of CCD in short- and long-term follow-up in the Iranian population.
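The Hosmer-Lemeshow test used above compares observed with expected events in groups ordered by predicted risk. A minimal sketch of the statistic itself; the group data are illustrative, not from the study:

```python
# Hosmer-Lemeshow statistic: sum over groups of
# (observed - expected)^2 / (expected * (1 - expected / group_size)).
def hosmer_lemeshow(groups):
    """groups: list of (observed_events, expected_events, group_size)."""
    return sum(
        (obs - exp) ** 2 / (exp * (1 - exp / n))
        for obs, exp, n in groups
    )

# Three illustrative risk groups (observed, expected, size):
stat = hosmer_lemeshow([(12, 10.0, 100), (25, 27.0, 100), (48, 50.0, 100)])
print(round(stat, 2))
```

A small statistic (here well below typical chi-square critical values) indicates that predicted and observed event counts agree across risk groups, which is what "good calibration" means in the abstract; in practice deciles of risk are used and a p-value is read from a chi-square distribution.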


2020 ◽  
Author(s):  
Samaneh Asgari ◽  
Fatemeh Moosaie ◽  
Davood Khalili ◽  
Fereidoun Azizi ◽  
Farzad Hadaegh

Abstract Background: A high burden of chronic cardio-metabolic disorders, including type 2 diabetes mellitus (T2DM), chronic kidney disease (CKD), and cardiovascular disease (CVD), has been reported in the Middle East and North Africa region. We aimed to externally validate a non-laboratory risk assessment tool for the prediction of chronic cardio-metabolic disorders in the Iranian population. Methods: The predictors included age, body mass index, waist circumference, use of antihypertensive medication, current smoking, and family history of cardiovascular disease and/or diabetes. For external validation of the model in the Tehran Lipid and Glucose Study (TLGS), the area under the curve (AUC) and the Hosmer-Lemeshow (HL) goodness-of-fit test were performed for discrimination and calibration, respectively. Results: Among 1310 men and 1960 women aged 28-85 years, 29.5% and 47.4% experienced chronic cardio-metabolic disorders during the 6- and 9-year follow-up, respectively. The model showed acceptable discrimination, with an AUC of 0.72 (95% CI: 0.69-0.75) for men and 0.73 (95% CI: 0.71-0.76) for women. The calibration of the model was good for both genders (min HL P=0.5). Considering separate outcomes, AUC was highest for CKD (0.76; 95% CI: 0.72-0.79) and lowest for T2DM (0.65; 95% CI: 0.61-0.69) in men. As for women, AUC was highest for CVD (0.82; 95% CI: 0.78-0.86) and lowest for T2DM (0.69; 95% CI: 0.66-0.73). The 9-year follow-up demonstrated almost similar performance compared to the 6-year follow-up. Using Cox regression in place of logistic multivariable analysis, the model's discrimination and calibration were reduced for prediction of chronic cardio-metabolic disorders; this reduction had the greatest effect on the prediction of incident CKD among women.
Moreover, adding data on educational level and marital status did not improve the discrimination and calibration of the enhanced model. Conclusion: This model showed acceptable discrimination and good calibration for risk prediction of chronic cardio-metabolic disorders in short- and long-term follow-up in the Iranian population.
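The AUC values reported in these validation studies have a simple rank-based interpretation: the probability that a randomly chosen case has a higher predicted risk than a randomly chosen non-case. A minimal sketch of that pairwise computation, with illustrative scores:

```python
# Rank-based AUC (equivalent to the Mann-Whitney U statistic):
# each case/non-case pair counts 1 if the case scores higher,
# 0.5 on a tie, 0 otherwise. Scores below are illustrative.
def auc(case_scores, control_scores):
    wins = sum(
        1.0 if c > k else 0.5 if c == k else 0.0
        for c in case_scores
        for k in control_scores
    )
    return wins / (len(case_scores) * len(control_scores))

a = auc([0.8, 0.6, 0.7], [0.4, 0.6, 0.3])
print(round(a, 2))
```

By this reading, an AUC of 0.72 means a randomly chosen person who later develops the outcome outranks a randomly chosen person who does not about 72% of the time; 0.5 is chance-level discrimination.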


2019 ◽  
Vol 2019 ◽  
pp. 1-12 ◽  
Author(s):  
Grace N. Joseph ◽  
Farid Heidarnejad ◽  
Eric A. Sherer

Introduction. Colorectal cancer (CRC), if not detected early, can be costly and detrimental to one’s health. Colonoscopy can identify CRC early as well as prevent the disease. The benefit of screening colonoscopy has been established, but the optimal frequency of follow-up colonoscopy is unknown and may vary based on findings from colonoscopy screening and patient age. Methods. A partially observed Markov process (POMP) was used to simulate the effects of follow-up colonoscopy on the development of CRC. The POMP uses adenoma and CRC growth models to calculate the probability of a patient having colorectal adenomas and CRC. Then, based on mortality, quality of life, and the costs associated with diagnosis, treatment, and surveillance of colorectal cancer, the overall costs and increase in quality-adjusted life years (QALYs) are calculated for follow-up colonoscopy scenarios. Results. At the $100,000/QALY gained threshold, a single follow-up colonoscopy is cost-effective, and only after screening at age 50 years. The optimal follow-up is 8.5 years, which gives 84.0 QALYs gained/10,000 persons. No follow-up colonoscopy was cost-effective at the $50,000 and $75,000/QALY gained thresholds. The intervals were insensitive to the findings at screening colonoscopy. Conclusion. Follow-up colonoscopy is cost-effective following screening at age 50 years but not if screening occurs later. Following screening at age 50 years, the optimal follow-up interval is close to the currently recommended 10 years for average-risk screening but does not vary by colonoscopy result.
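The cost-effectiveness thresholds above reduce to a single decision rule: a strategy is cost-effective when its incremental cost-effectiveness ratio (incremental cost per QALY gained) falls below the willingness-to-pay threshold. A minimal sketch with illustrative numbers (not figures from the study):

```python
# Decision rule: cost-effective iff ICER = delta_cost / delta_qaly
# is below the willingness-to-pay threshold ($/QALY).
def cost_effective(delta_cost, delta_qaly, threshold):
    return delta_cost / delta_qaly < threshold

# Illustrative: $9,000 extra cost for 0.1 QALY gained -> ICER $90,000/QALY.
print(cost_effective(9_000, 0.1, 100_000))  # True at $100,000/QALY
print(cost_effective(9_000, 0.1, 75_000))   # False at $75,000/QALY
```

This is why the same follow-up strategy can be cost-effective at the $100,000/QALY threshold yet fail at the $50,000 and $75,000 thresholds, as the abstract reports.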

