Heterogeneity in colorectal cancer incidence among people recommended 3-yearly surveillance post-polypectomy: a validation study

Endoscopy ◽  
2020 ◽  
Author(s):  
Emma C. Robbins ◽  
Kate Wooldrage ◽  
Iain Stenson ◽  
Kevin Pack ◽  
Stephen Duffy ◽  
...  

Abstract Background Colonoscopy surveillance is recommended for patients at increased risk of colorectal cancer (CRC) following adenoma removal. Low-, intermediate-, and high-risk groups are defined by baseline adenoma characteristics. We previously examined intermediate-risk patients from hospital data and identified a higher-risk subgroup who benefited from surveillance and a lower-risk subgroup who may not require surveillance. This study explored whether these findings apply in individuals undergoing CRC screening. Methods This retrospective study used data from the UK Flexible Sigmoidoscopy Screening Trial (UKFSST), English CRC screening pilot (ECP), and US Kaiser Permanente CRC prevention program (KPCP). Screening participants (50–74 years) classified as intermediate-risk at baseline colonoscopy were included. CRC data were available through 2006 (KPCP) or 2014 (UKFSST, ECP). Lower- and higher-risk subgroups were defined using our previously identified baseline risk factors: higher-risk participants had incomplete colonoscopies, poor bowel preparation, adenomas ≥20 mm or with high-grade dysplasia, or proximal polyps. We compared CRC incidence in these subgroups and in the presence vs. absence of surveillance using Cox regression. Results Of 2291 intermediate-risk participants, 45% were classified as higher risk. Median follow-up was 11.8 years. CRC incidence was higher in the higher-risk than lower-risk subgroup (hazard ratio [HR] 2.08, 95% confidence interval [CI] 1.07–4.06). Surveillance reduced CRC incidence in higher-risk participants (HR 0.35, 95% CI 0.14–0.86) but not statistically significantly so in lower-risk participants (HR 0.41, 95% CI 0.12–1.38). Conclusion As previously demonstrated for hospital patients, screening participants classified as intermediate risk comprised two risk subgroups. Surveillance clearly benefited the higher-risk subgroup.
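To illustrate the kind of analysis described above, here is a minimal sketch (not the authors' code) of Cox regression comparing CRC incidence between the risk subgroups and by surveillance exposure; the data file and all column names (time_years, crc, higher_risk, surveillance, age, sex) are hypothetical.

```python
# Minimal sketch (not the authors' code): Cox models for CRC incidence by risk
# subgroup and by surveillance. All file and column names are hypothetical;
# sex is assumed to be coded 0/1.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("intermediate_risk_participants.csv")
# expected (hypothetical) columns: time_years, crc (0/1), higher_risk (0/1),
# surveillance (0/1), age, sex

# Hazard ratio for the higher-risk vs lower-risk subgroup, adjusted for age and sex
cph = CoxPHFitter()
cph.fit(df[["time_years", "crc", "higher_risk", "age", "sex"]],
        duration_col="time_years", event_col="crc")
cph.print_summary()

# Effect of surveillance on CRC incidence within each subgroup
for label, grp in df.groupby("higher_risk"):
    m = CoxPHFitter()
    m.fit(grp[["time_years", "crc", "surveillance", "age", "sex"]],
          duration_col="time_years", event_col="crc")
    print("higher_risk =", label)
    print(m.hazard_ratios_[["surveillance"]])
```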

2017 ◽  
Vol 21 (25) ◽  
pp. 1-536 ◽  
Author(s):  
Wendy Atkin ◽  
Amy Brenner ◽  
Jessica Martin ◽  
Katherine Wooldrage ◽  
Urvi Shah ◽  
...  

Background The UK guideline recommends 3-yearly surveillance for patients with intermediate-risk (IR) adenomas. No study has examined whether or not this group has heterogeneity in surveillance needs. Objectives To examine the effect of surveillance on colorectal cancer (CRC) incidence; assess heterogeneity in risk; and identify the optimum frequency of surveillance, the psychological impact of surveillance, and the cost-effectiveness of alternative follow-up strategies. Design Retrospective multicentre cohort study. Setting Routine endoscopy and pathology data from 17 UK hospitals (n = 11,944), and a screening data set comprising three pooled cohorts (n = 2352), followed up using cancer registries. Subjects Patients with IR adenoma(s) (three or four small adenomas or one or two large adenomas). Primary outcomes Advanced adenoma (AA) and CRC detected at follow-up visits, and CRC incidence after baseline and first follow-up. Methods The effects of surveillance on long-term CRC incidence and of interval length on findings at follow-up were examined using proportional hazards and logistic regression, adjusting for patient, procedural and polyp characteristics. Lower-intermediate-risk (LIR) and higher-intermediate-risk (HIR) subgroups were defined, based on predictors of CRC risk. A model-based cost–utility analysis compared 13 surveillance strategies. Between-group analyses of variance were used to test for differences in bowel cancer worry between screening outcome groups (n = 35,700). A limitation of using routine hospital data is the potential for missed examinations and underestimation of the effect of interval and surveillance. Results In the hospital data set, 168 CRCs occurred during 81,442 person-years (pys) of follow-up [206 per 100,000 pys, 95% confidence interval (CI) 177 to 240]. One surveillance visit significantly lowered CRC incidence, both overall [hazard ratio (HR) 0.51, 95% CI 0.34 to 0.77] and in the HIR subgroup (n = 9265; HR 0.50, 95% CI 0.34 to 0.76). In the LIR subgroup (n = 2679) the benefit of surveillance was less clear (HR 0.62, 95% CI 0.16 to 2.43). Additional surveillance lowered CRC risk in the HIR subgroup by a further 15% (HR 0.36, 95% CI 0.20 to 0.62). The odds of detecting AA and CRC at first follow-up (FUV1) increased by 18% [odds ratio (OR) 1.18, 95% CI 1.12 to 1.24] and 32% (OR 1.32, 95% CI 1.20 to 1.46) per year increase in interval, respectively, and the odds of advanced neoplasia at second follow-up increased by 22% (OR 1.22, 95% CI 1.09 to 1.36), after adjustment. Detection rates of AA and CRC remained below 10% and 1%, respectively, with intervals up to 3 years. In the screening data set, 32 CRCs occurred during 25,745 pys of follow-up (124 per 100,000 pys, 95% CI 88 to 176). One follow-up conferred a significant 73% reduction in CRC incidence (HR 0.27, 95% CI 0.10 to 0.71). Owing to the small number of end points in this data set, no other outcome was significant. Although post-screening bowel cancer worry was higher in people who were offered surveillance, the worry was due to polyp detection rather than surveillance. The economic evaluation, using data from the hospital data set, suggested that 3-yearly colonoscopic surveillance without an age cut-off would produce the greatest health gain. Conclusions A single surveillance visit benefited all IR patients by lowering their CRC risk. We identified a higher-risk subgroup that benefited from further surveillance, and a lower-risk subgroup that may require only one follow-up.
A surveillance interval of 3 years seems suitable for most IR patients. These findings should be validated in other studies to confirm whether or not one surveillance visit provides adequate protection for the lower-risk subgroup of intermediate-risk patients. Study registration Current Controlled Trials ISRCTN15213649. Funding The National Institute for Health Research Health Technology Assessment programme.
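As an illustration of the interval analysis reported above (odds of advanced adenoma at first follow-up per one-year increase in interval), here is a minimal sketch using adjusted logistic regression; it is not the study's code, and the file and column names are hypothetical.

```python
# Minimal sketch (not the study's code): adjusted logistic regression for the
# odds of advanced adenoma at first follow-up per one-year increase in the
# surveillance interval. File and column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

fuv1 = pd.read_csv("first_follow_up_visits.csv")
# hypothetical columns: advanced_adenoma (0/1), interval_years, age, sex,
# n_baseline_adenomas, largest_adenoma_mm

model = smf.logit(
    "advanced_adenoma ~ interval_years + age + sex + n_baseline_adenomas + largest_adenoma_mm",
    data=fuv1,
).fit()

or_per_year = np.exp(model.params["interval_years"])    # odds ratio per year of interval
or_ci = np.exp(model.conf_int().loc["interval_years"])  # 95% confidence interval
print(or_per_year, or_ci.values)
```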


Gut ◽  
2020 ◽  
Vol 69 (9) ◽  
pp. 1645-1658 ◽  
Author(s):  
Amanda J Cross ◽  
Emma C Robbins ◽  
Kevin Pack ◽  
Iain Stenson ◽  
Paula L Kirby ◽  
...  

Objective Postpolypectomy colonoscopy surveillance aims to prevent colorectal cancer (CRC). The 2002 UK surveillance guidelines define low-risk, intermediate-risk and high-risk groups, recommending different strategies for each. Evidence supporting the guidelines is limited. We examined CRC incidence and effects of surveillance on incidence among each risk group. Design Retrospective study of 33 011 patients who underwent colonoscopy with adenoma removal at 17 UK hospitals, mostly (87%) from 2000 to 2010. Patients were followed up through 2016. Cox regression with time-varying covariates was used to estimate effects of surveillance on CRC incidence adjusted for patient, procedural and polyp characteristics. Standardised incidence ratios (SIRs) compared incidence with that in the general population. Results After exclusions, 28 972 patients were available for analysis; 14 401 (50%) were classed as low-risk, 11 852 (41%) as intermediate-risk and 2719 (9%) as high-risk. Median follow-up was 9.3 years. In the low-risk, intermediate-risk and high-risk groups, CRC incidence per 100 000 person-years was 140 (95% CI 122 to 162), 221 (195 to 251) and 366 (295 to 453), respectively. CRC incidence was 40%–50% lower with a single surveillance visit than with none: hazard ratios (HRs) were 0.56 (95% CI 0.39 to 0.80), 0.59 (0.43 to 0.81) and 0.49 (0.29 to 0.82) in the low-risk, intermediate-risk and high-risk groups, respectively. Compared with the general population, CRC incidence without surveillance was similar among low-risk (SIR 0.86, 95% CI 0.73 to 1.02) and intermediate-risk (1.16, 0.97 to 1.37) patients, but higher among high-risk patients (1.91, 1.39 to 2.56). Conclusion Postpolypectomy surveillance reduces CRC risk. However, even without surveillance, CRC risk in some low-risk and intermediate-risk patients is no higher than in the general population. These patients could be managed by screening rather than surveillance.
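The comparison with the general population uses standardised incidence ratios. A minimal sketch of an SIR with an exact Poisson confidence interval is shown below; it is not the authors' code, and the example numbers are made up.

```python
# Minimal sketch (not the authors' code): standardised incidence ratio (SIR)
# with an exact Poisson confidence interval, comparing observed CRC cases with
# the number expected from general-population rates. Numbers are illustrative.
from scipy.stats import chi2

def sir_with_ci(observed: int, expected: float, alpha: float = 0.05):
    """Return (SIR, lower, upper) using the exact Poisson CI for the observed count."""
    lower_count = chi2.ppf(alpha / 2, 2 * observed) / 2 if observed > 0 else 0.0
    upper_count = chi2.ppf(1 - alpha / 2, 2 * (observed + 1)) / 2
    return observed / expected, lower_count / expected, upper_count / expected

# Made-up example: 50 observed cases vs 58.1 expected from population rates
print(sir_with_ci(50, 58.1))
```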


2021 ◽  
Author(s):  
Jianxing Ma ◽  
Chen Wang

Abstract This study aimed to establish nonnegative matrix factorization (NMF) subtyping related to the tumor microenvironment (TME) of colorectal cancer (CRC) and to construct a prognosis-related gene model that estimates the prognosis of CRC patients more accurately. The NMF algorithm, with merged clinical data, was used to classify samples based on TME-related differentially expressed genes (DEGs) shared between The Cancer Genome Atlas (TCGA) and Gene Expression Omnibus (GEO) datasets, and survival differences between subtype groups were compared. Using the createDataPartition function, TCGA samples were randomly divided into a training group and a test group. Univariate Cox analysis, Lasso regression and multivariate Cox regression were then used to obtain a risk model formula, which was used to score the samples in the training group, test group and GEO dataset and to divide the samples of each group into high-risk and low-risk groups according to the median score of the training group. The model was then validated. Patients with CRC were divided into 2, 3 or 5 subtypes. Comparison of overall survival (OS) and progression-free survival (PFS) showed that typing with the rank set to 5 was the most statistically significant (p = 0.007 and p < 0.001, respectively). Moreover, the constructed model containing 14 immune-related genes (PPARGC1A, CXCL11, PCOLCE2, GABRD, TRAF5, FOXD1, NXPH4, ALPK3, KCNJ11, NPR1, F2RL2, CD36, CCNF, DUSP14) can be used as an independent prognostic factor and outperforms some previous models in predicting patient prognosis. The five-subtype classification of CRC patients and the 14 immune-related gene model constructed here can accurately estimate the prognosis of patients with CRC.
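A rough outline of this kind of pipeline, assuming a non-negative expression matrix and hypothetical file and column names (the original analysis used R tools such as createDataPartition), might look like the following sketch; it is illustrative only.

```python
# Minimal sketch (not the authors' R pipeline): NMF subtyping of a non-negative
# expression matrix, then an L1-penalised Cox model to build a risk score split
# at the median. File names, column names and the gene subset are hypothetical.
import pandas as pd
from sklearn.decomposition import NMF
from lifelines import CoxPHFitter

expr = pd.read_csv("tme_deg_expression.csv", index_col=0)   # samples x genes, values >= 0
clin = pd.read_csv("clinical.csv", index_col=0)             # columns: os_time, os_event

# 1) NMF with the rank (number of subtypes) set to 5; each sample is assigned
#    to the factor with the largest weight
nmf = NMF(n_components=5, init="nndsvd", random_state=0, max_iter=1000)
W = nmf.fit_transform(expr.values)
subtype = W.argmax(axis=1)

# 2) Lasso-style Cox model on candidate immune-related genes to derive a risk score
genes = ["PPARGC1A", "CXCL11", "PCOLCE2", "GABRD", "TRAF5"]  # subset shown for brevity
df = expr[genes].join(clin)
cph = CoxPHFitter(penalizer=0.1, l1_ratio=1.0)               # pure L1 (Lasso) penalty
cph.fit(df, duration_col="os_time", event_col="os_event")

risk_score = cph.predict_partial_hazard(df)
high_risk = risk_score > risk_score.median()                 # median split into high/low risk
```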


Author(s):  
Bryan J Starkey

Colorectal cancer (CRC) causes 20 000 deaths per annum in the UK alone. Screening has been shown to reduce mortality but debate exists as to which approach to use. Direct visualization of the colorectum has the advantage that it detects lesions most effectively and is required at less frequent intervals, but the procedure is invasive and at present too costly for screening purposes. Faecal occult blood measurement, despite its limitations, is currently the recommended screening method, with follow-up of positive tests by colonoscopy or other visualization techniques. This strategy has been shown to reduce mortality from CRC by about 20%, and screening trials directed towards individuals in the over 50 years age group are underway in the UK and elsewhere. Future developments in CRC screening include colorectal visualization by computed tomographic colonography, a less invasive alternative to colonoscopy. Developments in stool analysis are also occurring. Examination of faecal samples for cellular products derived from neoplasms (e.g. calprotectin) may prove more sensitive and specific than faecal occult blood measurements. In addition, detection of altered DNA in faeces is being investigated by molecular biology techniques. Using a multi-target assay panel to detect point mutations and other neoplasia-associated DNA abnormalities may be an effective strategy for CRC screening in the future.


Circulation ◽  
2018 ◽  
Vol 137 (suppl_1) ◽  
Author(s):  
Marialaura Bonaccio ◽  
Augusto Di Castelnuovo ◽  
Simona Costanzo ◽  
Mariarosaria Persichillo ◽  
Chiara Cerletti ◽  
...  

Introduction: A life course approach has been suggested as the most appropriate way to establish the actual impact of socioeconomic status (SES) on health outcomes. Hypothesis: We assessed the hypothesis that SES trajectories from childhood to adulthood are useful to better evaluate the role of SES in mortality risk in a large general population-based cohort. Methods: Longitudinal analysis of 22,194 subjects recruited from the general population in the Moli-sani study, Italy (2005-2010). Educational attainment (low/high) and SES in adulthood (measured by a score including occupational social class, housing and overcrowding, and dichotomized as low/high) were used to define four possible trajectories within each level of childhood SES (low or high at age 8). Hazard ratios (HR) with 95% confidence intervals (95% CI) were calculated by multivariable Cox regression and competing risk models. Results: Over a median follow-up of 8.3 years (182,924 person-years), 1155 all-cause deaths were ascertained, of which 414 were cardiovascular (CVD). In the group with low SES in childhood, an upward trajectory in both educational attainment and material factors in adulthood, as opposed to remaining stably low (low education and low SES in adulthood), was associated with lower risk of both all-cause (HR=0.64; 95% CI 0.52-0.79; Table) and CVD mortality (HR=0.62; 0.43-0.88). Subjects with high childhood SES experienced an increased risk of total and CVD death in the absence of higher educational attainment, despite a higher SES in adulthood (HR=1.47; 1.04-2.07 and HR=1.75; 1.00-3.05, respectively), as compared to the group with both high education and high SES in adulthood. Conclusions: For individuals with low SES in childhood, an upward trajectory in both educational attainment and material factors over the life course is associated with lower risk of total and CVD death. In groups advantaged in childhood, the lack of higher educational attainment over the life course, rather than material factors, appears to be unfavourably associated with survival.
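A minimal sketch of how such trajectory groups might be coded and fed into a multivariable Cox model is given below; it is not the study's code, uses hypothetical column names, and fits a standard Cox model rather than the competing-risk models also used by the authors.

```python
# Minimal sketch (not the study's code): coding SES trajectories and fitting a
# multivariable Cox model for all-cause mortality. Column names are hypothetical;
# sex is assumed to be coded 0/1. The competing-risk models for CVD death are not shown.
import pandas as pd
from lifelines import CoxPHFitter

d = pd.read_csv("moli_sani_extract.csv")
# hypothetical columns: childhood_ses_high (0/1), education_high (0/1),
# adult_ses_high (0/1), followup_years, died (0/1), age, sex

# One trajectory label per combination, e.g. "0_1_1" = low childhood SES with
# upward educational attainment and material circumstances in adulthood
d["trajectory"] = (d["childhood_ses_high"].astype(str) + "_"
                   + d["education_high"].astype(str) + "_"
                   + d["adult_ses_high"].astype(str))

X = pd.get_dummies(d["trajectory"], prefix="traj", drop_first=True).astype(int)
X = X.join(d[["followup_years", "died", "age", "sex"]])

cph = CoxPHFitter()
cph.fit(X, duration_col="followup_years", event_col="died")
cph.print_summary()  # HRs for each trajectory vs the reference (dropped) category
```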


2020 ◽  
Author(s):  
Mo Chen ◽  
Tian-en Li ◽  
Pei-zhun Du ◽  
Junjie Pan ◽  
Zheng Wang ◽  
...  

Abstract Background and aims: In this research, we aimed to construct a risk classification model to predict overall survival (OS) and locoregional surgery benefit in colorectal cancer (CRC) patients with distant metastasis. Methods: We selected a cohort of 12,741 CRC patients diagnosed with distant metastasis between 2010 and 2014 from the Surveillance, Epidemiology, and End Results (SEER) database. Patients were randomly assigned to a training group and a validation group at a ratio of 2:1. Univariable and multivariable Cox regression models were applied to screen independent prognostic factors. A nomogram was constructed and assessed by Harrell's concordance index (C-index) and calibration plots. A novel risk classification model was further established based on the nomogram. Results: Ultimately, 12 independent risk factors, including race, age, marriage, tumor site, tumor size, grade, T stage, N stage, bone metastasis, brain metastasis, lung metastasis and liver metastasis, were identified and adopted in the nomogram. The C-indexes of the training and validation groups were 0.77 (95% confidence interval [CI] 0.73-0.81) and 0.75 (95% CI 0.72-0.78), respectively. The risk classification model stratified patients into three risk groups (low-, intermediate- and high-risk) with divergent median OS (low-risk: 36.0 months, 95% CI 34.1-37.9; intermediate-risk: 18.0 months, 95% CI 17.4-18.6; high-risk: 6.0 months, 95% CI 5.3-6.7). Locoregional therapies including surgery and radiotherapy could prognostically benefit patients in the low-risk group (surgery: hazard ratio [HR] 0.59, 95% CI 0.50-0.71; radiotherapy: HR 0.84, 95% CI 0.72-0.98) and intermediate-risk group (surgery: HR 0.61, 95% CI 0.54-0.68; radiotherapy: HR 0.86, 95% CI 0.77-0.95), but not in the high-risk group (surgery: HR 1.03, 95% CI 0.82-1.29; radiotherapy: HR 1.03, 95% CI 0.81-1.31). All risk groups could benefit from systemic therapy (low-risk: HR 0.68, 95% CI 0.58-0.80; intermediate-risk: HR 0.50, 95% CI 0.47-0.54; high-risk: HR 0.46, 95% CI 0.40-0.53). Conclusion: A novel risk classification model predicting prognosis and locoregional surgery benefit for CRC patients with distant metastasis was established and validated. This predictive model could be further utilized by physicians and is of significance for medical practice.
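To illustrate the modelling steps described (multivariable Cox model, C-index validation, risk stratification), here is a minimal sketch with hypothetical SEER-derived column names; it omits the nomogram construction itself and is not the authors' code.

```python
# Minimal sketch (not the authors' code): multivariable Cox model on a 2:1
# training/validation split, Harrell's C-index on the validation set, and a
# three-level risk stratification from the linear predictor. Column names are
# hypothetical placeholders for SEER-derived variables.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

seer = pd.read_csv("seer_metastatic_crc.csv")
covariates = ["age", "tumor_size", "t_stage_code", "n_stage_code",
              "liver_met", "lung_met", "bone_met", "brain_met"]   # subset of the 12 factors
cols = covariates + ["os_months", "death"]

train = seer.sample(frac=2 / 3, random_state=1)
valid = seer.drop(train.index)

cph = CoxPHFitter()
cph.fit(train[cols], duration_col="os_months", event_col="death")

# Harrell's C-index on the validation split (negate the hazard so that higher
# predicted values correspond to longer survival)
hazard_valid = cph.predict_partial_hazard(valid[covariates])
c_index = concordance_index(valid["os_months"], -hazard_valid, valid["death"])
print("validation C-index:", round(c_index, 3))

# Three risk groups from tertiles of the training-set hazard
cuts = cph.predict_partial_hazard(train[covariates]).quantile([1 / 3, 2 / 3]).values
risk_group = np.digitize(hazard_valid, cuts)   # 0 = low, 1 = intermediate, 2 = high
```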


2020 ◽  
Vol 35 (Supplement_3) ◽  
Author(s):  
Esther Wong ◽  
Dorothea Nitsch

Abstract Background and Aims The incidence of acute kidney injury (AKI) is known to be seasonal, peaking in the winter months among hospitalised patients. Previous studies have suggested that the seasonality of AKI is likely to be influenced by the seasonality of the underlying acute illnesses that are associated with AKI. Mortality of patients with AKI has also been reported as being higher in winter, reflecting well-described excess winter mortality associations. Here we describe the seasonal variation of AKI alerts in England and the associated mortality rate using linked national databases. Method Serum creatinine changes compatible with KDIGO AKI stages 1, 2 and 3 are sent by laboratories in England as AKI alerts to the treating clinicians and the UK Renal Registry (UKRR). We linked the electronic AKI alerts to Hospital Episode Statistics (HES) data to identify patients who were hospitalised. We carried out descriptive statistics and investigated the effect of season on 30-day patient mortality from the date of the AKI alert, using multivariable Cox regression and sequentially adjusting for age, sex, Index of Multiple Deprivation (IMD) and peak AKI stage. Results Winter had the highest number of AKI episodes (N=81,276), 6% higher than in summer (N=76,329) (Table 1). For patients who had an AKI episode and were admitted to hospital, crude 30-day mortality was higher in winter than in summer [HR 1.28 (1.25-1.31), p<0.01] (Figure 1). After adjusting for age, peak AKI stage, IMD and sex, winter still had significantly higher 30-day mortality than summer [HR 1.24 (1.21-1.27), p<0.01]. The winter mortality peak was confounded by age and AKI severity, which explained the drop in the hazard ratio at winter peaks, whereas season was not confounded by deprivation or sex. The pattern of seasonality varied with age: in the 18-39 age group, 26.1% of AKI episodes occurred in summer and 23.3% in winter, whereas in the >75 age group, 23.7% occurred in summer and 27.1% in winter. Conclusion Analysis of England data confirms a seasonal peak in AKI during the winter months. Additionally, it shows an increased risk of mortality for patients with AKI in the winter months. Future work will investigate the impact of comorbidities and case mix on outcomes. By understanding the seasonal variation of AKI, we can potentially plan preventive care and improve clinical practice.
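A minimal sketch of the sequential adjustment described above (crude, then adjusted for age, sex, IMD and peak AKI stage) is shown below; it is not the UKRR analysis code, and the file and column names are hypothetical.

```python
# Minimal sketch (not the UKRR analysis code): crude and sequentially adjusted
# Cox models for 30-day mortality after an AKI alert, comparing winter with
# summer. File and column names are hypothetical.
import pandas as pd
from lifelines import CoxPHFitter

aki = pd.read_csv("aki_alerts_linked_hes.csv")
# hypothetical columns: days_to_death_or_censor (capped at 30), died_30d (0/1),
# winter (0/1, winter vs summer), age, sex, imd_quintile, peak_aki_stage

models = {
    "crude": ["winter"],
    "adjusted": ["winter", "age", "sex", "imd_quintile", "peak_aki_stage"],
}
for name, covs in models.items():
    cph = CoxPHFitter()
    cph.fit(aki[covs + ["days_to_death_or_censor", "died_30d"]],
            duration_col="days_to_death_or_censor", event_col="died_30d")
    print(name, "HR winter vs summer:", round(cph.hazard_ratios_["winter"], 2))
```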


2020 ◽  
Vol 105 (12) ◽  
pp. e4688-e4698
Author(s):  
Zhi Cao ◽  
Chenjie Xu ◽  
Hongxi Yang ◽  
Shu Li ◽  
Fusheng Xu ◽  
...  

Abstract Context Recent studies have suggested that a higher body mass index (BMI) and higher serum urate levels are associated with a lower risk of developing dementia. However, these inverse relationships remain controversial, and whether serum urate and BMI confound each other is not well established. Objectives To investigate the independent associations of BMI and urate, as well as their interaction, with the risk of developing dementia. Design and Settings We analyzed a cohort of 502 528 individuals derived from the UK Biobank that included people aged 37–73 years for whom BMI and urate were recorded between 2006 and 2010. Dementia was ascertained at follow-up using electronic health records. Results During a median of 8.1 years of follow-up, a total of 2138 participants developed dementia. People who were underweight had an increased risk of dementia (hazard ratio [HR] = 1.91, 95% confidence interval [CI]: 1.24–2.97) compared with people of a healthy weight. However, the risk of dementia continued to fall as weight increased: those who were overweight and those who were obese were 19% (HR = 0.81, 95% CI: 0.73–0.90) and 22% (HR = 0.78, 95% CI: 0.68–0.88) less likely to develop dementia than people of a healthy weight. People in the highest quintile of urate also had a 25% (HR = 0.75, 95% CI: 0.64–0.87) lower risk of developing dementia than those in the lowest quintile. There was a significant multiplicative interaction between BMI and urate in relation to dementia (P for interaction = 0.004), and obesity strengthens the protective effect of serum urate on the risk of dementia. Conclusion Both BMI and urate are independent predictors of dementia, and there are inverse monotonic and dose-response associations of BMI and urate with dementia.
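The multiplicative interaction test described above can be sketched with a Cox model including a BMI×urate product term; this is not the authors' code, and the file and column names are hypothetical.

```python
# Minimal sketch (not the authors' code): Cox model for incident dementia with
# BMI, urate and their multiplicative interaction, using the lifelines formula
# interface. File and column names are hypothetical.
import pandas as pd
from lifelines import CoxPHFitter

ukb = pd.read_csv("ukb_dementia_cohort.csv")
# hypothetical columns: followup_years, dementia (0/1), bmi, urate, age, sex

cph = CoxPHFitter()
cph.fit(ukb, duration_col="followup_years", event_col="dementia",
        formula="bmi + urate + bmi:urate + age + sex")
cph.print_summary()  # the bmi:urate term tests the multiplicative interaction
```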


2002 ◽  
Vol 34 ◽  
pp. A65
Author(s):  
A. Pezzoli ◽  
V.G. Matarese ◽  
M. Brancaleoni ◽  
P. Buldrini ◽  
C. Rizzo ◽  
...  

PPAR Research ◽  
2009 ◽  
Vol 2009 ◽  
pp. 1-9 ◽  
Author(s):  
Ashlee B. Carter ◽  
Sarah A. Misyak ◽  
Raquel Hontecillas ◽  
Josep Bassaganya-Riera

Mounting evidence suggests that the risk of developing colorectal cancer (CRC) is dramatically increased for patients with chronic inflammatory diseases. For instance, patients with Crohn's disease (CD) or ulcerative colitis (UC) have a 12–20% increased risk of developing CRC. Preventive strategies utilizing nontoxic natural compounds that modulate immune responses could be successful in the suppression of inflammation-driven colorectal cancer in high-risk groups. Increasing the expression and transcriptional activity of peroxisome proliferator-activated receptor-γ (PPAR-γ) has been identified as a target for anti-inflammatory efforts and for the suppression of inflammation-driven colon cancer. PPAR-γ down-modulates inflammation and elicits antiproliferative and proapoptotic actions in epithelial cells, all of which may decrease the risk of inflammation-induced CRC. This review will focus on the use of orally active, naturally occurring chemopreventive approaches against inflammation-induced CRC that target PPAR-γ and therefore down-modulate inflammation.

