Development and Validation of a Risk Score to Predict the First Hip Fracture in the Oldest Old: A Retrospective Cohort Study

2019 ◽  
Vol 75 (5) ◽  
pp. 980-986 ◽  
Author(s):  
Ming-Tuen Lam ◽  
Chor-Wing Sing ◽  
Gloria H Y Li ◽  
Annie W C Kung ◽  
Kathryn C B Tan ◽  
...  

Abstract Background To evaluate whether common risk factors and risk scores (FRAX, QFracture, and Garvan) can predict hip fracture in the oldest old (defined as people aged 80 and older) and to develop an oldest-old-specific 10-year hip fracture prediction risk algorithm. Methods Subjects aged 80 years and older without a history of hip fracture were studied. For the derivation cohort (N = 251, mean age = 83), participants were enrolled with a median follow-up time of 8.9 years. For the validation cohort (N = 599, mean age = 85), outpatients were enrolled with a median follow-up of 2.6 years. A five-factor risk score (the Hong Kong Osteoporosis Study [HKOS] score) for incident hip fracture was derived and validated, and its predictive accuracy was evaluated and compared with that of the other risk scores. Results In the derivation cohort, the C-statistics were .65, .61, .65, .76, and .78 for FRAX with bone mineral density (BMD), FRAX without BMD, QFracture, Garvan, and the HKOS score, respectively. The category-less net reclassification index and integrated discrimination improvement of the HKOS score showed better reclassification of hip fracture than FRAX and QFracture (all p < .001) but not Garvan, while Garvan, but not the HKOS score, showed significant over-estimation of fracture risk (Hosmer–Lemeshow test p < .001). In the validation cohort, the HKOS score had a C-statistic of .81 and considerable agreement between expected and observed fracture risk in calibration. Conclusion The HKOS score can predict 10-year incident hip fracture among the oldest old in Hong Kong. The score may be useful for identifying oldest-old patients at risk of hip fracture in both community-dwelling and hospital settings.
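The C-statistics compared above are concordance probabilities: the chance that, of two comparable subjects, the one who fractures earlier was assigned the higher predicted risk. A minimal sketch of Harrell's C-index for right-censored follow-up data, using hypothetical toy values rather than anything from the study:

```python
from itertools import combinations

def harrell_c(times, events, scores):
    """Harrell's C-index: the fraction of usable pairs in which the
    subject with the shorter observed event time also has the higher
    predicted risk score. Ties in score count as 0.5."""
    concordant, usable = 0.0, 0
    for i, j in combinations(range(len(times)), 2):
        # order the pair so subject a has the earlier observed time
        a, b = (i, j) if times[i] < times[j] else (j, i)
        if not events[a]:
            continue  # earlier time is censored -> pair not usable
        usable += 1
        if scores[a] > scores[b]:
            concordant += 1.0
        elif scores[a] == scores[b]:
            concordant += 0.5
    return concordant / usable

# toy data: follow-up years, fracture indicator, predicted risk
times  = [2.0, 5.0, 1.0, 8.0, 3.0]
events = [1,   0,   1,   1,   0]
scores = [0.7, 0.2, 0.9, 0.8, 0.4]
print(round(harrell_c(times, events, scores), 3))  # prints 0.857
```

Censored subjects contribute only as the later member of a pair, which is why short follow-up (as in the validation cohort here) reduces the number of usable pairs.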

2017 ◽  
Vol 8 (1) ◽  
pp. 9-15
Author(s):  
Nazma Akter ◽  
Nazmul Kabir Qureshi ◽  
Zafar Ahmed Latif

Background: This study was designed to assess the effectiveness of the fracture risk assessment tool (FRAX) as an osteoporosis risk scoring instrument in Bangladeshi subjects and to assess how the results of the tools correlate with each other. Methods: This cross-sectional study was conducted between January 2016 and August 2016. The study population comprised 600 randomly selected Bangladeshi subjects who attended the outpatient department (OPD) of MARKS Medical College & Hospital, Dhaka, Bangladesh. The age range of the subjects was 40 to 75 years. The subjects had not undergone bone mineral density (BMD) testing, and none had previously been diagnosed with or treated for osteoporosis. A questionnaire was designed to complete the osteoporosis-specific risk score sheet. The 10-year incidence of major osteoporotic and hip fracture as a function of the FRAX probability was calculated using the fracture risk assessment tool. Results: A total of 600 subjects were included. Among them, 59.2% and 40.8% were male and female, respectively. The mean age (±SD) of the study subjects was 52.16±7.96 years. Mean BMI was higher in females than in males (p<0.05). The FRAX-predicted 10-year risk scores for major osteoporotic fractures were significantly higher in females than in males (p<0.02). Risk scores for both major osteoporotic fractures and hip fractures showed a significant association in post-menopausal women compared with those who were not menopausal (p<0.05). Risk assessment factors for the risk scores differed significantly between male and female subjects and between postmenopausal and non-menopausal women. Among risk assessment factors, a family history of hip fracture, glucocorticoid use, and rheumatoid arthritis showed a strong association with risk scores ≥20% for major osteoporotic fracture (p<0.05) and ≥3% for hip fracture (p<0.05).
Subjects with a history of previous fracture or secondary osteoporosis showed a significant association only with risk scores ≥3% for hip fracture (p<0.05). Conclusion: The public health burden of fractures will not be reduced unless the subset of patients at increased risk for fracture is identified and treated. Ten-year fracture risk assessment with FRAX is increasingly used to guide treatment decisions. It is an effective tool for predicting fracture probability, particularly in developing countries like Bangladesh, where most patients cannot afford expensive dual-energy X-ray absorptiometry scans. Birdem Med J 2018; 8(1): 9-15
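The ≥20% (major osteoporotic) and ≥3% (hip) cut-offs used in this abstract are commonly applied intervention thresholds, and flagging a subject against them is a single comparison. A sketch with hypothetical FRAX probabilities (the function name and example values are my own):

```python
def frax_high_risk(major_pct, hip_pct, major_cutoff=20.0, hip_cutoff=3.0):
    """Flag high fracture risk using the cut-offs applied in the study:
    10-year major osteoporotic fracture probability >= 20%, or
    10-year hip fracture probability >= 3%."""
    return major_pct >= major_cutoff or hip_pct >= hip_cutoff

print(frax_high_risk(12.5, 3.4))  # True: hip probability alone exceeds 3%
print(frax_high_risk(8.0, 1.1))   # False: below both thresholds
```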


2020 ◽  
Vol 4 (Supplement_1) ◽  
Author(s):  
Bu Beng Yeap ◽  
Helman Alfonso ◽  
Paul Chubb ◽  
Jacqueline Center ◽  
Jonathan Beilin ◽  
...  

Abstract Osteoporosis resulting in bone fractures is a major cause of morbidity in older men. Previous studies implicated reduced exposure to estradiol (E2) in increased fracture risk in men. The extent to which circulating androgens contribute to the maintenance of bone health is uncertain. We examined associations of different sex hormones with the incidence of any bone fracture or hip fracture in older men. We analysed 3,307 community-dwelling men aged 76.8±3.5 years, followed for a median of 10.6 years. Medical information was collected by questionnaire. Frailty was assessed using the FRAIL scale (1). Early morning plasma testosterone (T), dihydrotestosterone (DHT) and E2 were assayed by mass spectrometry, and sex hormone-binding globulin (SHBG) and luteinising hormone (LH) by immunoassay. Incidence of any fracture and hip fracture was determined via data linkage to emergency department presentations and hospital admissions. Risk of fracture according to sex hormone concentrations was analysed. Hazard ratios of fracture according to sex hormone quartiles (Q1-4) were assessed using Cox regression models adjusted for age, medical comorbidities and frailty. In 30,355 participant-years of follow-up, the incidence of any fracture was 1.1% and of hip fracture 0.5% per participant per year. Incident fractures occurred in 330 men, including 144 hip fractures. Probability plots suggested non-linear relationships between hormones and risk of any fracture and hip fracture, with higher risk at lower and higher concentrations of plasma T, lower E2, higher SHBG and higher LH. In fully-adjusted models, there was a U-shaped association of plasma T with incidence of any fracture (Q1: reference group; Q2: fully-adjusted hazard ratio [HR]=0.69, 95% confidence interval [CI]=0.51-0.94, p=0.020; Q3: HR=0.59, CI=0.42-0.83, p=0.002; Q4: HR=0.85, CI=0.62-1.18, p=0.335).
A similar U-shaped association of T was found with incidence of hip fracture (Q1: HR=1.0; Q2: HR=0.60, CI=0.37-0.93, p=0.043; Q3: HR=0.52, CI=0.31-0.88, p=0.015; Q4: HR=1.04, CI=0.65-1.68, p=0.866). DHT, E2 and LH were not associated with incidence of any fracture or hip fracture (all p>0.05). SHBG was not associated with incidence of any fracture, but was associated with hip fracture (Q4 vs Q1: HR=1.76, CI=1.05-2.96, p=0.033). In conclusion, we found a non-linear, U-shaped association of T with fracture risk, and no association of E2. Mid-range plasma T was associated with a lower incidence of any fracture and hip fracture, and higher SHBG with an increased risk of hip fracture. Circulating androgens rather than estrogens may be the biomarker of hormone effects on bone that drive fracture risk. A randomised controlled trial of T therapy powered for the outcome of fracture may be warranted and should recruit men with baseline T in the lowest quartile of values. Reference: (1) Hyde Z, et al. J Clin Endocrinol Metab 2010; 95: 3165-3172.
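The hazard ratios above are estimated per hormone quartile (Q1–Q4). Assigning a subject to a quartile of a cohort's reference distribution can be sketched as below; the crude index-based percentile cut points and the sample values are illustrative assumptions, not the study's method:

```python
def quartile(value, sorted_sample):
    """Return 1-4: which quartile of the reference sample the value
    falls in, using crude 25th/50th/75th percentile cut points."""
    n = len(sorted_sample)
    cuts = [sorted_sample[(n * k) // 4] for k in (1, 2, 3)]
    return 1 + sum(value >= c for c in cuts)

sample = sorted(range(1, 21))  # hypothetical plasma T reference values
print(quartile(3, sample), quartile(12, sample), quartile(18, sample))
# prints: 1 3 4
```

With group indicators like these, a Cox model estimates one hazard ratio per quartile against the Q1 reference, which is what reveals the U-shape reported above.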


Author(s):  
Ilse Bloom ◽  
Anna Pilgrim ◽  
Karen A. Jameson ◽  
Elaine M. Dennison ◽  
Avan A. Sayer ◽  
...  

Abstract Objectives To identify early nutritional risk in older populations, simple screening approaches are needed. This study aimed to compare nutrition risk scores, calculated from a short checklist, with diet quality and health outcomes, both at baseline and prospectively over a 2.5-year follow-up period; the association between baseline scores and risk of mortality over the follow-up period was also assessed. Methods The study included 86 community-dwelling older adults in Southampton, UK, recruited from outpatient clinics. At both assessments, hand grip strength was measured using a Jamar dynamometer. Diet was assessed using a short validated food frequency questionnaire; derived ‘prudent’ diet scores described diet quality. Body mass index (BMI) was calculated and weight loss was self-reported. Nutrition risk scores were calculated from a checklist adapted from the DETERMINE checklist (range 0–17). Results The mean age of participants at baseline (n = 86) was 78 (SD 8) years; half (53%) scored at ‘moderate’ or ‘high’ nutritional risk using the checklist adapted from DETERMINE. In cross-sectional analyses, after adjusting for age, sex and education, higher nutrition risk scores were associated with lower grip strength [difference in grip strength: − 0.09, 95% CI (− 0.17, − 0.02) SD per unit increase in nutrition risk score, p = 0.017] and poorer diet quality [prudent diet score: − 0.12, 95% CI (− 0.21, − 0.02) SD, p = 0.013]. The association with diet quality was robust to further adjustment for number of comorbidities, whereas the association with grip strength was attenuated. Nutrition risk scores were not related to reported weight loss or BMI at baseline. In longitudinal analyses there was an association between baseline nutrition risk score and lower grip strength at follow-up [fully-adjusted model: − 0.12, 95% CI (− 0.23, − 0.02) SD, p = 0.024].
Baseline nutrition risk score was also associated with greater risk of mortality [unadjusted hazard ratio per unit increase in score: 1.29 (1.01, 1.63), p = 0.039]; however, this association was attenuated after adjustment for sex and age. Conclusions Cross-sectional associations between higher nutrition risk scores, assessed from a short checklist, and poorer diet quality suggest that this approach may hold promise as a simple way of screening older populations. Further larger prospective studies are needed to explore the predictive ability of this screening approach and its potential to detect nutritional risk in older adults.


Author(s):  
Paul Welsh ◽  
Claire E. Welsh ◽  
Pardeep S. Jhund ◽  
Mark Woodward ◽  
Rosemary Brown ◽  
...  

Background: Abdominal aortic aneurysm (AAA) can occur in patients who are ineligible for routine ultrasound screening. A simple AAA risk score was derived and compared with current guidelines used for ultrasound screening of AAA. Methods: UK Biobank participants without previous AAA were split into a derivation cohort (n=401,820, 54.6% women, mean age 56.4 years, 95.5% white race) and a validation cohort (n=83,816). Incident AAA was defined as a first hospital inpatient diagnosis of AAA, death from AAA, or an AAA-related surgical procedure. A multivariable Cox model was developed in the derivation cohort and converted into an AAA risk score that did not require blood biomarkers. To illustrate the sensitivity and specificity of the risk score for AAA, a theoretical threshold for referral to ultrasound at 0.25% 10-year risk was modelled. Discrimination of the risk score was compared with a model based on the US Preventive Services Task Force (USPSTF) AAA screening guidelines. Results: In the derivation cohort there were 1,570 (0.40%) cases of AAA over a median 11.3 years of follow-up. Components of the AAA risk score were age (stratified by smoking status), weight (stratified by smoking status), antihypertensive and cholesterol-lowering medication use, height, diastolic blood pressure, baseline cardiovascular disease, and diabetes. In the validation cohort, over ten years of follow-up, the C-index for the model of the USPSTF guidelines was 0.705 (95% CI 0.678–0.733). The C-index of the risk score as a continuous variable was 0.856 (95% CI 0.837–0.878). In the validation cohort, the USPSTF model yielded sensitivity 63.9% and specificity 71.3%. At the 0.25% 10-year risk threshold, the risk score yielded sensitivity 82.1% and specificity 70.7%, while also improving the net reclassification index (NRI) relative to the USPSTF model (+0.176, 95% CI 0.120–0.232).
A combined model, in which risk scoring was combined with the USPSTF model, also improved prediction compared with USPSTF alone (NRI +0.101, 95% CI 0.055–0.147). Conclusions: In an asymptomatic general population, a risk score based on patient age, height, weight and medical history may improve identification of asymptomatic patients at risk of clinical events from AAA. Further development and validation of risk scores to detect asymptomatic AAA is needed.
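Sensitivity and specificity at the modelled 0.25% referral threshold follow directly from a 2×2 classification of predicted risk against observed AAA. A sketch with made-up risks and outcomes (not the UK Biobank data):

```python
def sens_spec(risks, outcomes, threshold):
    """Sensitivity and specificity of the rule 'refer for ultrasound
    if predicted 10-year risk >= threshold' against observed cases."""
    tp = sum(r >= threshold and y for r, y in zip(risks, outcomes))
    fn = sum(r < threshold and y for r, y in zip(risks, outcomes))
    tn = sum(r < threshold and not y for r, y in zip(risks, outcomes))
    fp = sum(r >= threshold and not y for r, y in zip(risks, outcomes))
    return tp / (tp + fn), tn / (tn + fp)

# hypothetical predicted 10-year risks (%) and observed AAA (1 = case)
risks    = [0.10, 0.30, 0.40, 0.05, 0.26, 0.50, 0.90, 0.15]
outcomes = [0,    1,    1,    0,    1,    0,    1,    1]
sens, spec = sens_spec(risks, outcomes, threshold=0.25)
print(round(sens, 2), round(spec, 2))  # prints: 0.8 0.67
```

Moving the threshold trades the two quantities against each other, which is why the abstract reports both at the chosen 0.25% cut point.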


2020 ◽  
Vol 41 (Supplement_2) ◽  
Author(s):  
P Meyre ◽  
S Aeschbacher ◽  
S Blum ◽  
M Coslovsky ◽  
J.H Beer ◽  
...  

Abstract Background Patients with atrial fibrillation (AF) have a high risk of hospital admissions, but there is no validated prediction tool to identify those at highest risk. Purpose To develop and externally validate a risk score for all-cause hospital admissions in patients with AF. Methods We used a prospective cohort of 2387 patients with established AF as the derivation cohort. Independent risk factors were selected from a broad range of variables using the least absolute shrinkage and selection operator (LASSO) method fit to a Cox regression model. The developed risk score was externally validated in a separate prospective, multicenter cohort of 1300 AF patients. Results In the derivation cohort, 891 patients (37.3%) were admitted to the hospital over a median follow-up of 2.0 years. In the validation cohort, hospital admissions occurred in 719 patients (55.3%) during a median follow-up of 1.9 years. The most important predictors for admission were age (75–79 years: adjusted hazard ratio [aHR], 1.33; 95% confidence interval [95% CI], 1.00–1.77; 80–84 years: aHR, 1.51; 95% CI, 1.12–2.03; ≥85 years: aHR, 1.88; 95% CI, 1.35–2.61), prior pulmonary vein isolation (aHR, 0.74; 95% CI, 0.60–0.90), hypertension (aHR, 1.16; 95% CI, 0.99–1.36), diabetes (aHR, 1.38; 95% CI, 1.17–1.62), coronary heart disease (aHR, 1.18; 95% CI, 1.02–1.37), prior stroke/TIA (aHR, 1.28; 95% CI, 1.10–1.50), heart failure (aHR, 1.21; 95% CI, 1.04–1.41), peripheral artery disease (aHR, 1.31; 95% CI, 1.06–1.63), cancer (aHR, 1.33; 95% CI, 1.13–1.57), renal failure (aHR, 1.18; 95% CI, 1.01–1.38), and previous falls (aHR, 1.44; 95% CI, 1.16–1.78). A risk score with these variables was well calibrated and achieved a C-index of 0.64 in the derivation and 0.59 in the validation cohort. Conclusions Multiple risk factors were associated with hospital admissions in AF patients. This prediction tool selects high-risk patients who may benefit from preventive interventions.
Figure: The Admit-AF risk score. Funding Acknowledgement: Type of funding source: Public grant(s) – National budget only. Main funding source(s): the Swiss National Science Foundation (Grant numbers 33CS30_1148474 and 33CS30_177520), the Foundation for Cardiovascular Research Basel and the University of Basel.


2009 ◽  
Vol 37 (3) ◽  
pp. 392-398 ◽  
Author(s):  
D. A. Story ◽  
M. Fink ◽  
K. Leslie ◽  
P. S. Myles ◽  
S.-J. Yap ◽  
...  

We developed a risk score for 30-day postoperative mortality: the Perioperative Mortality risk score. We used a derivation cohort from a previous study of surgical patients aged 70 years or more at three large metropolitan teaching hospitals, using the significant risk factors for 30-day mortality from multivariate analysis. We summed the risk scores for each of six factors to create an overall Perioperative Mortality score. We included 1012 patients, and the 30-day mortality was 6%. The three preoperative factors and risk scores were (“three A's”): 1) age, years: 70 to 79=1, 80 to 89=3, 90+=6; 2) ASA physical status: ASA I or II=0, ASA III=3, ASA IV=6, ASA V=15; and 3) preoperative albumin <30 g/l=2.5. The three postoperative factors and risk scores were (“three I's”): 1) unplanned intensive care unit admission=4.0; 2) systemic inflammation=3; and 3) acute renal impairment=2.5. Scores and mortality were: <5=1%, 5 to 9.5=7% and ≥10=26%. We also used a preliminary validation cohort of 256 patients from a regional hospital. The area under the receiver operating characteristic curve (C-statistic) for the derivation cohort was 0.80 (95% CI 0.74 to 0.86), similar to the validation C-statistic: 0.79 (95% CI 0.70 to 0.88), P=0.88. The Hosmer-Lemeshow test (P=0.35) indicated good calibration in the validation cohort. The Perioperative Mortality score is straightforward and may assist progressive risk assessment and management during the perioperative period. Risk associated with surgical complexity and urgency could be added to this baseline, patient-factor Perioperative Mortality score.
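The published point values make the Perioperative Mortality score directly computable. A sketch implementing the "three A's" and "three I's" exactly as listed above, along with the reported mortality bands (variable names are my own):

```python
def perioperative_mortality_score(age, asa, albumin_g_per_l,
                                  unplanned_icu, systemic_inflammation,
                                  acute_renal_impairment):
    """Sum the six published point values."""
    score = 0.0
    # three A's (preoperative)
    if 70 <= age <= 79:
        score += 1
    elif 80 <= age <= 89:
        score += 3
    elif age >= 90:
        score += 6
    score += {1: 0, 2: 0, 3: 3, 4: 6, 5: 15}[asa]  # ASA physical status
    if albumin_g_per_l < 30:
        score += 2.5
    # three I's (postoperative)
    if unplanned_icu:
        score += 4.0
    if systemic_inflammation:
        score += 3
    if acute_renal_impairment:
        score += 2.5
    return score

def mortality_band(score):
    """Map a score to the reported 30-day mortality band."""
    if score < 5:
        return "1%"
    return "7%" if score <= 9.5 else "26%"

s = perioperative_mortality_score(82, 3, 28, False, True, False)
print(s, mortality_band(s))  # prints: 11.5 26%
```

An 82-year-old ASA III patient with low albumin and postoperative systemic inflammation lands in the highest band, matching the progressive-risk use the authors describe.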


2020 ◽  
Vol 14 (Supplement_1) ◽  
pp. S069-S070
Author(s):  
N Borren ◽  
D Plichta ◽  
A Joshi ◽  
G Bonilla ◽  
R Sadreyev ◽  
...  

Abstract Background Inflammatory bowel diseases (IBD) are characterised by intermittent relapses, and their course is heterogeneous and often unpredictable. The ability of protein, metabolite, or microbial biomarkers to predict relapse in patients with quiescent disease is unknown. Methods This prospective study enrolled patients with quiescent Crohn’s disease (CD) and ulcerative colitis (UC), defined as the absence of clinical symptoms (HBI < 4, SCCAI < 2) and a colonoscopy within the prior year showing endoscopic remission. The primary outcome was relapse within 2 years, defined as symptomatic worsening accompanied by elevated inflammatory markers resulting in a change in therapy, or IBD-related hospitalisation or surgery. Metabolomic and proteomic profiling was performed on serum samples, and stool samples underwent shotgun metagenomic sequencing on the Illumina HiSeq platform. Biomarkers were tested in a derivation cohort and their performance examined in an independent validation cohort. Results Our prospective cohort study included 164 patients with IBD (108 CD, 56 UC). Over a median follow-up of 1 year, 22 patients (13.4%) experienced a relapse. Three protein biomarkers (IL-10, GDNF, and CD8A) and four metabolomic markers (propionyl-L-carnitine, carnitine, sarcosine, and sorbitol) were associated with relapse in multivariable models at p < 0.05. Proteomic and metabolomic risk scores, defined as the tertile sum of the serum levels of these biomarkers, independently predicted relapse with a combined AUC of 0.83. This risk score clearly differentiated relapsers from those who remained in remission in both the derivation and validation cohorts (Figure 2). A high proteomic risk score (OR 9.11, 95% CI 1.90–43.61) or metabolomic risk score (OR 5.79, 95% CI 1.24–27.11) independently predicted a higher risk of relapse over 2 years.
Faecal metagenomics from 85 patients demonstrated increased abundance of Proteobacteria (p = 0.0019, q = 0.019) and Fusobacteria (p = 0.0040, q = 0.020) and, at the species level, Lachnospiraceae_bacterium_2_1_58FAA (p = 0.000008, q = 0.0009) among relapsers. Proinflammatory changes in the microbiome correlated with the metabolomic and proteomic perturbations associated with relapse. Conclusion Proteomic, metabolomic, and microbial biomarkers identify a pro-inflammatory state in quiescent IBD that predisposes to clinical relapse.
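The proteomic and metabolomic risk scores above are tertile sums: each biomarker contributes its tertile within the cohort distribution, and the contributions are added. A sketch with illustrative reference values; the crude index-based tertile cut points are an assumption, not the authors' exact procedure:

```python
def tertile(value, sorted_ref):
    """0/1/2: which tertile of the reference distribution the value
    falls in, using crude index-based cut points."""
    n = len(sorted_ref)
    cuts = [sorted_ref[(n * k) // 3] for k in (1, 2)]
    return sum(value >= c for c in cuts)

def tertile_sum_score(subject_levels, reference_levels):
    """Risk score = sum over biomarkers of the subject's tertile."""
    return sum(tertile(v, sorted(ref))
               for v, ref in zip(subject_levels, reference_levels))

# three hypothetical biomarkers sharing a reference distribution;
# subject sits in the top, middle, and bottom tertiles respectively
refs = [list(range(1, 10))] * 3
print(tertile_sum_score([8, 5, 2], refs))  # prints: 3
```

Tertile coding makes the score robust to each biomarker's scale, at the cost of discarding within-tertile differences.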


2020 ◽  
Vol 41 (Supplement_2) ◽  
Author(s):  
J.M Leerink ◽  
H.J.H Van Der Pal ◽  
E.A.M Feijen ◽  
P.G Meregalli ◽  
M.S Pourier ◽  
...  

Abstract Background Childhood cancer survivors (CCS) treated with anthracyclines and/or chest-directed radiotherapy receive life-long echocardiographic surveillance to detect cardiomyopathy early. Current risk stratification and surveillance frequency recommendations are based on anthracycline and chest-directed radiotherapy dose. We assessed the added prognostic value of an initial left ventricular ejection fraction (EF) measurement at >5 years after cancer diagnosis. Patients and methods Echocardiographic follow-up was performed in asymptomatic CCS from the Emma Children's Hospital (derivation; n=299; median time after diagnosis, 16.7 years [interquartile range (IQR) 11.8–23.15]) and from the Radboud University Medical Center (validation; n=218; median time after diagnosis, 17.0 years [IQR 13.0–21.7]) in the Netherlands. CCS with cardiomyopathy at baseline were excluded (n=16). The endpoint was cardiomyopathy, defined as a clinically significant decreased EF (EF <40%). The predictive value of the initial EF at >5 years after cancer diagnosis was analyzed with multivariable Cox regression models in the derivation cohort, and the model was validated in the validation cohort. Results The median follow-up after the initial EF was 10.9 years and 8.9 years in the derivation and validation cohorts, respectively, with cardiomyopathy developing in 11/299 (3.7%) and 7/218 (3.2%), respectively. Addition of the initial EF on top of anthracycline and chest radiotherapy dose increased the C-index from 0.75 to 0.85 in the derivation cohort and from 0.71 to 0.92 in the validation cohort (p<0.01). The model was well calibrated at 10-year predicted probabilities up to 5%. An initial EF between 40–49% was associated with a hazard ratio of 6.8 (95% CI 1.8–25) for the development of cardiomyopathy during follow-up.
For those with a predicted 10-year cardiomyopathy probability <3% (76.9% of the derivation cohort and 74.3% of the validation cohort), the negative predictive value was >99% in both cohorts. Conclusion The addition of the initial EF >5 years after cancer diagnosis to anthracycline and chest-directed radiotherapy dose improves 10-year cardiomyopathy prediction in CCS. Our validated prediction model identifies low-risk survivors in whom the surveillance frequency may be reduced to every 10 years. Figure: Calibration in both cohorts. Funding Acknowledgement: Type of funding source: Foundation. Main funding source(s): Dutch Heart Foundation
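The >99% negative predictive value reported above is simply the fraction of survivors classified below the 3% predicted-probability threshold who indeed remained free of cardiomyopathy. A sketch on made-up counts, not the cohort data:

```python
def negative_predictive_value(low_risk_flags, developed_cm):
    """Among subjects flagged low risk (e.g. predicted 10-year
    probability < 3%), the fraction who did not develop the outcome."""
    low = [cm for flag, cm in zip(low_risk_flags, developed_cm) if flag]
    return 1 - sum(low) / len(low)

# hypothetical cohort: 200 survivors flagged low risk, 1 of whom
# develops cardiomyopathy; 50 flagged higher risk
flags = [True] * 200 + [False] * 50
cm = ([0] * 199 + [1]) + [1] * 5 + [0] * 45
print(round(negative_predictive_value(flags, cm), 4))  # prints: 0.995
```

Note that NPV depends on outcome prevalence: with only ~3% of survivors developing cardiomyopathy, a high NPV is easier to reach than a high positive predictive value.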


2021 ◽  
pp. 1-14
Author(s):  
Magdalena I. Tolea ◽  
Jaeyeong Heo ◽  
Stephanie Chrisphonte ◽  
James E. Galvin

Background: Although it is an efficacious dementia-risk scoring system, Cardiovascular Risk Factors, Aging, and Dementia (CAIDE) was derived using midlife risk factors in a population with low educational attainment that does not reflect today’s US population, and it requires laboratory biomarkers, which are not always available. Objective: To develop and validate a modified CAIDE (mCAIDE) system and test its ability to predict the presence, severity, and etiology of cognitive impairment in older adults. Methods: The population consisted of 449 participants in dementia research (N = 230; community sample; 67.9±10.0 years old, 29.6% male, 13.7±4.1 years of education) or receiving dementia clinical services (N = 219; clinical sample; 74.3±9.8 years old, 50.2% male, 15.5±2.6 years of education). The mCAIDE, which includes self-reported and performance-based rather than blood-derived measures, was developed in the community sample and tested in the independent clinical sample. Validity against the Framingham, Hachinski, and CAIDE risk scores was assessed. Results: Higher mCAIDE quartiles were associated with lower performance on global and domain-specific cognitive tests. Each one-point increase in mCAIDE increased the odds of mild cognitive impairment (MCI) by up to 65%, the odds of AD by 69%, and the odds of non-AD dementia by >85%, with the highest scores in cases with vascular etiologies. Being in the highest mCAIDE risk group improved the ability to discriminate dementia from MCI and controls, and MCI from controls, with a cut-off of ≥7 points offering the highest sensitivity, specificity, and positive and negative predictive values. Conclusion: mCAIDE is a robust indicator of cognitive impairment in community-dwelling seniors that discriminates well across severity levels, including MCI versus controls.
The mCAIDE may be a valuable tool for case ascertainment in research studies, for flagging primary care patients for cognitive testing, and for identifying those in need of lifestyle interventions for symptomatic control.


2020 ◽  
Vol 41 (35) ◽  
pp. 3325-3333 ◽  
Author(s):  
Taavi Tillmann ◽  
Kristi Läll ◽  
Oliver Dukes ◽  
Giovanni Veronesi ◽  
Hynek Pikhart ◽  
...  

Abstract Aims Cardiovascular disease (CVD) risk prediction models are used in Western European countries, but less so in Eastern European countries where rates of CVD can be two to four times higher. We recalibrated the SCORE prediction model for three Eastern European countries and evaluated the impact of adding seven behavioural and psychosocial risk factors to the model. Methods and results We developed and validated models using data from the prospective HAPIEE cohort study with 14 598 participants from Russia, Poland, and the Czech Republic (derivation cohort, median follow-up 7.2 years, 338 fatal CVD cases) and Estonian Biobank data with 4632 participants (validation cohort, median follow-up 8.3 years, 91 fatal CVD cases). The first model (recalibrated SCORE) used the same risk factors as in the SCORE model. The second model (HAPIEE SCORE) added education, employment, marital status, depression, body mass index, physical inactivity, and antihypertensive use. Discrimination of the original SCORE model (C-statistic 0.78 in the derivation and 0.83 in the validation cohorts) was improved in recalibrated SCORE (0.82 and 0.85) and HAPIEE SCORE (0.84 and 0.87) models. After dichotomizing risk at the clinically meaningful threshold of 5%, and when comparing the final HAPIEE SCORE model against the original SCORE model, the net reclassification improvement was 0.07 [95% confidence interval (CI) 0.02–0.11] in the derivation cohort and 0.14 (95% CI 0.04–0.25) in the validation cohort. Conclusion Our recalibrated SCORE may be more appropriate than the conventional SCORE for some Eastern European populations. The addition of seven quick, non-invasive, and cheap predictors further improved prediction accuracy.
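The net reclassification improvement quoted above dichotomizes risk at the 5% threshold and credits the new model for events moved above the threshold and non-events moved below it. A sketch of the two-category NRI on hypothetical paired predictions (not the HAPIEE data):

```python
def two_category_nri(old_risk, new_risk, outcome, threshold=0.05):
    """Two-category NRI at `threshold`:
    (up - down)/events + (down - up)/non-events, where 'up' means the
    new model reclassifies the subject above the threshold."""
    ev_up = ev_down = ne_up = ne_down = 0
    events = nonevents = 0
    for o, n, y in zip(old_risk, new_risk, outcome):
        old_hi, new_hi = o >= threshold, n >= threshold
        if y:
            events += 1
            ev_up += (not old_hi) and new_hi
            ev_down += old_hi and (not new_hi)
        else:
            nonevents += 1
            ne_up += (not old_hi) and new_hi
            ne_down += old_hi and (not new_hi)
    return (ev_up - ev_down) / events + (ne_down - ne_up) / nonevents

# hypothetical risks from the old (SCORE) and new (extended) models
old = [0.02, 0.06, 0.03, 0.08, 0.04]
new = [0.06, 0.07, 0.02, 0.03, 0.06]
outcome = [1, 1, 0, 0, 0]  # fatal CVD indicator
print(two_category_nri(old, new, outcome))  # prints: 0.5
```

Here one event is correctly moved above 5% (+1/2) while one non-event moves up and one moves down (net 0/3), giving NRI = 0.5. The category-less NRI cited in the first abstract replaces the threshold with any increase or decrease in predicted risk.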

