Timing of cranioplasty: a 10.75-year single-center analysis of 754 patients

2018 ◽  
Vol 128 (6) ◽  
pp. 1648-1652 ◽  
Author(s):  
Ryan P. Morton ◽  
Isaac Josh Abecassis ◽  
Josiah F. Hanson ◽  
Jason K. Barber ◽  
Mimi Chen ◽  
...  

OBJECTIVE: Despite their technical simplicity, cranioplasty procedures carry high reported morbidity rates. The authors here present the largest study to date on complications after cranioplasty, focusing specifically on the relationship between complications and timing of the operation. METHODS: The authors retrospectively reviewed all cranioplasty cases performed at Harborview Medical Center over the past 10.75 years. In addition to relevant clinical and demographic characteristics, patient morbidity and mortality data were abstracted from the electronic medical record. Cox proportional hazards models were used to analyze variables potentially associated with the risk of infection, hydrocephalus, seizure, hematoma, and bone flap resorption. RESULTS: Over the course of 10.75 years, 754 cranioplasties were performed at a single institution. Sixty percent of the patients who underwent these cranioplasties were male, and the median follow-up overall was 233 days. The 30-day mortality rate was 0.26% (2 cases, both due to postoperative epidural hematoma). Overall, 24.6% of the patients experienced at least 1 complication, including infection necessitating explantation of the flap (6.6%), postoperative hydrocephalus requiring a shunt (9.0%), resorption of the flap requiring synthetic cranioplasty (6.3%), seizure (4.1%), postoperative hematoma requiring evacuation (2.3%), and other (1.6%). The rate of infection was significantly higher if the cranioplasty had been performed < 14 days after the initial craniectomy (p = 0.007, Holm-Bonferroni-adjusted p = 0.028). Hydrocephalus was significantly correlated with time to cranioplasty (OR 0.92 per 10-day increase, p < 0.001) and was most common in patients whose cranioplasty had been performed < 90 days after initial craniectomy. New-onset seizure, however, only occurred in patients who had undergone their cranioplasty > 90 days after initial craniectomy. Bone flap resorption was the least likely complication for patients whose cranioplasty had been performed between 15 and 30 days after initial craniectomy. Resorption was also correlated with patient age, with a hazard ratio of 0.67 per increase of 10 years of age (p = 0.001). CONCLUSIONS: Cranioplasty performed between 15 and 30 days after initial craniectomy may minimize infection, seizure, and bone flap resorption, whereas waiting > 90 days may minimize hydrocephalus but may increase the risk of seizure.
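The Holm-Bonferroni adjustment reported above (raw p = 0.007 adjusted to 0.028, consistent with four comparisons) is a step-down multiple-testing correction that can be sketched in a few lines of Python. The three p-values other than 0.007 below are hypothetical placeholders, not values from the study.

```python
def holm_adjust(pvals):
    """Holm-Bonferroni step-down adjustment of a list of raw p-values."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # indices sorted by p-value
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, i in enumerate(order):
        # multiply by the number of hypotheses not yet tested at this step,
        # capping at 1 and enforcing monotonicity over the sorted sequence
        running_max = max(running_max, min(1.0, (m - rank) * pvals[i]))
        adjusted[i] = running_max
    return adjusted

# raw infection p-value from the study plus three hypothetical comparisons
print(holm_adjust([0.007, 0.20, 0.30, 0.40]))  # smallest p adjusts to 0.028
```

With four hypotheses the smallest p-value is multiplied by 4, matching the 0.007 → 0.028 adjustment in the abstract.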

2019 ◽  
Vol 44 (4) ◽  
pp. 604-614 ◽  
Author(s):  
Gianmarco Lombardi ◽  
Pietro Manuel Ferraro ◽  
Luca Calvaruso ◽  
Alessandro Naticchia ◽  
Silvia D’Alonzo ◽  
...  

Background/Aims: The aim of our study was to describe the association between natremia (Na) fluctuation and hospital mortality in a general population admitted to a tertiary medical center. Methods: We performed a retrospective observational cohort study of the patient population admitted to the Fondazione Policlinico A. Gemelli IRCCS Hospital between January 2010 and December 2014, including adult patients with at least 2 Na values available and a normonatremic condition at hospital admission. Patients were categorized according to all Na values recorded during the hospital stay into the following groups: normonatremia, hyponatremia, hypernatremia, and mixed dysnatremia. The difference between the highest or the lowest Na value reached during the hospital stay and the Na value recorded at hospital admission was used to identify the maximum Na fluctuation. Cox proportional hazards models were used to estimate hazard ratios (HRs) for in-hospital death in the groups with dysnatremias and across quartiles of Na fluctuation. Covariates assessed were age, sex, highest and lowest Na level, Charlson/Deyo score, cardiovascular diseases, cerebrovascular diseases, dementia, congestive heart failure, severe kidney disease, estimated glomerular filtration rate, and number of Na measurements during the hospital stay. Results: 46,634 admissions matched the inclusion criteria. Incident dysnatremia was independently associated with in-hospital mortality (hyponatremia: HR 3.11, 95% CI 2.53, 3.84, p < 0.001; hypernatremia: HR 5.12, 95% CI 3.94, 6.65, p < 0.001; mixed dysnatremia: HR 4.94, 95% CI 3.08, 7.92, p < 0.001). We found a higher risk of in-hospital death with each increasing quartile of Na fluctuation (p for trend < 0.001), irrespective of the severity of dysnatremia (HR 2.34, 95% CI 1.55, 3.54, p < 0.001, for the highest quartile of Na fluctuation compared with the lowest). Conclusions: Incident dysnatremia is associated with higher hospital mortality. Fluctuation of Na during the hospital stay is a prognostic marker for hospital death independent of dysnatremia severity.
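The maximum-fluctuation measure described in the Methods (the largest absolute difference between any in-stay Na value and the admission value) reduces to a one-line computation. A minimal sketch with hypothetical sodium values in mmol/L:

```python
def max_na_fluctuation(admission_na, stay_values):
    """Largest absolute deviation of any in-stay sodium value from admission."""
    return max(abs(v - admission_na) for v in stay_values)

# hypothetical admission Na of 140 mmol/L and four in-stay measurements
print(max_na_fluctuation(140, [142, 138, 131, 144]))  # -> 9
```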


2021 ◽  
Author(s):  
Je Hun Song ◽  
Hyuk Huh ◽  
Eunjin Bae ◽  
Jeonghwan Lee ◽  
Jung Pyo Lee ◽  
...  

Abstract Background: Hyperhomocysteinemia (HHcy) is considered a risk factor for cardiovascular disease (CVD), including chronic kidney disease (CKD). In this study, we investigated the association between serum homocysteine (Hcy) level and mortality according to the presence of CKD. Methods: Our study included data from 9,895 participants in the 1996-2016 National Health and Nutrition Examination Surveys (NHANES). Linked mortality data were included, and participants were classified into four groups according to Hcy level. Multivariable-adjusted Cox proportional hazards models using propensity scores were used to examine dose-response associations between Hcy level and mortality. Results: Of the 9,895 participants, 1,032 (21.2%) were diagnosed with CKD. In a multivariate Cox regression analysis including all participants, Hcy level was associated with all-cause mortality compared with the 1st quartile in Model 3 (2nd quartile: hazard ratio (HR) 1.751, 95% confidence interval (CI) 1.348-2.274, p < 0.001; 3rd quartile: HR 2.220, 95% CI 1.726-2.855, p < 0.001; 4th quartile: HR 3.776, 95% CI 2.952-4.830, p < 0.001). In the non-CKD group, there was a significant association with all-cause mortality; however, this finding was not observed in the CKD group. The observed pattern was similar after propensity score matching. In the non-CKD group, overall mortality increased in proportion to Hcy concentration (2nd quartile: HR 2.195, 95% CI 1.299-3.709, p = 0.003; 3rd quartile: HR 2.607, 95% CI 1.570-4.332, p < 0.001; 4th quartile: HR 3.720, 95% CI 2.254-6.139, p < 0.001). However, the risk of all-cause mortality by quartile of Hcy level did not increase in the CKD group. Conclusion: This study found a correlation between Hcy level and mortality rate only in the non-CKD group. This altered risk-factor pattern may be attributed to the protein-energy wasting or chronic inflammation status that accompanies CKD.
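Grouping participants by Hcy quartile, as done in this and several of the other studies above, amounts to splitting the ranked values at the 25th, 50th, and 75th percentiles. A minimal sketch using the nearest-rank percentile convention (one of several valid conventions) and hypothetical Hcy values in umol/L:

```python
def quartile_cutpoints(values):
    """Interior quartile cutpoints (Q1, median, Q3) via the nearest-rank method."""
    s = sorted(values)
    n = len(s)
    return [s[(n * k) // 4 - 1] for k in (1, 2, 3)]

def quartile(value, cutpoints):
    """Return quartile group 1-4 for a value given the three interior cutpoints."""
    return 1 + sum(value > c for c in cutpoints)

hcy = [5.2, 6.1, 7.0, 7.8, 8.5, 9.3, 10.4, 12.9]  # hypothetical umol/L
cuts = quartile_cutpoints(hcy)   # [6.1, 7.8, 9.3]
print(quartile(11.0, cuts))      # -> 4 (highest-exposure group)
```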


Circulation ◽  
2016 ◽  
Vol 133 (suppl_1) ◽  
Author(s):  
Zach Conrad ◽  
Colin Rehm ◽  
Dariush Mozaffarian

Introduction: The Supplemental Nutrition Assistance Program (SNAP) is the largest food assistance program for low-income Americans. Investigating mortality in this population is crucial to determining what further efforts are needed to reduce health disparities. The National Center for Health Statistics (NCHS) does not provide mortality data by SNAP participation status, so diet-related mortality according to SNAP eligibility and participation is not well established. Objective: To examine cardiometabolic mortality among SNAP participants, SNAP-eligible non-participants, and the SNAP-ineligible population. Methods: We used data from the National Health Interview Survey for 499,741 US adults aged ≥ 25 y from 2000-2009 to assess SNAP eligibility and participation. These data were merged with the NCHS Linked Mortality file (2000-2009) to create a nationally representative cohort. Participants were followed until death or through Dec 31, 2011. Survey-weighted Cox proportional hazards models were used to estimate hazard ratios of cause-specific mortality by SNAP eligibility and participation. Results: Over a mean of 6.8 y of follow-up (maximum 11.9 y), we observed 7,408 CHD deaths, 2,185 stroke deaths, and 1,376 diabetes deaths. For all outcomes, and in particular diabetes, SNAP participants had the highest risk, followed by SNAP-eligible non-participants and then SNAP-ineligible individuals (Figure, panel A). Considerable differences in cause-specific mortality risk were observed across race/ethnicities among SNAP participants, SNAP-eligible non-participants, and the SNAP-ineligible population (Figure, panel B). Conclusion: Major health disparities exist between SNAP participants, SNAP-eligible non-participants, and SNAP-ineligible Americans, as well as by race/ethnicity. Ways to improve the health outcomes of SNAP participants, including potential revisions to SNAP programming, are urgently needed to reduce these inequities.


2017 ◽  
Vol 27 (2) ◽  
pp. 77 ◽  
Author(s):  
Eric A. Miller ◽  
Frances A. McCarty ◽  
Jennifer D. Parker

Objectives: Differences in the availability of a Social Security Number (SSN) by race/ethnicity could affect the ability to link with death certificate data in passive follow-up studies and possibly bias mortality disparities reported with linked data. Using 1989-2009 National Health Interview Survey (NHIS) data linked with the National Death Index (NDI) through 2011, we compared the availability of a SSN by race/ethnicity, estimated the percent of links likely missed due to lack of SSNs, and assessed whether these estimated missed links affect race/ethnicity disparities reported in the NHIS-linked mortality data. Methods: We used preventive fraction methods based on race/ethnicity-specific Cox proportional hazards models of the relationship between availability of SSN and mortality based on observed links, adjusted for survey year, sex, age, respondent-rated health, education, and US nativity. Results: Availability of a SSN and observed percent linked were significantly lower for Hispanic and Asian/Pacific Islander (PI) participants compared with White non-Hispanic participants. We estimated that more than 18% of expected links were missed due to lack of SSNs among Hispanic and Asian/PI participants, compared with about 10% among White non-Hispanic participants. However, correcting the observed links for expected missed links appeared to have only a modest impact on mortality disparities by race/ethnicity. Conclusions: Researchers conducting analyses of mortality disparities using the NDI or other linked death records need to be cognizant of the potential for differential linkage to contribute to their results. Ethn Dis. 2017;27(2):77-84; doi:10.18865/ed.27.2.77


2017 ◽  
Vol 35 (6_suppl) ◽  
pp. 67-67
Author(s):  
Daphna Spiegel ◽  
Julian C. Hong ◽  
W. Robert Lee ◽  
Joseph Kamel Salama

67 Background: Combined androgen deprivation therapy (ADT) and radiation therapy (RT) is a frequently used localized prostate cancer (PC) treatment. Testosterone recovery (TR) after combined ADT-RT is not well characterized. We studied TR in men who received RT and either short-term (ST) ADT or long-term (LT) ADT with LHRH agonists. Methods: We identified consecutive localized PC patients treated with ADT-RT at the Durham VA Medical Center (DVAMC) from 1/2011-10/2016. All patients had a documented baseline testosterone (T) level. Individual patient records were reviewed. TR was defined as the time from the last ADT injection to T normalization (> 240 ng/dL). The Kaplan-Meier method was used to estimate time to TR. Cox proportional hazards models were generated to identify TR predictors, with a nomogram built from a parsimonious multivariate model. Results: 252 patients were identified. Median follow-up was 26.7 months. Median age was 65. Prior to treatment, 69% had a normal baseline T. 67% were treated with STADT (median duration 6 months) and 33% with LTADT (median duration 18 months). Median time to TR was 22.6 months for all patients (19.5 months for STADT and 25.6 months for LTADT). At 1 and 2 years post-ADT, estimated TR was 13% and 53% (17% and 57% for STADT; 3% and 42% for LTADT). 2-year biochemical control was 99.2% and 97.6% for STADT and LTADT, respectively, and 98.9% and 98.6% for those with and without TR, respectively. On multivariate analysis, higher pre-treatment T (HR = 1.004, 95% CI 1.003-1.006, p < 0.001), use of STADT (HR = 2.48, 95% CI 1.45-4.25, p = 0.001), and lower BMI (HR = 0.95, 95% CI 0.91-0.98, p = 0.001) were associated with shorter time to TR. White race was a negative TR predictor (HR = 0.65, 95% CI 0.43-0.9992, p = 0.049). Age, smoking, and Charlson Comorbidity Index were not significant independent TR predictors. A nomogram was generated to predict the probability of TR at 1, 2, and 3 years. Conclusions: In this VA population of localized PC patients treated from 2011-2016, TR following ADT-RT was variable. Using pre-treatment T level, ADT duration, BMI, and race, a predictive nomogram can estimate the likelihood of TR.
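The Kaplan-Meier method used above to estimate time to TR multiplies, at each observed event time, the conditional probability of remaining event-free. A minimal sketch with hypothetical (months, event) pairs, where event = 1 marks observed recovery and 0 marks censoring:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier estimator.

    times  : observed follow-up times
    events : 1 if the event (e.g., testosterone recovery) occurred, 0 if censored
    Returns a list of (time, S(t)) steps at each event time.
    """
    data = sorted(zip(times, events))
    at_risk = len(data)
    s, curve, i = 1.0, [], 0
    while i < len(data):
        t = data[i][0]
        n_events = n_tied = 0
        while i < len(data) and data[i][0] == t:  # group tied observation times
            n_events += data[i][1]
            n_tied += 1
            i += 1
        if n_events:
            s *= 1 - n_events / at_risk  # conditional survival at this event time
            curve.append((t, s))
        at_risk -= n_tied
    return curve

# hypothetical months from last ADT injection (1 = recovery observed, 0 = censored)
print(kaplan_meier([6, 10, 10, 14, 20], [1, 1, 0, 1, 0]))  # steps at months 6, 10, 14
```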


2016 ◽  
Vol 184 (9) ◽  
pp. 621-632 ◽  
Author(s):  
Kelly R. Evenson ◽  
Fang Wen ◽  
Amy H. Herring

Abstract The US physical activity (PA) recommendations were based primarily on studies that used self-reported data. Studies that include accelerometer-assessed PA and sedentary behavior can contribute to these recommendations. In the present study, we explored the associations of PA and sedentary behavior with all-cause and cardiovascular disease (CVD) mortality in a nationally representative sample. Among the 2003-2006 National Health and Nutrition Examination Survey cohort, 3,809 adults 40 years of age or older wore an accelerometer for 1 week and self-reported their PA levels. Mortality data were verified through 2011, with an average of 6.7 years of follow-up. We used Cox proportional hazards models to obtain adjusted hazard ratios and 95% confidence intervals. After excluding the first 2 years, there were 337 deaths (32%, or 107, of which were attributable to CVD). Having higher accelerometer-assessed average counts per minute was associated with lower all-cause mortality risk: compared with the first quartile, the adjusted hazard ratio was 0.37 (95% confidence interval: 0.23, 0.59) for the fourth quartile, 0.39 (95% confidence interval: 0.27, 0.57) for the third quartile, and 0.60 (95% confidence interval: 0.45, 0.80) for the second quartile. Results were similar for CVD mortality. Lower all-cause and CVD mortality risks were also generally observed for persons with higher accelerometer-assessed moderate and moderate-to-vigorous PA levels and for self-reported moderate-to-vigorous leisure, household, and total activities, as well as for meeting the PA recommendations. Accelerometer-assessed sedentary behavior was generally not associated with all-cause or CVD mortality in fully adjusted models. These findings support the national PA recommendations to reduce mortality.


JAMIA Open ◽  
2020 ◽  
Author(s):  
Spiros Denaxas ◽  
Anoop D Shah ◽  
Bilal A Mateen ◽  
Valerie Kuan ◽  
Jennifer K Quint ◽  
...  

Abstract Objectives The UK Biobank (UKB) is making primary care electronic health records (EHRs) for 500 000 participants available for COVID-19-related research. Data are extracted from four sources, recorded using five clinical terminologies, and stored in different schemas. The aims of our research were to: (a) develop a semi-supervised approach for bootstrapping EHR phenotyping algorithms in UKB EHR, and (b) evaluate our approach by implementing and evaluating phenotypes for 31 common biomarkers. Materials and Methods We describe an algorithmic approach to phenotyping biomarkers in primary care EHR involving (a) bootstrapping definitions using existing phenotypes, (b) excluding generic, rare, or semantically distant terms, (c) forward-mapping terminology terms, (d) expert review, and (e) data extraction. We evaluated the phenotypes by assessing their ability to reproduce known epidemiological associations with all-cause mortality using Cox proportional hazards models. Results We created and evaluated phenotyping algorithms for 31 biomarkers, many of which are directly related to COVID-19 complications, for example diabetes, cardiovascular disease, and respiratory disease. Our algorithm identified 1651 Read v2 and Clinical Terms Version 3 terms and automatically excluded 1228 terms. Clinical review excluded 103 terms and included 44 terms, resulting in 364 terms for data extraction (sensitivity 0.89, specificity 0.92). We extracted 38 190 682 events and identified 220 978 participants with at least one biomarker measured. Discussion and Conclusion Bootstrapping phenotyping algorithms from similar EHR can potentially address pre-existing methodological concerns that undermine the outputs of biomarker discovery pipelines and provide research-quality phenotyping algorithms.
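Treating the clinical review as the reference standard, the sensitivity and specificity quoted for the term-selection algorithm reduce to the usual 2x2 confusion-matrix calculation. The counts below are hypothetical placeholders, not the paper's actual tabulation:

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# hypothetical term counts against an expert-review reference standard:
# tp = terms the algorithm kept that experts kept
# fn = terms the algorithm dropped that experts added back
# tn = terms the algorithm dropped that experts agreed to drop
# fp = terms the algorithm kept that experts removed
sens, spec = sens_spec(tp=320, fn=44, tn=1125, fp=103)
print(round(sens, 2), round(spec, 2))  # -> 0.88 0.92
```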


Cancers ◽  
2021 ◽  
Vol 13 (5) ◽  
pp. 1177
Author(s):  
In Young Choi ◽  
Sohyun Chun ◽  
Dong Wook Shin ◽  
Kyungdo Han ◽  
Keun Hye Jeon ◽  
...  

Objective: To our knowledge, no studies have yet looked at how the risk of developing breast cancer (BC) varies with changes in metabolic syndrome (MetS) status. This study aimed to investigate the association between changes in MetS and subsequent BC occurrence. Research Design and Methods: We enrolled 930,055 postmenopausal women aged 40–74 years who participated in a biennial National Health Screening Program in 2009–2010 and 2011–2012. Participants were categorized into four groups according to change in MetS status during the two-year interval between screenings: sustained non-MetS, transition to MetS, transition to non-MetS, and sustained MetS. We calculated multivariable-adjusted hazard ratios (aHRs) and 95% confidence intervals (CIs) for BC incidence using Cox proportional hazards models. Results: At baseline, MetS was associated with a significantly increased risk of BC (aHR 1.11, 95% CI 1.06–1.17), as were all of its components. The risk of BC increased as the number of components increased (aHR 1.46, 95% CI 1.26–1.61 for women with all five components). Compared to the sustained non-MetS group, the aHR (95% CI) for BC was 1.11 (1.04–1.19) in the transition to MetS group, 1.05 (0.96–1.14) in the transition to non-MetS group, and 1.18 (1.12–1.25) in the sustained MetS group. Conclusions: Significantly increased BC risk was observed in the sustained MetS and transition to MetS groups. These findings are clinically meaningful in that efforts to recover from MetS may lead to a reduced risk of BC.


Author(s):  
Laurie Grieshober ◽  
Stefan Graw ◽  
Matt J. Barnett ◽  
Gary E. Goodman ◽  
Chu Chen ◽  
...  

Abstract Purpose The neutrophil-to-lymphocyte ratio (NLR) is a marker of systemic inflammation that has been reported to be associated with survival after chronic disease diagnoses, including lung cancer. We hypothesized that the inflammatory profile reflected by pre-diagnosis NLR, rather than the well-studied pre-treatment NLR at diagnosis, may be associated with increased mortality after lung cancer is diagnosed in high-risk heavy smokers. Methods We examined associations between pre-diagnosis methylation-derived NLR (mdNLR) and lung cancer-specific and all-cause mortality in 279 non-small cell lung cancer (NSCLC) and 81 small cell lung cancer (SCLC) cases from the β-Carotene and Retinol Efficacy Trial (CARET). Cox proportional hazards models were adjusted for age, sex, smoking status, pack years, and time between blood draw and diagnosis, and stratified by stage of disease. Models were run separately by histotype. Results Among SCLC cases, those with pre-diagnosis mdNLR in the highest quartile had 2.5-fold increased mortality compared to those in the lowest quartile. For each unit increase in pre-diagnosis mdNLR, we observed 22–23% increased mortality (SCLC-specific hazard ratio [HR] = 1.23, 95% confidence interval [CI]: 1.02, 1.48; all-cause HR = 1.22, 95% CI 1.01, 1.46). SCLC associations were strongest for current smokers at blood draw (interaction Ps = 0.03). Increasing mdNLR was not associated with mortality among NSCLC overall, nor within adenocarcinoma (N = 148) or squamous cell carcinoma (N = 115) case groups. Conclusion Our findings suggest that increased mdNLR, representing a systemic inflammatory profile on average 4.5 years before a SCLC diagnosis, may be associated with mortality in heavy smokers who go on to develop SCLC but not NSCLC.
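A per-unit hazard ratio such as the mdNLR HR of 1.23 above scales multiplicatively on the log scale: in a Cox model the HR for a k-unit increase in a continuous covariate is the per-unit HR raised to the power k. A quick sketch (the 3-unit contrast is illustrative, not one reported in the study):

```python
import math

def scale_hr(hr_per_unit, k):
    """Hazard ratio for a k-unit covariate increase, given a per-unit Cox HR."""
    return math.exp(k * math.log(hr_per_unit))

# per-unit SCLC-specific HR of 1.23 from the abstract, scaled to a 3-unit increase
print(round(scale_hr(1.23, 3), 2))  # -> 1.86
```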


2020 ◽  
pp. 073346482096720
Author(s):  
Woojung Lee ◽  
Shelly L. Gray ◽  
Douglas Barthold ◽  
Donovan T. Maust ◽  
Zachary A. Marcum

Informant reports can be useful in screening patients for future dementia risk. We aimed to determine whether informant-reported sleep disturbance is associated with incident dementia, whether this association varies by baseline cognitive level, and whether the severity of informant-reported sleep disturbance is associated with incident dementia among those with sleep disturbance. A longitudinal retrospective cohort study was conducted using the Uniform Data Set collected by the National Alzheimer’s Coordinating Center. Older adults without dementia at baseline who lived with informants were included in the analysis. Cox proportional hazards models showed that participants with an informant-reported sleep disturbance were more likely to develop dementia, although this association may be specific to older adults with normal cognition. In addition, older adults with more severe sleep disturbance had a higher risk of incident dementia than those with mild sleep disturbance. Informant-reported information on sleep quality may be useful for prompting cognitive screening.

