Dose–Response Between Serum Prealbumin and All-Cause Mortality After Hepatectomy in Patients With Hepatocellular Carcinoma

2021 ◽  
Vol 10 ◽  
Author(s):  
Rong-Rui Huo ◽  
Hao-Tian Liu ◽  
Zhu-Jian Deng ◽  
Xiu-Mei Liang ◽  
Wen-Feng Gong ◽  
...  

Background: The relationship between serum prealbumin and the risk of all-cause mortality after hepatectomy in patients with hepatocellular carcinoma (HCC) needs to be evaluated. Methods: We conducted a retrospective study. A Cox proportional hazards regression model was used to adjust for potential confounders. Prealbumin levels were transformed to Z-scores and categorized into quartiles (Q1: <147 mg/L, Q2: 147–194 mg/L, Q3: 194–239 mg/L, Q4: >239 mg/L). We assessed the dose-response relationship between serum prealbumin and the risk of all-cause mortality using a restricted cubic spline model. Results: Data were included from 2,022 HCC patients who underwent hepatectomy at Guangxi Medical University Cancer Hospital in China between January 2006 and January 2016. The adjusted hazard ratios (HRs) for increasing quartiles of serum prealbumin were 0.78 [95% confidence interval (CI): 0.64–0.95] for Q2, 0.66 (0.53–0.81) for Q3, and 0.51 (0.41–0.64) for Q4 in the Cox model (all P < 0.001). Serum prealbumin showed an L-shaped, non-linear dose-response relationship with the risk of all-cause mortality (P < 0.001). Among patients whose serum prealbumin was below 250 mg/L, the risk of all-cause mortality decreased by 27% (95% CI: 18–36%) per one-standard-deviation (69.8 mg/L) increase in serum prealbumin. Conclusions: Serum prealbumin levels under 250 mg/L may be considered dangerous with respect to all-cause mortality after hepatectomy in HCC patients. Serum prealbumin may be useful as a prognostic marker in HCC patients undergoing hepatectomy.
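The quartile-based Cox model and the spline dose-response analysis described above can be sketched with standard Python survival tooling. This is a minimal illustration rather than the authors' code: the input file and column names (prealbumin, time_months, death, age, sex) are assumptions, and patsy's natural cubic spline basis stands in for the restricted cubic spline.

import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("hcc_cohort.csv")  # hypothetical input; the study data are not public

# Quartile categories at the cut points reported above (mg/L)
df["prealb_q"] = pd.cut(df["prealbumin"],
                        bins=[-float("inf"), 147, 194, 239, float("inf")],
                        labels=["Q1", "Q2", "Q3", "Q4"])

# Cox model with Q1 as reference; the covariates here (age, sex) are placeholders
# for the study's actual confounders and are assumed to be numeric
X = pd.get_dummies(df[["time_months", "death", "age", "sex", "prealb_q"]],
                   columns=["prealb_q"], drop_first=True)
cph = CoxPHFitter()
cph.fit(X, duration_col="time_months", event_col="death")
cph.print_summary()  # hazard ratios for Q2-Q4 versus Q1

# Dose-response curve: a natural cubic spline basis (patsy's cr() is a close
# stand-in for a restricted cubic spline) can replace the quartile dummies
from patsy import dmatrix
spline_basis = dmatrix("cr(prealbumin, df=4) - 1", df, return_type="dataframe")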

2021 ◽  
Vol 36 (Supplement_1) ◽  
Author(s):  
Yusaku Hashimoto ◽  
Takahiro Imaizumi ◽  
Akihiro Hori ◽  
Sawako Kato ◽  
Yoshinari Yasuda ◽  
...  

Abstract Background and Aims: Drinking habits are one of the most important modifiable lifestyle factors in preventing the development of chronic kidney disease (CKD). Previous studies showed that alcohol consumption was inversely associated with the risk of developing CKD, but the dose-response relationship between alcohol consumption and the development of CKD is still controversial. In the present study, we aimed to examine whether the amount of alcohol consumed on one occasion is associated with new-onset CKD in the general population. Method: Study subjects were 11,162 Japanese adults aged 45 to 74 years, with an estimated glomerular filtration rate ≥60 mL/min/1.73 m2, no proteinuria, and no history of cardiovascular disease, COPD, or liver disease. Drinking status was obtained from self-administered questionnaires. We categorized the study subjects into four groups based on the amount of alcohol consumed per occasion: <20 g of ethanol equivalent (lowest); 20-40 g (low intermediate); 40-60 g (high intermediate); and >60 g (highest). Non-drinkers served as the reference category. The primary outcome was incident CKD, defined as a 25% reduction in eGFR to less than 60 mL/min/1.73 m2 and/or a dipstick urinalysis score of 1+ or greater (equivalent to ≥30 mg/dL) during the follow-up period. We employed Cox proportional hazards regression models to examine the dose-response relationship between baseline alcohol consumption and the risk of CKD. Trend tests were performed using Cox proportional hazards regression models that treated alcohol consumption as a continuous linear term. Results: The lowest and low intermediate groups were significantly associated with a decreased risk of CKD (hazard ratio [HR] 0.84; 95% confidence interval [CI] 0.71–0.99; and HR 0.79; 95% CI 0.66–0.96, respectively) compared with non-drinkers. The high intermediate group was associated with a decreased risk of CKD (HR 0.92; 95% CI 0.70–1.21) and the highest group with an increased risk of CKD (HR 1.28; 95% CI 0.84–1.95), but these associations did not reach statistical significance. There was no dose-response relationship between baseline alcohol consumption and risk of CKD (P-trend = 0.30). Conclusion: A J-shaped association was observed between self-reported alcohol intake and the incidence of CKD. Moderate alcohol consumption on one occasion may help reduce the risk of CKD.
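A minimal sketch of this analysis structure, assuming hypothetical column names (alcohol_g_per_occasion, time_years, ckd_event) and using the lifelines package; it illustrates the category-based Cox model with non-drinkers as the reference and the linear trend test, and is not the study's code.

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("ckd_cohort.csv")  # hypothetical input

# Drinking categories per occasion; 0 g (non-drinkers) is kept as the reference level
bins = [-np.inf, 0, 20, 40, 60, np.inf]
labels = ["non-drinker", "<20g", "20-40g", "40-60g", ">60g"]
df["drink_cat"] = pd.cut(df["alcohol_g_per_occasion"], bins=bins, labels=labels)

X = pd.get_dummies(df[["time_years", "ckd_event", "drink_cat"]],
                   columns=["drink_cat"], drop_first=True)  # drops non-drinker (reference)
CoxPHFitter().fit(X, duration_col="time_years", event_col="ckd_event").print_summary()

# Trend test: alcohol amount entered as a continuous linear term
trend = CoxPHFitter().fit(df[["time_years", "ckd_event", "alcohol_g_per_occasion"]],
                          duration_col="time_years", event_col="ckd_event")
trend.print_summary()  # the P value for the linear term corresponds to the P-trend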


Hypertension ◽  
2020 ◽  
Vol 76 (Suppl_1) ◽  
Author(s):  
Anwar Alnakhli ◽  
Richard Shaw ◽  
Daniel Smith ◽  
Sandosh Padmanabhan

Background: Recent theory suggests that antihypertensive medications may be useful as repurposed treatments for mood disorders; however, empirical evidence is inconsistent. Objective: We aimed to assess the risk of incident depression, as indicated by a first-ever antidepressant prescription, in patients newly exposed to antihypertensive monotherapy, and whether there is a dose-response relationship. Method: This study enrolled 2,406 new users of antihypertensive monotherapy aged between 18 and 80 years with no previous history of antidepressant prescriptions. The exposure period (EP) to antihypertensive medication was fixed at one year, starting from the date of the first antihypertensive prescription between Jan 2005 and Mar 2012. Follow-up commenced after the EP and continued until March 2013. To test for a dose-response relationship, the cumulative defined daily dose (cDDD) of antihypertensive medication during the EP was stratified into tertiles. Cox proportional hazards models were used to estimate hazard ratios (HRs) for depression incidence. Results: Among the five major classes of antihypertensive medications, calcium channel blockers (CCBs) carried the highest risk of incident depression after adjusting for covariates (HR 1.40, 95% CI 1.11–1.78) compared with angiotensin-converting enzyme inhibitors (ACEIs). Angiotensin-receptor blocker (ARB) treatment showed a higher risk of incident depression in tertile 2 (HR 1.46, 95% CI 0.88–2.44) and tertile 3 (HR 1.75, 95% CI 1.03–2.97) of cDDD compared with tertile 1. Conclusion: Our findings support previous evidence that CCBs are associated with an increased risk of incident depression compared with ACEIs. Risk of developing depression is also linked to ARBs, though the association may be dose dependent.
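A rough, hedged sketch of how cDDD tertiles could feed a Cox model for the incident antidepressant outcome. The file layout, column names, and the DDD computation shown here are assumptions for illustration, not the study's actual data structure.

import pandas as pd
from lifelines import CoxPHFitter

rx = pd.read_csv("prescriptions.csv")   # hypothetical: one row per dispensing during the EP
base = pd.read_csv("patients.csv")      # hypothetical: follow-up time and outcome per patient

# Cumulative defined daily dose (cDDD) over the exposure period:
# dispensed amount divided by the WHO-defined daily dose, summed per patient
rx["ddd"] = rx["quantity"] * rx["strength_mg"] / rx["who_ddd_mg"]
cddd = rx.groupby("patient_id")["ddd"].sum().rename("cddd").reset_index()
df = base.merge(cddd, on="patient_id")

# Tertiles of cDDD, with tertile 1 as the reference
df["cddd_tertile"] = pd.qcut(df["cddd"], q=3, labels=["T1", "T2", "T3"])
X = pd.get_dummies(df[["followup_years", "antidepressant_rx", "cddd_tertile"]],
                   columns=["cddd_tertile"], drop_first=True)
CoxPHFitter().fit(X, duration_col="followup_years",
                  event_col="antidepressant_rx").print_summary()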


2001 ◽  
Vol 51 (3) ◽  
pp. 257-258 ◽  
Author(s):  
H.C. Park ◽  
J. Seong ◽  
K.H. Han ◽  
C.Y. Chon ◽  
Y.M. Moon ◽  
...  

2018 ◽  
Vol 33 (10) ◽  
pp. 1823-1831 ◽  
Author(s):  
Mengjing Wang ◽  
Yoshitsugu Obi ◽  
Elani Streja ◽  
Connie M Rhee ◽  
Jing Chen ◽  
...  

Abstract Background: Both dialysis dose and residual kidney function (RKF) contribute to solute clearance and are associated with outcomes in hemodialysis patients. We hypothesized that the association between dialysis dose and mortality is attenuated with greater RKF. Methods: Among 32,251 incident hemodialysis patients in a large US dialysis organization (2007–2011), we examined the interaction between single-pool Kt/V (spKt/V) and renal urea clearance (rCLurea) levels in survival analyses using a multivariable Cox proportional hazards regression model. Results: The median rCLurea and mean baseline spKt/V were 3.06 [interquartile range (IQR) 1.74–4.85] mL/min/1.73 m2 and 1.32 ± 0.28, respectively. A total of 7,444 (23%) patients died during the median follow-up of 1.2 years (IQR 0.5–2.2 years), with an incidence of 15.4 deaths per 100 patient-years. The Cox model adjusted for case-mix and laboratory variables showed that rCLurea modified the association between spKt/V and mortality (Pinteraction = 0.03); lower spKt/V was associated with higher mortality among patients with low rCLurea (i.e., <3 mL/min/1.73 m2) but not among those with higher rCLurea. The adjusted mortality hazard ratios (aHRs) and 95% confidence intervals for low (<1.2) versus high (≥1.2) spKt/V were 1.40 (1.12–1.74), 1.21 (1.10–1.33), 1.06 (0.98–1.14), and 1.00 (0.93–1.08) for patients with rCLurea of 0.0, 1.0, 3.0, and 6.0 mL/min/1.73 m2, respectively. Conclusions: Incident hemodialysis patients with substantial RKF do not exhibit the expected survival benefit at higher hemodialysis doses. RKF levels should be taken into account when deciding on the dose of dialysis treatment among incident hemodialysis patients.
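The effect-modification analysis can be illustrated with a product term in a Cox model. This is a hedged sketch using lifelines, not the study's code; the column names (spktv, rclurea, time_years, death) are assumptions, and the case-mix and laboratory adjustment covariates are omitted for brevity.

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("hd_cohort.csv")  # hypothetical input
df["low_spktv"] = (df["spktv"] < 1.2).astype(int)        # low vs high dialysis dose
df["dose_x_rclurea"] = df["low_spktv"] * df["rclurea"]   # product (interaction) term

cph = CoxPHFitter()
cph.fit(df[["time_years", "death", "low_spktv", "rclurea", "dose_x_rclurea"]],
        duration_col="time_years", event_col="death")
cph.print_summary()  # the interaction coefficient tests whether rCLurea modifies the dose effect

# HR of low vs high spKt/V evaluated at a given rCLurea (e.g. 3 mL/min/1.73 m2)
b = cph.params_
print(np.exp(b["low_spktv"] + 3 * b["dose_x_rclurea"]))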


2019 ◽  
Vol 188 (7) ◽  
pp. 1371-1382 ◽  
Author(s):  
Henry T Zhang ◽  
Leah J McGrath ◽  
Alan R Ellis ◽  
Richard Wyss ◽  
Jennifer L Lund ◽  
...  

Abstract Nonexperimental studies of the effectiveness of seasonal influenza vaccine in older adults have found 40%–60% reductions in all-cause mortality associated with vaccination, potentially due to confounding by frailty. We restricted our cohort to initiators of medications in preventive drug classes (statins, antiglaucoma drugs, and β blockers) as an approach to reducing confounding by frailty by excluding frail older adults who would not initiate use of these drugs. Using a random 20% sample of US Medicare beneficiaries, we framed our study as a series of nonrandomized “trials” comparing vaccinated beneficiaries with unvaccinated beneficiaries who had an outpatient health-care visit during the 5 influenza seasons occurring in 2010–2015. We pooled data across trials and used standardized-mortality-ratio–weighted Cox proportional hazards models to estimate the association between influenza vaccination and all-cause mortality before influenza season, expecting a null association. Weighted hazard ratios among preventive drug initiators were generally closer to the null than those in the nonrestricted cohort. Restriction of the study population to statin initiators with an uncensored approach resulted in a weighted hazard ratio of 1.00 (95% confidence interval: 0.84, 1.19), and several other hazard ratios were above 0.95. Restricting the cohort to initiators of medications in preventive drug classes can reduce confounding by frailty in this setting, but further work is required to determine the most appropriate criteria to use.
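A minimal sketch of standardized-mortality-ratio (SMR) weighting followed by a weighted Cox model, the general technique the study describes. The covariate list, column names, and input file are illustrative assumptions, not the authors' specification.

import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from lifelines import CoxPHFitter

df = pd.read_csv("pooled_trials.csv")          # hypothetical pooled "trials" data
covars = ["age", "sex", "comorbidity_score"]   # illustrative confounders only

# Propensity score for vaccination, then SMR weights:
# weight = 1 for vaccinated, PS/(1-PS) for unvaccinated
ps = LogisticRegression(max_iter=1000).fit(df[covars], df["vaccinated"]).predict_proba(df[covars])[:, 1]
df["smr_w"] = np.where(df["vaccinated"] == 1, 1.0, ps / (1.0 - ps))

cph = CoxPHFitter()
cph.fit(df[["time_days", "death", "vaccinated", "smr_w"]],
        duration_col="time_days", event_col="death",
        weights_col="smr_w", robust=True)       # robust variance with non-integer weights
cph.print_summary()  # weighted HR for vaccination vs no vaccination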


2015 ◽  
Vol 33 (3_suppl) ◽  
pp. 443-443
Author(s):  
Kerry Schaffer ◽  
Marcus Smith Noel ◽  
Aram F. Hezel ◽  
Alan W. Katz ◽  
Ashwani Sharma ◽  
...  

443 Background: Locoregional radioembolization with Yttrium-90 (Y-90) has become standard practice for patients with hepatocellular carcinoma (HCC), either as a bridge to transplant or for local disease control. Outcomes data in the United States are limited, and here we review our institutional experience with Y-90 radioembolization. Methods: We retrospectively reviewed charts from 70 patients with HCC who were treated with Y-90 from May 2010 to January 2014. Clinical variables including Child-Pugh class and CLIP score were extracted from patient records. The Cox proportional hazards model was used to determine prognostic factors, and Kaplan-Meier curves were used to determine PFS and OS. Results: Median age was 61 years (range 43-82); 79% of patients were Caucasian, 84% were male, and 79% were Child-Pugh class A. Median progression-free survival (PFS) was 8.4 months (95% CI 6-10.7) and overall survival (OS) was 14.2 months (95% CI 9.7-21). Overall survival differed significantly by Child-Pugh score (p=0.009), CLIP score (p=0.003), and presence of portal vein thrombosis (PVT) (p=0.0384), based on the log-rank test comparing Kaplan-Meier curves. In univariate Cox proportional hazards models, both elevated baseline AFP, measured on a log scale (HR 1.79, 95% CI 1.32-2.43, p=0.0002), and post-Y-90 treatment with sorafenib (HR 2.30, 95% CI 1.07-4.95, p=0.03) were associated with increased mortality. Elevated AFP (HR 2.45, 95% CI 1.73-3.47, p<0.0001) and Child-Pugh score of B (HR 4.83, 95% CI 2.23-10.43, p<0.0001) were associated with increased mortality in a multivariate Cox model adjusting for age and ethnicity. Furthermore, AFP values were significantly higher in the 10 patients who died within 4 months of Y-90 (p=0.001) and significantly lower in the 7 patients who eventually received a liver transplant (p=0.0002). Conclusions: In patients undergoing treatment with Y-90 radioembolization, Child-Pugh class, CLIP score, presence of PVT, baseline AFP, and sorafenib after Y-90 were significantly associated with overall survival. Median PFS and OS data in this institutional cohort are encouraging. Further prospective studies of Y-90 treatment for HCC are warranted.
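The Kaplan-Meier, log-rank, and Cox analyses described here follow a standard pattern that can be sketched as below. Column names (child_pugh, os_months, death, afp) and the input file are assumptions; this is not the institutional code.

import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import multivariate_logrank_test

df = pd.read_csv("y90_cohort.csv")  # hypothetical input

# Kaplan-Meier OS by Child-Pugh class, compared with the log-rank test
kmf = KaplanMeierFitter()
for grp, sub in df.groupby("child_pugh"):
    kmf.fit(sub["os_months"], sub["death"], label=str(grp))
    print(grp, kmf.median_survival_time_)
print(multivariate_logrank_test(df["os_months"], df["child_pugh"], df["death"]).p_value)

# Univariate Cox model for baseline AFP on the log scale
df["log_afp"] = np.log(df["afp"])
CoxPHFitter().fit(df[["os_months", "death", "log_afp"]],
                  duration_col="os_months", event_col="death").print_summary()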


Author(s):  
Hee Chul Park ◽  
Jinsil Seong ◽  
Kwang Hyub Han ◽  
Chae Yoon Chon ◽  
Young Myoung Moon ◽  
...  

Author(s):  
Jingyi Lu ◽  
Chunfang Wang ◽  
Jinghao Cai ◽  
Yun Shen ◽  
Lei Chen ◽  
...  

Abstract Context: The interaction of glycated hemoglobin A1c (HbA1c) and glycemic variability in relation to diabetes-related outcomes remains unknown. Objective: To evaluate the relationship between HbA1c and all-cause mortality across varying degrees of glycemic variability in patients with type 2 diabetes. Design, Setting, and Patients: This was a prospective study conducted in a single referral center. Data from 6,090 hospitalized patients with type 2 diabetes were analyzed. The glucose coefficient of variation (CV) was obtained as the measure of glycemic variability using 3 days of continuous glucose monitoring (CGM). Cox proportional hazards regression models were used to estimate hazard ratios and 95% CIs for all-cause mortality. Results: During a median follow-up of 6.8 years, 815 patients died. In patients in the lowest and middle tertiles of glucose CV, HbA1c ≥8.0% was associated with 136% (95% CI 1.46-3.81) and 92% (95% CI 1.22-3.03) higher risks of all-cause mortality, respectively, compared with HbA1c 6.0-6.9%, after adjusting for confounders. However, a null association of HbA1c with mortality was found in patients in the highest tertile of glucose CV. Conclusions: HbA1c may not be a robust marker of all-cause mortality in patients with a high degree of glycemic variability. New metrics of glycemic control may be needed in these individuals to achieve better diabetes management.
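A hedged sketch of computing the glucose CV from CGM readings and examining the HbA1c-mortality association within CV tertiles. Column names and the binary HbA1c ≥8.0% exposure are simplifying assumptions for illustration (the study used multiple HbA1c categories with 6.0-6.9% as the reference), so this is a sketch of the analysis shape, not the study's code.

import pandas as pd
from lifelines import CoxPHFitter

cgm = pd.read_csv("cgm_readings.csv")   # hypothetical: patient_id, glucose (3-day CGM)
base = pd.read_csv("patients.csv")      # hypothetical: HbA1c, follow-up time, vital status

# Glucose CV = SD / mean of the 3-day CGM profile, per patient
cv = (cgm.groupby("patient_id")["glucose"]
         .agg(lambda g: g.std() / g.mean())
         .rename("glucose_cv")
         .reset_index())
df = base.merge(cv, on="patient_id")
df["cv_tertile"] = pd.qcut(df["glucose_cv"], q=3, labels=["low", "middle", "high"])
df["hba1c_ge8"] = (df["hba1c"] >= 8.0).astype(int)  # simplified binary exposure

# Fit the HbA1c association separately within each CV tertile
for tertile, sub in df.groupby("cv_tertile"):
    cph = CoxPHFitter().fit(sub[["followup_years", "death", "hba1c_ge8"]],
                            duration_col="followup_years", event_col="death")
    print(tertile, cph.hazard_ratios_["hba1c_ge8"])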

