The Influence of Donor and Recipient Complement C3 Polymorphisms on Liver Transplant Outcome

2021, Vol 2021, pp. 1-14. Author(s): Maria Pires, James Underhill, Abdel Douiri, Alberto Quaglia, Wayel Jassem, et al.

Despite early reports of an impact of complement C3 polymorphism on liver transplant patient and graft survival, subsequent evidence has been conflicting. Our aim was to clarify the contributions of donor and recipient C3 genotype, separately and together, on patient and graft outcomes and acute rejection incidence in liver transplant recipients. Eight donor/recipient groups were analyzed according to their genotype and the presence or absence of the C3 F allele (FFFS, FFSS, FSFF, FSFS, FSSS, SSFF, SSFS, and SSSS) and correlated with clinical outcomes of patient survival, graft survival, and rejection. The further impact of brain death vs. circulatory death during liver donation was also considered. Over a median 5.3-year follow-up of 506 patients with clinical information and matching donor and recipient tissue, five-year patient and graft survival (95% confidence interval) were 90% (81-91) and 77% (73-85), respectively, and 72% (69-94) were rejection-free. Early disadvantages to patient survival were associated with the donor C3 F variant, especially in brain-death donors. Recipient C3 genotype was an independent determinant of graft survival by Cox proportional hazards analysis (hazard ratio 0.26, P = 0.04), and the donor C3 F variant was again associated with worse liver graft survival, particularly in brain-death donors. C3 genotype did not independently determine rejection incidence, but a greater proportion of recipient C3 F carriers were rejection-free in the circulatory-death, but not the brain-death, cohort. Cox proportional hazards analysis revealed significant effects of acute rejection on patient survival (hazard ratio 0.24, P = 0.018), of retransplantation on rejection risk (hazard ratio 6.3, P = 0.009), and of donor type (circulatory death vs. brain death) on rejection incidence (hazard ratio 4.9, P = 0.005). We conclude that both donor and recipient complement C3 genotype may influence patient and graft outcomes after liver transplantation but that the type of liver donor is additionally influential, possibly via the inflammatory environment of the transplant.
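As a rough illustration of the modelling approach described above, the sketch below fits a Cox proportional hazards model for graft survival with donor and recipient C3 F-allele carriage and donor type as covariates, using the Python lifelines library. The data file and all column names (transplants.csv, years_to_event, graft_loss, donor_c3_f, recipient_c3_f, dbd_donor) are hypothetical placeholders, not the study's actual variables.

```python
# Hypothetical sketch: Cox proportional hazards model for graft survival.
# Column names are illustrative placeholders, not the study's variables.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("transplants.csv")  # hypothetical file: one row per transplant

# Binary covariates: C3 F-allele carriage in donor and recipient,
# and donor type (1 = brain death, 0 = circulatory death).
cols = ["years_to_event", "graft_loss", "donor_c3_f", "recipient_c3_f", "dbd_donor"]

cph = CoxPHFitter()
cph.fit(df[cols],
        duration_col="years_to_event",  # follow-up time in years
        event_col="graft_loss")         # 1 = graft failure, 0 = censored
cph.print_summary()                     # hazard ratios, 95% CIs, p-values
```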

PLoS ONE, 2020, Vol 15 (12), pp. e0244744. Author(s): Paul J. Thuluvath, Waseem Amjad, Talan Zhang

Background and objectives: Hispanics are the fastest growing population in the USA, and our objective was to determine their waitlist mortality rates, liver transplantation (LT) rates, and post-LT outcomes. Methods: All adults listed for LT with the UNOS from 2002 to 2018 were included. Competing risk analysis was performed to assess the association of ethnic group with waitlist removal due to death/deterioration and with transplantation. For sensitivity analysis, Hispanics were matched 1:1 to non-Hispanics using propensity scores, and outcomes of interest were compared in the matched cohort. Results: During this period, a total of 154,818 patients listed for liver transplant were included in this study; of them, 23,223 (15%) were Hispanics, 109,653 (71%) were Whites, 13,020 (8%) were Blacks, 6,980 (5%) were Asians, and 1,942 (1%) were others. After adjusting for differences in clinical characteristics, compared to Whites, Hispanics had higher waitlist removal due to death or deterioration (adjusted cause-specific hazard ratio: 1.034, p = 0.01) and lower transplantation rates (adjusted cause-specific hazard ratio: 0.90, p<0.001). If Hispanics received a liver transplant, they had better patient and graft survival than non-Hispanics (p<0.001). Compared to Whites, the adjusted hazard ratios for Hispanics were 0.88 (95% CI 0.84, 0.92, p<0.001) for patient survival and 0.90 (95% CI 0.86, 0.94, p<0.001) for graft survival. Our analysis in the matched cohort showed consistent results. Conclusions: This study showed that Hispanics had a higher probability of being removed from the waitlist due to death and a lower probability of being transplanted; however, they had better post-LT outcomes than Whites.
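For readers unfamiliar with cause-specific competing-risk analysis, the sketch below shows one common way to implement it: fit a separate Cox model for each event type, treating the competing event as censoring. The file and column names (waitlist.csv, days_on_list, outcome, meld, age) and the covariate set are hypothetical placeholders, not the registry variables.

```python
# Hypothetical sketch: cause-specific hazards via separate Cox models,
# censoring the competing event. Names are illustrative placeholders.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("waitlist.csv")  # hypothetical: one row per listed patient
# outcome coding assumed: 0 = censored, 1 = waitlist death/deterioration, 2 = transplanted
df["hispanic"] = (df["ethnicity"] == "Hispanic").astype(int)

def cause_specific_cox(data, event_code):
    """Fit a Cox model for one event type; the competing event counts as censoring."""
    d = data.copy()
    d["event"] = (d["outcome"] == event_code).astype(int)
    cph = CoxPHFitter()
    cph.fit(d[["days_on_list", "event", "hispanic", "meld", "age"]],
            duration_col="days_on_list", event_col="event")
    return cph

cause_specific_cox(df, event_code=1).print_summary()  # death/deterioration
cause_specific_cox(df, event_code=2).print_summary()  # transplantation
```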


2021. Author(s): Anisa Nutu, Iago Justo, Alberto Marcacuzco, Óscar Caso, Alejandro Manrique, et al.

Abstract: Controversy exists regarding whether the rate of hepatocellular carcinoma (HCC) recurrence after orthotopic liver transplantation (OLT) differs when using livers from donation after controlled circulatory death (DCD) versus livers from donation after brain death (DBD). The aim of this cohort study was to analyze rates of HCC recurrence, patient survival, and graft survival after OLT for HCC, comparing recipients of DBD livers (n=103) with recipients of uncontrolled DCD livers (uDCD; n=41). No significant differences in tumor size, tumor number, serum alpha-fetoprotein, proportion of patients within Milan criteria, or pre-OLT bridging therapies were identified between groups, although the waitlist period was significantly shorter in the uDCD group (p=0.04). HCC recurrence was similar between groups. Patient survival was similar between groups, but graft survival was lower in the uDCD group. Multivariate analysis identified recipient age (p=0.03), pre-OLT bridging therapy (p=0.02), and HCC recurrence (p=0.04) as independent risk factors for patient survival and pre-OLT transarterial chemoembolization (p=0.04) as the single risk factor for HCC recurrence. In conclusion, similar patient survival and lower graft survival were observed in the uDCD group. However, the use of uDCD livers appears to be justified due to shorter waitlist time and similar HCC recurrence in both groups.


2021, pp. 000486742110096. Author(s): Oleguer Plana-Ripoll, Patsy Di Prinzio, John J McGrath, Preben B Mortensen, Vera A Morgan

Introduction: An association between schizophrenia and urbanicity has long been observed, with studies in many countries, including several from Denmark, reporting that individuals born or raised in densely populated urban settings have an increased risk of developing schizophrenia compared to those born or raised in rural settings. However, these findings have not been replicated in all studies. In particular, a Western Australian study showed a gradient in the opposite direction, which disappeared after adjustment for covariates. Given the different findings for Denmark and Western Australia, our aim was to investigate the relationship between schizophrenia and urbanicity in these two regions to determine which factors may be influencing the relationship. Methods: We used population-based cohorts of children born alive between 1980 and 2001 in Western Australia (N = 428,784) and Denmark (N = 1,357,874). Children were categorised according to the level of urbanicity of their mother's residence at time of birth and followed up through to 30 June 2015. Linkage to State-based registers provided information on schizophrenia diagnosis and a range of covariates. Rates of being diagnosed with schizophrenia for each category of urbanicity were estimated using Cox proportional hazards models adjusted for covariates. Results: During follow-up, 1,618 (0.4%) children in Western Australia and 11,875 (0.9%) children in Denmark were diagnosed with schizophrenia. In Western Australia, those born in the most remote areas did not experience lower rates of schizophrenia than those born in the most urban areas (hazard ratio = 1.02 [95% confidence interval: 0.81, 1.29]), unlike their Danish counterparts (hazard ratio = 0.62 [95% confidence interval: 0.58, 0.66]). However, when the Western Australian cohort was restricted to children whose Indigenous status was non-Aboriginal, results were consistent with the Danish findings (hazard ratio = 0.46 [95% confidence interval: 0.29, 0.72]). Discussion: Our study highlights the potential for disadvantaged subgroups to mask the contribution of urban-related risk factors to the risk of schizophrenia, and the importance of stratified analysis in such cases.
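A minimal sketch of this kind of model is given below: schizophrenia diagnosis as the event, urbanicity at birth as a dummy-coded categorical exposure, and a sensitivity re-fit on a restricted subgroup. The data frame, column names, and covariates are all hypothetical stand-ins for the registry variables.

```python
# Hypothetical sketch: Cox model with a categorical urbanicity exposure and a
# subgroup sensitivity analysis. All names are illustrative placeholders.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("birth_cohort.csv")  # hypothetical: one row per child

# Dummy-code urbanicity, dropping one category as the reference level.
dummies = pd.get_dummies(df["urbanicity"], prefix="urb", drop_first=True, dtype=int)
model_df = pd.concat(
    [df[["follow_up_years", "schizophrenia", "male", "birth_year"]], dummies], axis=1)

cph = CoxPHFitter()
cph.fit(model_df, duration_col="follow_up_years", event_col="schizophrenia")
cph.print_summary()

# Sensitivity analysis restricted to one subgroup, mirroring the restriction
# of the Western Australian cohort described above.
subset = model_df[df["indigenous_status"] == "non-Aboriginal"]
CoxPHFitter().fit(subset, duration_col="follow_up_years",
                  event_col="schizophrenia").print_summary()
```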


1998, Vol 47 (2), pp. 128-135. Author(s): Rafat S. Rizk, John P. McVicar, Mary J. Emond, Charles A. Rohrmann, Kris V. Kowdley, et al.

Circulation, 2020, Vol 142 (Suppl_3). Author(s): Samuel T Kim, Mark R Helmers, Peter Altshuler, Amit Iyengar, Jason Han, et al.

Introduction: Although guidelines for heart transplant currently recommend against donors weighing ≥ 30% less than the recipient, recent studies have shown that the detriment of under-sizing may not be as severe in obese recipients. Furthermore, predicted heart mass (PHM) has been shown to be more reliable for size matching than metrics such as weight and body surface area. In this study, we use PHM to characterize the effects of undersized heart transplantation (UHT) in obese vs. non-obese recipients. Methods: Retrospective analysis of the UNOS database was performed for heart transplants from Jan. 1995 to Sep. 2020. Recipients were stratified as obese (BMI ≥ 30) or non-obese (30 > BMI ≥ 18.5). Undersized donors were defined as having a PHM ≥ 20% less than the recipient PHM. The obese and non-obese populations separately underwent propensity score matching, and Kaplan-Meier estimates were used to graph survival. Multivariable Cox proportional-hazards analyses were used to adjust for confounders and estimate the hazard ratio for death attributable to under-sizing. Results: Overall, 50,722 heart transplants were included in the analysis. Propensity-score matching resulted in 2,214 and 1,011 well-matched pairs for the non-obese and obese populations, respectively. UHT in non-obese recipients resulted in similar 30-day mortality (5.7% vs. 6.3%, p = 0.38) but worse 15-year survival (38% vs. 35%, p = 0.04). In contrast, obese recipients with UHT saw similar 30-day mortality (6.4% vs. 5.5%, p = 0.45) and slightly increased 15-year survival (31% vs. 35%, p = 0.04). Multivariable Cox analysis showed that UHT resulted in an adjusted hazard ratio of 1.08 (95% CI 1.01 - 1.16) in non-obese recipients and 0.87 (95% CI 0.78 - 0.98) in obese recipients. Conclusions: Non-obese patients with UHT saw worse long-term survival, while obese patients with UHT saw slightly increased survival. These findings may warrant reevaluation of the current size criteria for obese patients awaiting a heart.
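The sketch below illustrates the general shape of such an analysis: estimate propensity scores with logistic regression, greedily match undersized to non-undersized recipients on the score (with replacement, for brevity), then compare matched Kaplan-Meier curves. The data file, column names, and confounder list are hypothetical, and the PHM columns are assumed to be precomputed.

```python
# Hypothetical sketch: propensity-score matching followed by Kaplan-Meier curves.
# Greedy 1:1 matching with replacement is used here only to keep the sketch short.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors
from lifelines import KaplanMeierFitter

df = pd.read_csv("heart_tx.csv")  # hypothetical file; donor/recipient PHM precomputed
df["undersized"] = (df["donor_phm"] <= 0.8 * df["recipient_phm"]).astype(int)

confounders = ["recipient_age", "ischemic_time", "creatinine"]  # illustrative only
ps_model = LogisticRegression(max_iter=1000).fit(df[confounders], df["undersized"])
df["pscore"] = ps_model.predict_proba(df[confounders])[:, 1]

treated = df[df["undersized"] == 1]
control = df[df["undersized"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
_, idx = nn.kneighbors(treated[["pscore"]])
matched = pd.concat([treated, control.iloc[idx.ravel()]])

kmf = KaplanMeierFitter()
for label, grp in matched.groupby("undersized"):
    kmf.fit(grp["years"], grp["died"], label=f"undersized={label}")
    kmf.plot_survival_function()  # overlaid curves for the matched groups
```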


2019. Author(s): Inger van Heijl, Valentijn A. Schweitzer, C.H. Edwin Boel, Jan Jelrik Oosterheert, Susanne M. Huijts, et al.

Background: Observational studies have demonstrated that de-escalation of antimicrobial therapy is independently associated with lower mortality. This most probably results from confounding by indication: reaching clinical stability is associated both with the decision to de-escalate and with survival, yet studies rarely adjust for this confounder. We quantified the potential confounding effect of clinical stability on the estimated impact of de-escalation on mortality in patients with community-acquired pneumonia. Methods: Data were used from the Community-Acquired Pneumonia immunization Trial in Adults (CAPiTA). The primary outcome was 30-day mortality. We performed Cox proportional-hazards regression with de-escalation as a time-dependent variable and adjusted for baseline characteristics using propensity scores. The potential impact of unmeasured confounding was quantified by simulating a variable representing clinical stability on day three, using data on its prevalence and associations with mortality from the literature. Results: Of 1,536 included patients, 257 (16.7%) were de-escalated, 123 (8.0%) were escalated, and in 1,156 (75.3%) the antibiotic spectrum remained unchanged. The adjusted hazard ratio of de-escalation for 30-day mortality (compared to patients with unchanged coverage), without adjustment for clinical stability, was 0.36 (95%CI: 0.18-0.73). If 90% to 100% of de-escalated patients were clinically stable on day three, the fully adjusted hazard ratio would be 0.53 (95%CI: 0.26-1.08) to 0.90 (95%CI: 0.42-1.91), respectively. The simulated confounder was substantially stronger than any of the baseline confounders in our dataset. Conclusions: With plausible, literature-based assumptions, clinical stability is a very strong confounder of the effects of de-escalation. Quantifying the effects of de-escalation on patient outcomes without proper adjustment for clinical stability results in strong negative bias. As a result, the safety of de-escalation remains to be determined.
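To make the "de-escalation as a time-dependent variable" step concrete, the sketch below uses lifelines' CoxTimeVaryingFitter on a long-format table in which each patient contributes one row per interval of constant exposure. The file and column names are hypothetical; the propensity score is assumed to be precomputed and included as a covariate.

```python
# Hypothetical sketch: Cox regression with de-escalation as a time-dependent
# exposure, using a long-format data frame (one row per patient-interval).
import pandas as pd
from lifelines import CoxTimeVaryingFitter

# Assumed columns: id, start, stop (days since admission), deescalated (0/1 within
# the interval), pscore (precomputed propensity score), died (1 only on the
# interval in which death occurs).
long_df = pd.read_csv("cap_long_format.csv")  # hypothetical file

ctv = CoxTimeVaryingFitter()
ctv.fit(long_df,
        id_col="id",
        start_col="start",
        stop_col="stop",
        event_col="died")  # remaining columns (deescalated, pscore) enter as covariates
ctv.print_summary()        # hazard ratio for de-escalation with 95% CI
```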


Neurosurgery, 2015, Vol 77 (6), pp. 880-887. Author(s): Eric J. Heyer, Joanna L. Mergeche, Shuang Wang, John G. Gaudet, E. Sander Connolly

BACKGROUND: Early cognitive dysfunction (eCD) is a subtle form of neurological injury observed in ∼25% of carotid endarterectomy (CEA) patients. Statin use is associated with a lower incidence of eCD in asymptomatic patients having CEA. OBJECTIVE: To determine whether eCD status is associated with worse long-term survival in patients taking and not taking statins. METHODS: This is a post hoc analysis of a prospective observational study of 585 CEA patients. Patients were evaluated with a battery of neuropsychometric tests before and after surgery. Survival was compared for patients with and without eCD, stratifying by statin use. At enrollment, 366 patients were on statins and 219 were not. Survival was assessed using Kaplan-Meier methods and multivariable Cox proportional hazards models. RESULTS: Age ≥75 years (P = .003), diabetes mellitus (P < .001), cardiac disease (P = .02), and statin use (P = .014) were each significantly associated with survival in univariate log-rank tests (P < .05). In Cox proportional hazards models relating eCD status to survival, adjusted for these univariate factors and fitted separately within the statin and non-statin groups, eCD had a significant association with survival among patients not taking statins (hazard ratio, 1.61; 95% confidence interval, 1.09–2.40; P = .018) and no significant effect among patients taking statins (hazard ratio, 0.98; 95% confidence interval, 0.59–1.66; P = .95). CONCLUSION: eCD is associated with shorter survival in patients not taking statins. This finding validates eCD as an important neurological outcome and suggests that eCD is a surrogate measure for overall health, comorbidity, and vulnerability to neurological insult.
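A compact sketch of the two analysis steps reported here, a univariate log-rank comparison by eCD status and separate multivariable Cox models within statin users and non-users, is shown below. The data frame and column names are hypothetical placeholders.

```python
# Hypothetical sketch: log-rank test by eCD status, then Cox models fitted
# separately within statin and non-statin groups. Names are placeholders.
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("cea_cohort.csv")  # hypothetical file

ecd = df[df["ecd"] == 1]
no_ecd = df[df["ecd"] == 0]
result = logrank_test(ecd["years"], no_ecd["years"],
                      event_observed_A=ecd["died"],
                      event_observed_B=no_ecd["died"])
print("log-rank p-value:", result.p_value)

cols = ["years", "died", "ecd", "age_ge_75", "diabetes", "cardiac_disease"]
for statin, grp in df.groupby("statin_use"):
    print(f"--- statin_use = {statin} ---")
    CoxPHFitter().fit(grp[cols], duration_col="years",
                      event_col="died").print_summary()
```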


2021, Vol 75 (4), pp. 311-322. Author(s): Irena Míková, Denisa Kyselová, Dana Kautznerová, Marek Tupý, Marek Kysela, et al.

Introduction: Sarcopenia (severe muscle depletion) and myosteatosis (pathological fat accumulation in muscle) are frequent muscle abnormalities in patients with cirrhosis and are associated with an unfavorable prognosis. The aim of our study was to evaluate the impact of sarcopenia and myosteatosis in liver transplant (LT) candidates at our center on the peritransplant course and on patient and graft survival. Methods: This prospective study included adult LT candidates who underwent clinical and laboratory examination. The skeletal muscle index (SMI) at the L3 level and the radiodensity of the psoas major muscle (PM-RA) were evaluated by CT. Results: Pretransplant sarcopenia was found in 49 of 103 patients (47.6%) and myosteatosis in 53 (51.5%) patients. Patients with sarcopenia had lower BMI, waist circumference, prevalence of hypertension and metabolic syndrome, and lower triglyceride and C-peptide levels than patients without sarcopenia. Patients with myosteatosis had a higher Child-Pugh score and lower HDL-cholesterol levels than patients without myosteatosis. Pretransplant SMI correlated negatively with the number of blood transfusions given during LT and with the occurrence of biliary complications. Patients with myosteatosis had a higher need for blood transfusions during and after LT and a higher number of surgical revisions. The occurrence of sarcopenia had no significant effect on patient or graft survival. Patients with myosteatosis had worse long-term survival than patients without myosteatosis; graft survival did not differ. Conclusion: Sarcopenia and myosteatosis are frequent muscle abnormalities in LT candidates, with a negative impact on the peritransplant course. Myosteatosis was associated with worse long-term survival in our study. Key words: sarcopenia – myosteatosis – liver transplantation – prevalence – complications – survival


Circulation, 2017, Vol 135 (suppl_1). Author(s): Gabriel S Tajeu, Monika M Safford, George Howard, Rikki M Tanner, Paul Muntner

Introduction: Black Americans have higher rates of cardiovascular disease (CVD) mortality compared with whites. Differences in sociodemographic, psychosocial, CVD, and other risk factors may explain this increased mortality risk. Methods: We analyzed data from 29,015 REasons for Geographic And Racial Differences in Stroke (REGARDS) study participants to determine factors that may explain the higher hazard ratios for CVD and non-CVD mortality in blacks compared with whites. Cause of death was adjudicated by trained investigators. Within age-sex sub-groups, we used Cox proportional hazards regression with progressive adjustment to estimate black:white hazard ratios. Results: Overall, 41.0% of participants were black, and 54.9% were women. Over a mean follow-up of 7.1 years (maximum 12.3 years), 5,299 participants died (1,797 CVD and 3,502 non-CVD deaths). Among participants < 65 years of age, the age- and region-adjusted black:white hazard ratios for CVD mortality were 2.28 (95% CI: 1.68-3.10) and 2.32 (95% CI: 1.80-3.00) for women and men, respectively, and for participants ≥ 65 were 1.54 (95% CI: 1.30-1.82) and 1.35 (95% CI: 1.16-1.57) for women and men, respectively (Table). The higher black:white hazard ratios for CVD mortality were no longer statistically significant after multivariable adjustment, with the largest attenuation occurring with sociodemographic and CVD risk factor adjustment. Among participants < 65 years of age, the age- and region-adjusted black:white hazard ratios for non-CVD mortality were 1.51 (95% CI: 1.24-1.85) and 1.76 (95% CI: 1.46-2.13) for women and men, respectively, and for participants ≥ 65 were 1.12 (95% CI: 1.00-1.26) and 1.34 (95% CI: 1.20-1.49) for women and men, respectively. The higher black:white hazard ratios for non-CVD mortality were attenuated after adjustment for sociodemographics. Conclusions: Black:white differences are larger for CVD than for non-CVD causes of death. The increased CVD mortality of blacks compared with whites is primarily explained by sociodemographic and CVD risk factors.
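The "progressive adjustment" step can be sketched as a loop that re-fits the Cox model while successively adding blocks of covariates and tracking the black:white hazard ratio. Everything below (file name, covariate blocks, column names) is a hypothetical simplification; the actual analysis was run within age-sex sub-groups with the REGARDS covariate sets.

```python
# Hypothetical sketch: progressive adjustment of the black:white hazard ratio
# for CVD mortality. Covariate blocks and column names are placeholders.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("cohort.csv")  # hypothetical file
df["black"] = (df["race"] == "black").astype(int)

blocks = [
    ("age + region",        ["age", "stroke_belt"]),
    ("+ sociodemographics", ["low_income", "less_than_hs"]),
    ("+ CVD risk factors",  ["sbp", "diabetes", "current_smoker"]),
]

covariates = ["black"]
for label, block in blocks:
    covariates += block
    cph = CoxPHFitter().fit(df[["years", "cvd_death"] + covariates],
                            duration_col="years", event_col="cvd_death")
    print(f"{label}: black:white HR = {cph.hazard_ratios_['black']:.2f}")
```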


Author(s): Jayeun Kim, Soong-Nang Jang, Jae-Young Lim

Background: Hip fracture is a significant public health concern for long-term care in an aging society. We aimed to investigate the risk of hip fracture among older adults, focusing on disability. Methods: This was a population-based retrospective cohort study of adults aged 65 years or over who were included in the Korean National Health Insurance Service–National Sample from 2004 to 2013 (N = 90,802). Hazard ratios with 95% confidence intervals (CIs) were calculated using the Cox proportional hazards model according to disability, adjusted for age, household income, underlying chronic diseases, and comorbidity index. Results: The incidence of hip fracture during the follow-up period was higher among older adults with brain disability (6.3%) and mental disability (7.5%) than among those with other types of disability. The risk of hip fracture was higher among those who were mildly to severely disabled (hazard ratio for severe disability = 1.59; 95% CI, 1.33–1.89; for mild disability = 1.68; 95% CI, 1.49–1.88) compared to those who were not disabled. Older men with mental disabilities experienced an incidence of hip fracture that was almost five times higher (hazard ratio, 4.98; 95% CI, 1.86–13.31) than in those who were not disabled. Conclusions: Older adults with mental or brain disabilities should be closely monitored and assessed for risk of hip fracture.

