Intracranial Stenting of Subacute Symptomatic Atherosclerotic Occlusion Versus Stenosis

Stroke, 2011, Vol. 42 (12), pp. 3470–3476.
Author(s): Peng-Hua Lü, Jee Won Park, Soonchan Park, Jong Lim Kim, Deok Hee Lee, ...

Background and Purpose— Limited data are available concerning the outcome of angioplasty/stenting for subacute atherosclerotic intracranial artery occlusion, which is often associated with progressive symptom development in salvageable brain under ischemic threat due to poor collateral blood supply.
Methods— Among 177 patients who underwent angioplasty and/or stenting for severe symptomatic intracranial steno-occlusion, 26 had subacute atherosclerotic intracranial artery occlusion. Outcome after stenting (N=22) was assessed according to procedural success (return of antegrade flow and residual stenosis <50%), adverse event (any stroke or death) rate, and restenosis (>50%), using weighted Cox proportional hazards regression in the overall cohort and in separate subgroups.
Results— Successful recanalization was achieved in 95% of patients. Three adverse events (13.6%) occurred among patients undergoing stenting for occlusion, including 2 major strokes and 1 nonprocedure-related death. A good outcome (modified Rankin Scale score ≤2) was achieved in 73%. In the overall cohort, no significant difference was observed between the occlusion and stenosis groups in the risk of adverse events (hazard ratio for the occlusion group, 1.055; 95% CI, 0.29–3.90) or the risk of restenosis (hazard ratio for the occlusion group, 1.2; 95% CI, 0.19–7.72). A trend toward a higher rate of adverse events was observed with older age (>65 years), progressive worsening, balloon-expandable stents, and absence of a preprocedural P2Y12 assay.
Conclusions— In patients undergoing angioplasty/stenting for subacute atherosclerotic intracranial artery occlusion, adverse event rates did not differ significantly from those seen with stenting for stenosis. However, several factors, including age, tended to be associated with a higher event rate.
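The weighted Cox regression used here maps naturally onto Python's lifelines package. Below is a minimal sketch, assuming a hypothetical data frame and column names (time_to_event, adverse_event, occlusion, case_weight) rather than the study's actual variables.

```python
# Minimal sketch of a weighted Cox proportional hazards fit (lifelines).
# All column names are hypothetical placeholders, not the study's data.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("cohort.csv")  # time_to_event, adverse_event, occlusion, case_weight

cph = CoxPHFitter()
cph.fit(
    df,
    duration_col="time_to_event",  # follow-up time
    event_col="adverse_event",     # 1 = any stroke or death, 0 = censored
    weights_col="case_weight",     # case weights for the weighted regression
    robust=True,                   # sandwich standard errors, advisable with weights
)
cph.print_summary()  # exp(coef) for 'occlusion' is the hazard ratio vs stenosis
```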

2019, Vol. 14 (7), pp. 994–1001.
Author(s): Eli Farhy, Clarissa Jonas Diamantidis, Rebecca M. Doerfler, Wanda J. Fink, Min Zhan, ...

Background and objectives: Poor disease recognition may jeopardize the safety of CKD care. We examined safety events and outcomes in patients with CKD piloting a medical-alert accessory intended to improve disease recognition, and in an observational subcohort from the same population.
Design, setting, participants, & measurements: We recruited 350 patients with stage 2–5 predialysis CKD. The first (pilot) 108 participants were given a medical-alert accessory (bracelet or necklace) indicating the diagnosis of CKD and displaying a website with safe CKD practices. The subsequent (observation) subcohort (n=242) received usual care. All participants underwent annual visits with ascertainment of patient-reported events (class 1) and actionable safety findings (class 2). Secondary outcomes included 50% GFR reduction, ESKD, and death. Cox proportional hazards models assessed the association of the medical-alert accessory with outcomes.
Results: Median follow-up of the pilot and observation subcohorts was 52 (interquartile range, 44–63) and 37 (interquartile range, 27–47) months, respectively. The frequency of class 1 and class 2 safety events reported at annual visits did not differ between the pilot and observation groups, at 108.7 versus 100.6 events per 100 patient-visits (P=0.13) and 38.3 versus 41.2 events per 100 patient-visits (P=0.23), respectively. The medical-alert accessory was associated with lower crude and adjusted rates of ESKD versus the observation group (hazard ratio, 0.42; 95% confidence interval, 0.20 to 0.89; and hazard ratio, 0.38; 95% confidence interval, 0.16 to 0.94, respectively). The association of the medical-alert accessory with the composite endpoint of ESKD or 50% reduction in GFR varied over time but suggested an early benefit (up to 23 months) with its use. There was no significant difference in the incidence of hospitalization, death, or a composite of all outcomes between medical-alert accessory users and the observation group.
Conclusions: The medical-alert accessory was not associated with the incidence of safety events but was associated with a lower rate of ESKD relative to usual care.
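An association that is "variable over time," as reported for the composite endpoint, is a violation of the proportional hazards assumption, which lifelines can test directly via scaled Schoenfeld residuals. A minimal sketch under hypothetical column names:

```python
# Sketch: checking the proportional hazards assumption, whose violation is
# what a hazard ratio that changes over follow-up implies.
# Column names are hypothetical placeholders.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("ckd_cohort.csv")  # months, composite_event, medical_alert, age, baseline_gfr

cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="composite_event")

# Scaled Schoenfeld residual tests: a small p-value for 'medical_alert'
# signals that its effect varies over time (e.g., an early benefit only).
cph.check_assumptions(df, p_value_threshold=0.05)
```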


2016, Vol. 126 (6), pp. 1756–1763.
Author(s): Michael A. Garcia, Ann Lazar, Sai Duriseti, David R. Raleigh, Christopher P. Hess, ...

OBJECTIVE High-resolution, double-dose gadolinium-enhanced Gamma Knife (GK) radiosurgery-planning MRI (GK MRI) on the day of GK treatment can detect additional brain metastases undiagnosed on the prior diagnostic MRI scan (dMRI), revealing increased intracranial disease burden on the day of radiosurgery and potentially necessitating reevaluation of appropriate management. The authors identified factors associated with detecting additional metastases on GK MRI and investigated the relationship between detection of additional metastases and postradiosurgery patient outcomes.
METHODS The authors identified 326 patients who received GK radiosurgery at their institution from 2010 through 2013 and had a prior dMRI available for comparison of numbers of brain metastases. Factors predictive of additional brain metastases on GK MRI were investigated using logistic regression analysis. Overall survival was estimated by the Kaplan-Meier method, and postradiosurgery distant intracranial failure was estimated by cumulative incidence measures. A multivariable Cox proportional hazards model and Fine-Gray regression modeling assessed potential risk factors for overall survival and distant intracranial failure, respectively.
RESULTS The mean (SD) numbers of brain metastases on dMRI and GK MRI were 3.4 (4.2) and 5.8 (7.7), respectively, and additional brain metastases were found on GK MRI in 48.9% of patients. The frequencies of detecting additional metastases for patients with 1, 2, 3–4, and more than 4 brain metastases on dMRI were 29.5%, 47.9%, 55.9%, and 79.4%, respectively (p < 0.001). An index brain metastasis with a diameter greater than 1 cm on dMRI was inversely associated with detecting additional brain metastases, with an adjusted odds ratio of 0.57 (95% CI 0.4–0.9, p = 0.02). The median time between dMRI and GK MRI was 22 days (range 1–88 days), and time between scans was not associated with detecting additional metastases. Patients with additional brain metastases did not have larger total radiosurgery target volumes, and they rarely had an immediate change in management (abortion of radiosurgery or addition of whole-brain radiation therapy) due to detection of additional metastases. Patients with additional metastases had a higher incidence of distant intracranial failure than those without (p = 0.004), with an adjusted subdistribution hazard ratio of 1.4 (95% CI 1.0–2.0, p = 0.04). Significantly worse overall survival was not detected for patients with additional brain metastases on GK MRI (log-rank p = 0.07; adjusted hazard ratio 1.07, 95% CI 0.81–1.41, p = 0.65).
CONCLUSIONS Detecting additional brain metastases on GK MRI is strongly associated with the number of brain metastases on dMRI and inversely associated with the size of the index brain metastasis. The discovery of additional brain metastases at the time of GK radiosurgery is very unlikely to lead to aborting radiosurgery but is associated with a higher incidence of distant intracranial failure. However, there was no significant difference in survival.
CLASSIFICATION OF EVIDENCE Type of question: prognostic; study design: retrospective cohort study; evidence: Class IV.
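Distant intracranial failure above is estimated with cumulative incidence measures, which treat death as a competing risk rather than censoring it. Below is a minimal sketch using lifelines' Aalen-Johansen estimator; the Fine-Gray subdistribution regression itself is more commonly fit in R (cmprsk), and the column names here are hypothetical.

```python
# Sketch: cumulative incidence of distant intracranial failure with death
# as a competing risk (Aalen-Johansen estimator). Column names hypothetical.
import pandas as pd
from lifelines import AalenJohansenFitter

df = pd.read_csv("gk_cohort.csv")
# event coding: 0 = censored, 1 = distant intracranial failure, 2 = death

ajf = AalenJohansenFitter()
ajf.fit(
    durations=df["months"],
    event_observed=df["event"],
    event_of_interest=1,  # failure; death (code 2) is a competing event
)
print(ajf.cumulative_density_)  # cumulative incidence curve over time
```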


2021, Vol. 7 (1).
Author(s): Shingo Kazama, Ryota Morimoto, Yuki Kimura, Naoki Shibata, Reina Ozaki, ...

Background: The emergence of immune checkpoint inhibitors (ICIs) has brought about a paradigm shift in cancer treatment, as these drugs are used more frequently and for longer durations. As a result of T-cell-mediated inflammation at the programmed cell death-1, programmed death-ligand-1, and cytotoxic T-lymphocyte antigen-4 pathways, immune-related adverse events (irAEs) occur in various organs and can cause rare but potentially serious cardiotoxicity. Although irAEs are associated with the efficacy of ICI therapy and better prognosis, there is limited information about the correlation between irAEs and cardiotoxicity and about whether the benefits of irAEs extend to patients with underlying cardiovascular disease. This study aimed to investigate the association between irAEs and treatment efficacy in patients undergoing ICI therapy with and without a cardiovascular history.
Methods: We performed a retrospective review of the medical records of 409 consecutive patients who received ICI therapy from September 2014 to October 2019.
Results: Median patient age was 69 years (29.6% were female). The median follow-up period was 278 days. In total, 69 (16.9%) patients had a history of any cardiovascular disease and 14 (3.4%) experienced cardiovascular irAEs after ICI administration. The rate of cardiovascular irAEs was higher in patients with prior non-cardiovascular irAEs than in those without. The prognosis of patients with irAEs (+) was significantly better than that of patients without irAEs (P < 0.001), and this tendency did not depend on the presence or absence of a cardiovascular history. Furthermore, Cox proportional hazards analysis revealed that the occurrence of irAEs was independently associated with lower mortality.
Conclusions: Although cardiovascular irAEs may be related to prior non-cardiovascular irAEs under ICI therapy, the occurrence of irAEs had a favorable prognostic impact, and this tendency was not affected by cardiovascular history.
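The survival comparison between irAE (+) and irAE (−) patients is a standard Kaplan-Meier / log-rank analysis; a minimal sketch under hypothetical column names:

```python
# Sketch: Kaplan-Meier comparison of patients with vs without irAEs.
# Column names are hypothetical placeholders.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("ici_cohort.csv")  # days, died, irae (1 = any irAE)
pos, neg = df[df["irae"] == 1], df[df["irae"] == 0]

kmf = KaplanMeierFitter()
kmf.fit(pos["days"], event_observed=pos["died"], label="irAE (+)")
print(kmf.median_survival_time_)  # median overall survival, irAE (+) group

result = logrank_test(
    pos["days"], neg["days"],
    event_observed_A=pos["died"],
    event_observed_B=neg["died"],
)
print(result.p_value)  # analogous to the abstract's P < 0.001 comparison
```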


2021, Article 000486742110096.
Author(s): Oleguer Plana-Ripoll, Patsy Di Prinzio, John J McGrath, Preben B Mortensen, Vera A Morgan

Introduction: An association between schizophrenia and urbanicity has long been observed, with studies in many countries, including several from Denmark, reporting that individuals born or raised in densely populated urban settings have an increased risk of developing schizophrenia compared to those born or raised in rural settings. However, these findings have not been replicated in all studies. In particular, a Western Australian study showed a gradient in the opposite direction, which disappeared after adjustment for covariates. Given the different findings for Denmark and Western Australia, our aim was to investigate the relationship between schizophrenia and urbanicity in these two regions to determine which factors may be influencing the relationship.
Methods: We used population-based cohorts of children born alive between 1980 and 2001 in Western Australia (N = 428,784) and Denmark (N = 1,357,874). Children were categorised according to the level of urbanicity of their mother's residence at time of birth and followed up through 30 June 2015. Linkage to state-based registers provided information on schizophrenia diagnosis and a range of covariates. Rates of being diagnosed with schizophrenia for each category of urbanicity were estimated using Cox proportional hazards models adjusted for covariates.
Results: During follow-up, 1618 (0.4%) children in Western Australia and 11,875 (0.9%) children in Denmark were diagnosed with schizophrenia. In Western Australia, those born in the most remote areas did not experience lower rates of schizophrenia than those born in the most urban areas (hazard ratio = 1.02 [95% confidence interval: 0.81, 1.29]), unlike their Danish counterparts (hazard ratio = 0.62 [95% confidence interval: 0.58, 0.66]). However, when the Western Australian cohort was restricted to children of non-Aboriginal status, results were consistent with the Danish findings (hazard ratio = 0.46 [95% confidence interval: 0.29, 0.72]).
Discussion: Our study highlights the potential for disadvantaged subgroups to mask the contribution of urban-related risk factors to the risk of schizophrenia, and the importance of stratified analysis in such cases.
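A covariate-adjusted Cox fit over urbanicity categories, with the subgroup issue handled by stratification rather than restriction, might look like the following sketch (lifelines; all column names are hypothetical):

```python
# Sketch: Cox model with dummy-coded urbanicity categories, stratified on a
# subgroup indicator so each stratum keeps its own baseline hazard.
# Column names are hypothetical placeholders.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("birth_cohort.csv")
# dummy-code urbanicity, dropping the most-urban category as the reference
df = pd.get_dummies(df, columns=["urbanicity"], drop_first=True, dtype=int)

cph = CoxPHFitter()
cph.fit(
    df,
    duration_col="years_followed",
    event_col="schizophrenia_dx",
    strata=["aboriginal_status"],  # stratified analysis, as the Discussion urges
)
cph.print_summary()  # exp(coef) per urbanicity category vs the reference
```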


Author(s): Claudius E. Degro, Richard Strozynski, Florian N. Loch, Christian Schineis, Fiona Speichinger, ...

Purpose: Over recent decades, colorectal cancer has shown a remarkable shift, with an increasing proportion of right- compared with left-sided tumor locations. In the current study, we aimed to identify clinicopathological differences between right- and left-sided colon cancer (rCC and lCC) with respect to mortality and outcome predictors.
Methods: In total, 417 patients with stage I–IV colon cancer were analyzed in the present retrospective single-center study. Survival rates were assessed using the Kaplan-Meier method, and univariate and multivariate analyses were performed with a Cox proportional hazards regression model.
Results: Our study showed no significant difference in overall survival between rCC and lCC for stages I–IV (p = 0.354). In the rCC cohort, multivariate analysis identified the worst outcomes for ASA (American Society of Anesthesiologists) score IV (hazard ratio [HR]: 16.0; 95% CI: 2.1–123.5), CEA (carcinoembryonic antigen) blood level > 100 µg/l (HR: 3.3; 95% CI: 1.2–9.0), increased lymph node ratio of 0.6–1.0 (HR: 5.3; 95% CI: 1.7–16.1), and grade 4 tumors (G4) (HR: 120.6; 95% CI: 6.7–2179.6), whereas in the lCC population, ASA score IV (HR: 8.9; 95% CI: 0.9–91.9), CEA blood level 20.1–100 µg/l (HR: 5.4; 95% CI: 2.4–12.4), conversion to laparotomy (HR: 14.1; 95% CI: 4.0–49.0), and severe surgical complications (Clavien-Dindo III–IV) (HR: 2.9; 95% CI: 1.5–5.5) were identified as predictors of diminished overall survival.
Conclusion: Laterality had no significant effect on the overall prognosis of colon cancer patients. However, group differences and distinct survival predictors could be identified in rCC and lCC patients.
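Side-specific predictor models like these are typically fit as separate Cox regressions per subgroup, with hazard ratios and 95% CIs read off each fitted model. A minimal lifelines sketch with hypothetical column names:

```python
# Sketch: separate Cox models for right- vs left-sided tumors, extracting
# hazard ratios with 95% CIs. Column names are hypothetical placeholders.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("colon_cohort.csv")
covariates = ["asa_iv", "cea_over_100", "lnr_0_6_to_1", "grade4"]

for right_sided, subset in df.groupby("right_sided"):
    cph = CoxPHFitter()
    cph.fit(
        subset[["months", "died"] + covariates],
        duration_col="months",
        event_col="died",
    )
    # 'exp(coef)' is the HR; the adjacent columns bound its 95% CI
    print("rCC" if right_sided else "lCC")
    print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])
```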


2021, pp. 1–9.
Author(s): Leonard Naymagon, Douglas Tremblay, John Mascarenhas

Data supporting the use of etoposide-based therapy in hemophagocytic lymphohistiocytosis (HLH) arise largely from pediatric studies. There is a lack of comparable data among adult patients with secondary HLH. We conducted a retrospective study to assess the impact of etoposide-based therapy on outcomes in adult secondary HLH. The primary outcome was overall survival. The log-rank test was used to compare Kaplan-Meier distributions of time-to-event outcomes. Multivariable Cox proportional hazards modeling was used to estimate adjusted hazard ratios (HRs) with 95% confidence intervals (CIs). Ninety adults with secondary HLH seen between January 1, 2009, and January 6, 2020, were included. Forty-two patients (47%) received etoposide-based therapy, while 48 (53%) received treatment only for their inciting proinflammatory condition. Thirty-three patients in the etoposide group (72%) and 32 in the no-etoposide group (67%) died during follow-up. Median survival in the etoposide and no-etoposide groups was 1.04 and 1.39 months, respectively. There was no significant difference in survival between the etoposide and no-etoposide groups (log-rank p = 0.4146). On multivariable analysis, there was no association between treatment with etoposide and survival (HR for death with etoposide = 1.067, 95% CI: 0.633–1.799, p = 0.8084). Use of etoposide-based therapy was not associated with improvement in outcomes in this large cohort of adult secondary HLH patients.


2020, Vol. 41 (Supplement_2).
Author(s): A Funada, Y Goto, T Maeda, H Okada, M Takamura

Background/Introduction: An initial shockable rhythm after cardiac arrest is more likely when bystander cardiopulmonary resuscitation (CPR) is initiated early, owing to increased coronary perfusion. However, the relationship between bystander CPR and initial shockable rhythm in patients with out-of-hospital cardiac arrest (OHCA) remains unclear. We hypothesized that chest-compression-only CPR (CC-CPR) before emergency medical service (EMS) arrival has an effect on the likelihood of initial shockable rhythm equivalent to that of standard CPR (chest compression plus rescue breathing [S-CPR]).
Purpose: We aimed to examine the rate of initial shockable rhythm and 1-month outcomes in patients who received bystander CPR after OHCA.
Methods: The study included 59,688 patients (age ≥18 years) who received bystander CPR after an OHCA of presumed cardiac origin witnessed by a layperson, drawn from a prospectively recorded Japanese nationwide Utstein-style database covering 2013 to 2017. Patients who received public-access defibrillation before arrival of EMS personnel were excluded. The patients were divided into CC-CPR (n=51,520) and S-CPR (n=8168) groups according to the type of bystander CPR received. The primary end point was initial shockable rhythm recorded by EMS personnel just after arrival at the site. The secondary end point was the 1-month outcomes (survival and neurologically intact survival) after OHCA. In the statistical analyses, a Cox proportional hazards model was applied to reflect the different bystander CPR durations, before and after propensity score (PS) matching.
Results: The crude rate of initial shockable rhythm in the CC-CPR group (21.3%, 10,946/51,520) was significantly higher than in the S-CPR group (17.6%, 1441/8168; p<0.0001) before PS matching. However, no significant difference in the rate of initial shockable rhythm was found between the 2 groups after PS matching (18.3% [1493/8168] vs 17.6% [1441/8168], p=0.30). In the Cox proportional hazards model, CC-CPR was negatively associated with initial shockable rhythm before PS matching (unadjusted hazard ratio [HR], 0.97; 95% confidence interval [CI], 0.94–0.99; p=0.012; adjusted HR, 0.92; 95% CI, 0.89–0.94; p<0.0001) compared with S-CPR. After PS matching, however, no significant difference was found between the 2 groups (adjusted HR of CC-CPR compared with S-CPR, 0.97; 95% CI, 0.94–1.00; p=0.09). No significant differences were found between CC-CPR and S-CPR in the 1-month outcomes after PS matching: survival, 8.5% vs 10.1% (adjusted odds ratio, 0.89; 95% CI, 0.79–1.00; p=0.07); cerebral performance category 1 or 2, 5.5% vs 6.9% (adjusted odds ratio, 0.86; 95% CI, 0.74–1.00; p=0.052).
Conclusions: Compared with S-CPR, CC-CPR before EMS arrival had an equivalent multivariable-adjusted association with the likelihood of initial shockable rhythm in patients with layperson-witnessed OHCA of presumed cardiac cause.
Funding Acknowledgement: Type of funding source: None
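Propensity-score matching of the kind described here pairs each S-CPR patient with a CC-CPR patient of similar covariate profile. A minimal sketch with scikit-learn, using 1:1 nearest-neighbour matching with replacement for brevity; the covariates and column names are hypothetical placeholders, not the study's variables:

```python
# Sketch: 1:1 propensity-score matching (with replacement) of S-CPR patients
# to CC-CPR patients, then comparing initial shockable rhythm rates.
# Covariates and column names are hypothetical placeholders.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

df = pd.read_csv("ohca_cohort.csv")
covariates = ["age", "male", "cpr_duration_min", "ems_response_min"]

# 1) Propensity score: P(receiving CC-CPR | covariates)
ps_model = LogisticRegression(max_iter=1000).fit(df[covariates], df["cc_cpr"])
df["ps"] = ps_model.predict_proba(df[covariates])[:, 1]

# 2) For each S-CPR patient, find the CC-CPR patient with the closest score
cc = df[df["cc_cpr"] == 1]
s = df[df["cc_cpr"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(cc[["ps"]])
_, idx = nn.kneighbors(s[["ps"]])
matched = pd.concat([s, cc.iloc[idx.ravel()]])

# 3) Compare the primary end point in the matched sample
print(matched.groupby("cc_cpr")["shockable"].mean())
```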


2021, Vol. 44 (4), pp. 145–152.
Author(s): Hualei Guo, Hao Chen, Wenhui Wang, Lingna Chen

Objective: The aim of this study was to investigate the clinicopathological prognostic factors of malignant ovarian germ cell tumors (MOGCT) and to evaluate survival trends of MOGCT by histotype.
Methods: We extracted data on 1,963 MOGCT cases diagnosed between 2000 and 2014 from the Surveillance, Epidemiology, and End Results (SEER) database. The histological classification of MOGCT comprised 5 categories: dysgerminoma, embryonal carcinoma (EC), yolk sac tumor, malignant teratoma, and mixed germ cell tumor. We examined overall and disease-specific survival of the 5 histological types. Kaplan-Meier and Cox proportional hazards regression models were used to estimate survival curves and prognostic factors. We also estimated survival curves of MOGCT according to different treatments.
Results: There was a significant difference in prognosis among the histological classifications. Age, histotype, grade, SEER stage, and surgery were independent prognostic factors for survival of patients with MOGCT. For all histotypes, 1-, 3-, and 5-year survival rate estimates were >85%, except for EC, which had the worst outcomes at 1 year (55.6%), 3 years (44.4%), and 5 years (33.3%). In the distant SEER stage, chemotherapy combined with surgery was associated with improved survival outcomes compared with surgery-only and chemotherapy-only groups.
Conclusions: Dysgerminoma patients had the most favorable outcomes, whereas EC patients had the worst survival. Young age, low grade, and surgery were all significant predictors of improved survival. In contrast, a distant SEER stage was a risk factor for poor survival. Chemotherapy combined with surgery contributed to longer survival times in patients with MOGCT in the distant SEER stage.
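The histotype-specific 1-, 3-, and 5-year rates are point estimates read off Kaplan-Meier curves fit per group. A minimal lifelines sketch with hypothetical column names:

```python
# Sketch: Kaplan-Meier survival at 1, 3, and 5 years for each histotype.
# Column names are hypothetical placeholders.
import pandas as pd
from lifelines import KaplanMeierFitter

df = pd.read_csv("seer_mogct.csv")  # months, died, histotype

for histotype, subset in df.groupby("histotype"):
    kmf = KaplanMeierFitter()
    kmf.fit(subset["months"], event_observed=subset["died"], label=histotype)
    # survival probabilities at 12, 36, and 60 months
    print(histotype, kmf.survival_function_at_times([12, 36, 60]).round(3).values)
```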


Author(s): Katherine R Sabourin, Ibrahim Daud, Sidney Ogolla, Nazzarena Labo, Wendell Miley, ...

Background: We aimed to determine whether Plasmodium falciparum (Pf) infection affects the age of Kaposi sarcoma-associated herpesvirus (KSHV) seroconversion in Kenyan children.
Methods: Kenyan children (n=144) enrolled at age one month from two sites with different levels of malaria transmission (stable/high vs. unstable/low transmission) were followed through 24 months. Plasma was tested for KSHV antibodies using an enzyme-linked immunosorbent assay (ELISA) (K8.1 and LANA) and a multiplex bead-based assay (K8.1, K10.5, ORF38, ORF50, and LANA), and whole blood was tested for Pf DNA using quantitative PCR. Cox proportional hazards models were used to assess associations of Pf DNA detection, malaria annualized rate (Pf detections/person-years), and enrollment site (malaria-high vs malaria-low) with time to KSHV seroconversion.
Results: KSHV seroprevalence was 63% by 2 years of age when assessed by the multiplex assay. Children with Pf had an increased hazard of earlier KSHV seroconversion, and among children with malaria, the hazard of becoming KSHV seropositive increased significantly with increasing malaria annualized rate. Children from the malaria-high transmission region had no significant difference in the hazard of KSHV seroconversion at 12 months but were more likely to become KSHV seropositive by 24 months of age.
Discussion: Malaria exposure increases the risk of KSHV seroconversion early in life.
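The dose-response finding (hazard rising with malaria annualized rate) corresponds to a Cox model with the rate entering as a continuous covariate, whose hazard ratio is interpreted per unit increase in detections per person-year. A minimal sketch with hypothetical column names:

```python
# Sketch: Cox model with a continuous exposure (malaria annualized rate);
# exp(coef) is the hazard ratio per one-unit increase in Pf detections
# per person-year. Column names are hypothetical placeholders.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("kshv_cohort.csv")  # months_to_seroconv, seroconverted, malaria_rate, site_high

cph = CoxPHFitter()
cph.fit(df, duration_col="months_to_seroconv", event_col="seroconverted")
print(cph.summary.loc["malaria_rate", ["exp(coef)", "p"]])
```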

