Changes in Spinal Instability After Combined Therapy of Conventional Radiotherapy and Bone Modifying Agents for Painful Vertebral Bone Metastases

Author(s):  
Eiji Nakata ◽  
Shinsuke Sugihara ◽  
Yoshifumi Sugawara ◽  
Ryuichi Nakahara ◽  
Shouta Takihira ◽  
...  

Abstract Precise assessment of spinal instability is critical at the beginning of and after radiotherapy for selecting treatment and evaluating the effectiveness of radiotherapy. We investigated changes in spinal instability after radiotherapy and examined potential risk factors for differences in spinal instability outcomes in painful spinal metastases. We evaluated 81 patients who received radiotherapy for painful vertebral metastases at our institution between 2012 and 2016. Pain at the affected vertebrae was assessed. Radiological responses of irradiated vertebrae were assessed by computed tomography. Spinal instability was assessed with the Spinal Instability Neoplastic Score (SINS). Follow-up assessments were done at the start of radiotherapy and at 1, 2, 3, 4, and 6 months after radiotherapy. At 1, 2, 3, 4, and 6 months, pain had disappeared in 62%, 84%, 93%, 98%, and 100% of patients, respectively. The median SINS was 8, 7, 6, 5, 5, and 4 at the start of radiotherapy and at 1, 2, 3, 4, and 6 months, respectively, decreasing significantly over time (P < 0.001). Multivariate analysis revealed that PLISE was the only risk factor for spinal instability at one month. In conclusion, spinal instability improved significantly over time after radiotherapy. Clinicians should pay attention to PLISE in the radiotherapy of vertebral metastases.

2020 ◽  
Vol 30 (Supplement_5) ◽  
Author(s):  
T M Mikkola ◽  
H Kautiainen ◽  
M Mänty ◽  
M B von Bonsdorff ◽  
T Kröger ◽  
...  

Abstract Purpose Mortality appears to be lower in family caregivers than in the general population. However, it is not known whether the difference in mortality between family caregivers and the general population depends on age. The purpose of this study was to analyze all-cause mortality in relation to age in family caregivers and to study their cause-specific mortality using data from multiple Finnish national registers. Methods The data included all individuals who received the family caregiver's allowance in Finland in 2012 (n = 42 256, mean age 67 years, 71% women) and a control population matched for age, sex, and municipality of residence (n = 83 618). Information on dates and causes of death between 2012 and 2017 was obtained from the Finnish Causes of Death Register. Flexible parametric survival modeling and competing risk regression adjusted for socioeconomic status were used. Results The total follow-up time was 717 877 person-years. Family caregivers had lower all-cause mortality than the controls over the follow-up (8.1% vs. 11.6%), both among women (hazard ratio [HR]: 0.64, 95% CI: 0.61-0.68) and men (HR: 0.73, 95% CI: 0.70-0.77). Younger adult caregivers had equal or only slightly lower mortality than their controls, but after age 60 the difference increased markedly, resulting in over 10% lower mortality in favor of the caregivers in the oldest age groups. Caregivers had lower mortality than the controls for all the causes of death studied, namely cardiovascular, cancer, neurological, external, respiratory, gastrointestinal, and dementia. Of these, the risk of dementia was lowest (subhazard ratio = 0.29, 95% CI: 0.25-0.34). Conclusions Older family caregivers have lower mortality than age-matched controls from the general population, while younger caregivers have mortality similar to their peers. This age-dependent advantage in mortality likely reflects selection of healthier individuals into the family caregiver role.
Key messages The difference in mortality between family caregivers and the age-matched general population varies considerably with age. Advantage in mortality observed in family caregiver studies is likely to reflect the selection of healthier individuals into the caregiver role, which underestimates the adverse effects of caregiving.


Author(s):  
Koen B Pouwels ◽  
Thomas House ◽  
Julie V Robotham ◽  
Paul Birrell ◽  
Andrew B Gelman ◽  
...  

Objective: To estimate the percentage of individuals infected with severe acute respiratory syndrome coronavirus-2 (SARS-CoV-2) over time in the community in England and to quantify risk factors. Design: Repeated cross-sectional surveys of population-representative households with longitudinal follow-up if consent was given. Setting: England. Participants: 34,992 individuals aged 2 years and over from 16,722 private residential households. Data were collected in a pilot phase of the survey between 26 April and 28 June 2020. Main outcome measures: Percentage of individuals in the community testing positive for SARS-CoV-2 RNA using throat and nose swabs. Individuals were asked about any symptoms and potential risk factors. Results: The percentage of people in private residential households testing positive for SARS-CoV-2 fell from 0.32% (95% credible interval (CrI) 0.19% to 0.52%) on 26 April to 0.08% (95% CrI 0.05% to 0.12%) on 28 June, although the prevalence stabilised near the end of the pilot. Factors associated with an increased risk of testing positive included having a job with direct patient contact (relative exposure (RE) 4.06, 95% CrI 2.42 to 6.77), working outside the home (RE 2.49, 95% CrI 1.39 to 4.45), and having had contact with a hospital (RE 2.20, 95% CrI 1.09 to 4.16 for having been to a hospital individually and RE 1.95, 95% CrI 0.81 to 4.09 for a household member having been to a hospital). In 133 visits where individuals tested positive, 82 (61%, 95% CrI 53% to 69%) reported no symptoms, stably over time. Conclusion: The percentage of SARS-CoV-2-positive individuals declined between 26 April and 28 June 2020. Positive tests commonly occurred without symptoms being reported. Working outside the home was an important risk factor, indicating that continued monitoring for SARS-CoV-2 in the community will be essential for early detection of increases in infections following return to work and other relaxations of control measures.


2016 ◽  
Vol 2016 ◽  
pp. 1-10 ◽  
Author(s):  
Hui-Juan Xia ◽  
Wei-Jun Wang ◽  
Feng’E Chen ◽  
Ying Wu ◽  
Zhen-Yuan Cai ◽  
...  

Objective. To observe the fellow eye in patients undergoing surgery on one eye for myopic traction maculopathy (MTM). Methods. 99 fellow eyes of consecutive patients who underwent unilateral surgery to treat MTM were retrospectively evaluated. All patients underwent thorough ophthalmologic examination; recorded data included age, gender, duration of follow-up, refraction, axial length, intraocular pressure, lens status, presence/absence of a staphyloma, and best-corrected visual acuity (BCVA). Fundus photographs and SD-OCT images were obtained. When feasible, MP-1 microperimetry was performed to evaluate macular sensitivity and fixation stability. Results. At an average follow-up of 24.7 months, 7% of fellow eyes exhibited partial or complete MTM resolution, 68% remained stable, and 25% exhibited progression of MTM. Of the 38 eyes with “normal” macular structure on initial examination, 11% exhibited disease progression. The difference in progression rates among Groups 2, 3, and 4 was statistically significant. Refraction, axial length, the frequency of a posterior staphyloma, chorioretinal atrophy, initial BCVA, final BCVA, and retinal sensitivity all differed significantly among Groups 1–4. Conclusions. Long axial length, chorioretinal atrophy, a posterior staphyloma, and anterior traction contribute to MTM development. Patients with high myopia and unilateral MTM require regular OCT monitoring of the fellow eye to assess progression to myopic pre-MTM. For cases exhibiting one or more potential risk factors, early surgical intervention may maximize visual outcomes.


2019 ◽  
Vol 41 (21) ◽  
pp. 1976-1986 ◽  
Author(s):  
Sérgio Barra ◽  
Rui Providência ◽  
Kumar Narayanan ◽  
Serge Boveda ◽  
Rudolf Duehmke ◽  
...  

Abstract Aims While data from randomized trials suggest a declining incidence of sudden cardiac death (SCD) among heart failure patients, the extent to which such a trend is present among patients with cardiac resynchronization therapy (CRT) has not been evaluated. We therefore assessed changes in SCD incidence, and associated factors, in CRT recipients over the last 20 years. Methods and results Literature search from inception to 30 April 2018 for observational and randomized studies involving CRT patients, with or without defibrillator, providing specific cause-of-death data. Sudden cardiac death was the primary endpoint. For each study, the rate of SCD per 1000 patient-years of follow-up was calculated. Trend line graphs were subsequently constructed to assess change in SCD rates over time, which were further analysed by device type, patient characteristics, and medical therapy. Fifty-three studies, comprising 22 351 patients with 60 879 patient-years of follow-up and a total of 585 SCDs, were included. There was a gradual decrease in SCD rates since the early 2000s in both randomized and observational studies, with rates falling more than four-fold. The rate of decline in SCD was steeper than that of all-cause mortality, and accordingly, the proportion of deaths due to SCD declined over the years. The magnitude of the absolute decline in SCD was more prominent among CRT-pacemaker (CRT-P) patients compared to those receiving CRT-defibrillator (CRT-D), with the difference in SCD rates between CRT-P and CRT-D decreasing considerably over time. There was a progressive increase in age, use of beta-blockers, and left ventricular ejection fraction, and conversely, a decrease in QRS duration and antiarrhythmic drug use. Conclusion Sudden cardiac death rates have progressively declined in the CRT heart failure population over time, with the difference between CRT-D and CRT-P recipients narrowing considerably.
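The per-study rate described above (SCD events per 1000 patient-years of follow-up) is a simple normalization; a minimal sketch follows, where the function name is illustrative and the example reuses only the pooled totals reported in the abstract.

```python
def scd_rate_per_1000_py(events: int, patient_years: float) -> float:
    """Event rate per 1000 patient-years of follow-up."""
    if patient_years <= 0:
        raise ValueError("patient_years must be positive")
    return 1000.0 * events / patient_years

# Pooled totals from the abstract: 585 SCDs over 60 879 patient-years.
pooled_rate = scd_rate_per_1000_py(585, 60879)
print(round(pooled_rate, 1))  # 9.6 events per 1000 patient-years
```

The same function applied per study yields the values used to build the trend lines.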


Stroke ◽  
2017 ◽  
Vol 48 (suppl_1) ◽  
Author(s):  
George Harston ◽  
James Kennedy

Introduction: The Acute Stroke Imaging Roadmap III identifies structural distortion due to vasogenic edema and hemorrhage as a research priority for defining final infarction. Non-linear registration (NLR) of a follow-up scan to an undistorted presenting scan could correct for distortions due to edema, hemorrhage, or atrophy, achieving this goal. In addition, the difference between the volume of infarction following NLR and the volume following rigid-body registration (RBR) reflects the degree of anatomical distortion. In this study we evaluate this technique to correct for subacute edema at different timepoints and generate a metric to quantify brain swelling at these times. We determine whether early edema at 24 hours predicts edema at 1 week. Methods: Patients with non-lacunar ischemic stroke were recruited into an MRI study. Patients had structural T1-weighted, T2-weighted FLAIR, and diffusion-weighted imaging (DWI, b=1000/0) at presentation, 24 hours, 1 week, and 1 month. Infarction was defined manually at 24 hours using DWI, and at 1 week and 1 month using FLAIR images, by two raters. To quantify edema, both NLR warps and RBR matrices were generated between the T1 images at each timepoint and the presenting T1 scan. Infarct masks were transformed to presenting image space using RBR and NLR, and the relative difference in volumes was used to quantify the Edema Metric (EM). Results: 34 patients were recruited into the study. NLR corrected for distortions due to edema and hemorrhagic transformation at the 24-hour and 1-week timepoints. The EM was 17.7% (p=0.009) at 24 hours, 26.5% (p=0.02) at 1 week, and 7.1% (p=0.05) at 1 month for the manually defined infarct masks. EM at 24 hours predicted edema at 1 week (r² = 37%, p=0.009), but not at 1 month (r² = 3%, p=0.6). Conclusions: NLR provides an opportunity to correct for edema at subacute timepoints and, by comparing infarct volumes to those following RBR, provides a measure of edema.
The EM quantifies the contribution of edema at 24hrs and 1wk, and potentially allows the selection of patients at 24hrs who are likely to develop significant swelling at 1 week. The EM may also be useful in stroke trials to quantify the effect sizes of treatments aimed at minimizing edema in stroke.
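The abstract defines the EM only as "the relative difference in volumes" between the two registrations, so the exact formulation below is an assumption: EM = (V_RBR − V_NLR) / V_NLR as a percentage, positive when NLR removes swelling-related distortion. The volumes in the example are hypothetical.

```python
def edema_metric(v_rbr_ml: float, v_nlr_ml: float) -> float:
    """Relative difference (%) between infarct volumes after rigid-body (RBR)
    and non-linear (NLR) registration to the presenting scan. NLR is assumed
    to correct distortion from swelling, so larger values mean more edema.
    (Assumed formulation, not taken from the abstract.)"""
    if v_nlr_ml <= 0:
        raise ValueError("NLR volume must be positive")
    return 100.0 * (v_rbr_ml - v_nlr_ml) / v_nlr_ml

# Hypothetical masks: 38 ml infarct after RBR vs 30 ml after NLR.
print(round(edema_metric(38.0, 30.0), 1))  # 26.7 (% swelling)
```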


BMJ Open ◽  
2020 ◽  
Vol 10 (10) ◽  
pp. e036484
Author(s):  
Chuong Huu Thieu Do ◽  
Malene Landbo Børresen ◽  
Freddy Karup Pedersen ◽  
Ronald Bertus Geskus ◽  
Alexandra Yasmin Kruse

Objectives To describe the characteristics of rehospitalisation in Vietnamese preterm infants and to examine the time to first readmission between two gestational age (GA) groups (extremely/very preterm (EVP) vs moderate/late preterm (MLP)); further, to compare rehospitalisation rates according to GA and corrected age (CA), and to examine the association between potential risk factors and rehospitalisation rates. Design and setting A cohort study following up preterm infants discharged from the neonatal intensive care unit (NICU) of a tertiary children’s hospital in Vietnam. Participants All preterm newborns admitted to the NICU from July 2013 to September 2014. Main outcomes Rates, durations, and causes of hospital admission during the first 2 years. Results Of 294 preterm infants admitted to the NICU (all outborn, GA 26 to 36 weeks), 255 were discharged alive, and 211 (83%) NICU graduates were followed up at least once during the first 2 years CA, of whom 56% were readmitted to hospital. The median (IQR) length of hospital stay was 7 (6–10) days. Respiratory diseases were the major cause (70%). Compared with MLP infants, EVP infants had a higher risk of first rehospitalisation within the first 6 months of age (p=0.01). However, the difference in risk declined thereafter and was similar from 20 months of age. There was an interaction in rehospitalisation rates between GA and CA. Longer duration of neonatal respiratory support and having older siblings were associated with higher rehospitalisation rates. Lower rates of rehospitalisation were seen in infants with higher cognitive and motor scores (not statistically significant for cognitive scores). Conclusions Hospital readmission of Vietnamese preterm infants discharged from the NICU was frequent during their first 2 years, mainly due to respiratory diseases. Scale-up of follow-up programmes for preterm infants is needed in low-income and middle-income countries, and attempts to prevent respiratory diseases should be considered.


Author(s):  
J.-F. Masson ◽  
Peter Collins ◽  
Sladana Bundalo-Perc ◽  
John R. Woods ◽  
Imad Al-Qadi

Bituminous crack sealants are used for the preventive maintenance of asphalt concrete pavements. The selection of a durable sealant can be difficult, however, mainly because of the lack of correlation between standard sealant specifications and field performance. Hence, an approved list of materials based on past performance is sometimes used to select sealants. However, sealant durability and performance vary over time. To investigate the effect of sealant lot variation on sealant properties, six lots of two sealants from different suppliers were analyzed for filler and polymer contents and rheological response. It was found that the difference in composition and rheology between lots can be similar to that between sealants produced by different manufacturers. Hence, sealant lot-to-lot variation can partly explain the variation in the field performance of sealants. Therefore, lists of approved products drawn from the field performance of past years are ineffective in the selection of sealants for future maintenance. The application of segregated sealants was also investigated, including assessing the effect of melter stirring on sealant homogeneity and measuring the segregation of sealant upon cooling. It was found that sealants do not segregate after their application and subsequent cooling and that a rapid circumferential stirring of 25 revolutions per minute in the heating kettle allowed for the remixing of a segregated sealant.


2018 ◽  
Vol 25 (1) ◽  
pp. 53 ◽  
Author(s):  
M. Dosani ◽  
S. Lucas ◽  
J. Wong ◽  
L. Weir ◽  
S. Lomas ◽  
...  

Background The Spinal Instability Neoplastic Score (SINS) was developed to identify patients with spinal metastases who may benefit from surgical consultation. We aimed to assess the distribution of SINS in a population-based cohort of patients undergoing palliative spine radiotherapy (RT) and referral rates to spinal surgery pre-RT. Secondary outcomes included referral to a spine surgeon post-RT, overall survival, maintenance of ambulation, need for re-intervention, and presence of spinal adverse events. Methods We retrospectively reviewed CT simulation scans and charts of consecutive patients receiving palliative spine RT between 2012 and 2013. Data were analyzed using Student’s t-test, Chi-squared, Fisher’s exact, and Kaplan-Meier log-rank tests. Patients were stratified into low (<7) and high (≥7) SINS groups. Results We included 195 patients with a follow-up of 6.1 months. The median SINS was 7. The score was 0 to 6 (low, no referral recommended), 7 to 12 (intermediate, consider referral), and 13 to 18 (high, referral suggested) in 34%, 59%, and 7% of patients, respectively. Eleven patients had pre-RT referral to spine surgery, with surgery performed in 0 of 1 patient with SINS 0 to 6, 1 of 7 with SINS 7 to 12, and 1 of 3 with SINS 13 to 18. Seven patients were referred to a surgeon post-RT, with salvage surgery performed in two of those patients. Primary and secondary outcomes did not differ between the low and high SINS groups. Conclusion Higher SINS was associated with pre-RT referral to a spine surgeon, but most patients with high SINS were not referred. Higher SINS was not associated with shorter survival or worse outcome following RT.
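The referral bands used in the study (0 to 6 low, 7 to 12 intermediate, 13 to 18 high) amount to a simple lookup on the total score; a minimal sketch, with an illustrative function name:

```python
def sins_category(score: int) -> str:
    """Map a total Spinal Instability Neoplastic Score (0-18) to the
    referral categories described in the study."""
    if not 0 <= score <= 18:
        raise ValueError("SINS totals range from 0 to 18")
    if score <= 6:
        return "low (no referral recommended)"
    if score <= 12:
        return "intermediate (consider referral)"
    return "high (referral suggested)"

print(sins_category(7))  # intermediate (consider referral)
```

With the study's median score of 7, the median patient falls just inside the "consider referral" band.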


2021 ◽  
pp. 219256822110469
Author(s):  
Zach Pennington ◽  
Jose L. Porras ◽  
Sheng-Fu Larry Lo ◽  
Daniel M. Sciubba

Study Design International survey. Objectives To assess variability in the treatment practices for spinal metastases as a function of practice setting, surgical specialty, and fellowship training among an international group of spine surgeons. Methods An anonymous internet-based survey was disseminated to the AO Spine membership. The questionnaire contained items on practice settings, fellowship training, indications used for spinal metastasis surgery, surgical strategies, multidisciplinary team use, and postoperative follow-up priorities and practice. Results 341 respondents gave complete answers to the survey, with 76.3% identifying spinal oncology as a practice focus and 95.6% treating spinal metastases. 80% use the Spinal Instability Neoplastic Score (SINS) to guide instrumentation decision-making, and 60.7% recruit multidisciplinary teams for some or all cases. Priorities for postoperative follow-up are adjuvant radiotherapy (80.9%) and systemic therapy (74.8%). Most schedule the first follow-up within 6 weeks of surgery (62.2%). Significant response heterogeneity was seen when stratifying by practice in an academic or university-affiliated center, practice in a cancer center, completion of a spine oncology fellowship, and self-identification as a tumor specialist. Respondents belonging to any of these categories were more likely to utilize SINS (P < .01-.02), recruit assistance from plastic surgeons (all P < .01), and incorporate radiation oncologists in postoperative care (P < .01-.03). Conclusions The largest variability in practice strategies is based upon practice setting, spine tumor specialization, and completion of a spine oncology fellowship. These respondents were more likely to use evidence-based practices. However, the response variability indicates the need for consensus building, particularly for postoperative spine metastasis care pathways and multidisciplinary team use.


2018 ◽  
Vol 25 (06) ◽  
pp. 805-809
Author(s):  
H. Sabiha Akhtar ◽  
Zia Farooq ◽  
Hassan Rathore ◽  
Muhammad Umar Farooq ◽  
Arooj Ahmad

Background: Cholecystectomy is the surgical removal of the inflamed gallbladder. Advancement in technology has led to many treatment options and methods of cholecystectomy, but the selection of the right method depends upon the severity of disease along with available resources and expertise. Objective: To compare the frequency of biliary leakage with clipless versus clipped laparoscopic cholecystectomy for management of cholecystitis. Material & Methods: Study design: Randomized controlled trial. Setting: Unit 1, Department of Surgery, Jinnah Hospital, Lahore. Duration: Six months, from July 2016 to January 2017. Data collection: A total of 130 patients were included in the study using non-probability consecutive sampling and were randomly divided into two groups by the lottery method. In group A, a clipless Harmonic scalpel was used along with Ultrasonic shear. In group B, conventional instruments were used with the application of clips. Patients were called for follow-up in the outpatient department after 1 week to assess biliary leakage through MRCP. All data were collected through a pre-designed proforma, then entered and analyzed in SPSS version 20. Results: The mean age of patients was 42.97±10.77 years; 68 (52.31%) patients were male and 62 (47.69%) were female. Biliary leakage was noted in 29 cases, 9 from the clipless group and 20 from the clipped group, and the difference was statistically significant (p-value=0.020). Conclusion: The frequency of biliary leakage is significantly higher with clipped laparoscopic cholecystectomy for management of cholecystitis. The clipless method is therefore encouraged to avoid this complication and improve surgical outcomes.

