The Influence of Renal Centre and Patient Sociodemographic Factors on Home Haemodialysis Prevalence in the UK

Nephron, 2017, Vol. 136 (2), pp. 62-74. Author(s): Anuradha Jayanti, Philip Foden, Alasdair Rae, Julie Morris, Paul Brenchley, ...

2021, Vol. 21 (1). Author(s): H. Burdett, N. T. Fear, S. Wessely, R. J. Rona

Abstract Background Around 8% of the UK Armed Forces leave in any given year and must navigate unfamiliar civilian systems to acquire employment, healthcare, and other necessities. This paper estimates longer-term prevalences of mental ill health and socioeconomic outcomes in UK Service leavers, and how they are related to demographic factors, military history, and pre-enlistment adversity. Methods This study utilised data from a longitudinal sample of a cohort study of UK Armed Forces personnel followed since 2003. A range of self-reported military and sociodemographic factors were analysed as predictors of probable Post-Traumatic Stress Disorder, common mental disorders, alcohol misuse, unemployment and financial hardship. Prevalences, and odds ratios for associations between predictors and outcomes, were estimated for regular veterans in this cohort. Results Veteran hardship was mostly associated with factors linked to socioeconomic status: age, education, and childhood adversity. Few military-specific factors predicted mental health or socioeconomic hardship, the exceptions being method of leaving (those leaving through medical or unplanned discharge were more likely to encounter most forms of hardship as veterans) and rank, which is itself related to socioeconomic status. Conclusion Transition and resettlement provisions become increasingly generous with longer service, yet this paper shows that the need for such provision declines as personnel acquire seniority and skills; support could instead be best targeted at unplanned leavers, taking socioeconomic status into consideration. Many will agree that longer service should be better rewarded, but the opposite follows if provision reflects need rather than length of service. This is a social, political and ethical dilemma.


2021, pp. e1-e9. Author(s): Dylan B. Jackson, Alexander Testa, Rebecca L. Fix, Tamar Mendelson

Objectives. To explore associations between police stops, self-harm, and attempted suicide among a large, representative sample of adolescents in the United Kingdom. Methods. Data were drawn from the 3 most recent sweeps of the UK Millennium Cohort Study (MCS), from 2012 to 2019. The MCS is an ongoing nationally representative contemporary birth cohort of children born in the United Kingdom between September 2000 and January 2002 (n = 10 345). Weights were used to account for the sample design, and multiple imputation was used for missing data. Results. Youths experiencing police stops by the age of 14 years (14.77%) reported significantly higher rates of self-harm (incidence rate ratio = 1.52; 95% confidence interval [CI] = 1.35, 1.69) at age 17 years and significantly higher odds of attempted suicide (odds ratio = 2.25; 95% CI = 1.84, 2.76) by age 17 years. These patterns were largely consistent across the examined features of police stops and generally did not vary by sociodemographic factors. In addition, 17.73% to 40.18% of the associations between police stops and outcomes were explained by mental distress. Conclusions. Police-initiated encounters are associated with youth self-harm and attempted suicide. Youths may benefit when school counselors or social workers provide mental health screenings and offer counseling care following these events. (Am J Public Health. Published online ahead of print September 23, 2021: e1–e9. https://doi.org/10.2105/AJPH.2021.306434 )
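As a rough illustration of the kind of analysis described above (not the authors' code), the sketch below fits a weighted Poisson model for the self-harm rate (exponentiated coefficients are incidence rate ratios) and a weighted logistic model for attempted suicide (exponentiated coefficients are odds ratios). The file name, variable names and covariates are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical MCS-style analysis file: one row per cohort member, with a police-stop
# indicator by age 14, the age-17 outcomes, a design weight and covariates.
df = pd.read_csv("mcs_analysis.csv")

# Self-harm modelled as a count/rate: Poisson GLM; exp(coefficient) is the IRR.
irr = smf.glm("selfharm_count ~ police_stop + sex + ethnicity + income_q",
              data=df, family=sm.families.Poisson(),
              freq_weights=df["svy_weight"]).fit()

# Attempted suicide is binary: binomial (logistic) GLM; exp(coefficient) is the OR.
orr = smf.glm("attempted_suicide ~ police_stop + sex + ethnicity + income_q",
              data=df, family=sm.families.Binomial(),
              freq_weights=df["svy_weight"]).fit()

# freq_weights is only a simple stand-in for the survey design; a full design-based
# analysis (and the multiple imputation step) would need dedicated survey tooling.
print(np.exp(irr.params["police_stop"]), np.exp(orr.params["police_stop"]))
```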


2020, Vol. 150 (8), pp. 2164-2174. Author(s): Marilyn C Cornelis, Sandra Weintraub, Martha Clare Morris

ABSTRACT Background Coffee and tea are the major contributors of caffeine in the diet. Evidence suggests that caffeine may benefit cognition. Objective We examined the associations of habitual regular coffee or tea and caffeine intake with cognitive function whilst additionally accounting for genetic variation in caffeine metabolism. Methods We included white participants aged 37–73 y from the UK Biobank who provided biological samples and completed touchscreen questionnaires regarding sociodemographic factors, medical history, lifestyle, and diet. Habitual intake of caffeine-containing coffee and tea was self-reported in cups/day and used to estimate caffeine intake. Between 97,369 and 445,786 participants with these data also completed ≥1 of 7 self-administered cognitive function tests using a touchscreen system (2006–2010) or on home computers (2014). Multivariable regressions were used to examine the associations between coffee, tea, or caffeine intake and cognitive test scores. We also tested interactions between coffee, tea, or caffeine intake and a genetic-based caffeine-metabolism score (CMS) on cognitive function. Results After multivariable adjustment, performance on reaction time, Pairs Matching, Trail Making test B, and symbol digit substitution decreased significantly with consumption of 1 or more cups of coffee (all tests P-trend < 0.0001). Tea consumption was associated with poorer performance on all tests (P-trend < 0.0001). No statistically significant CMS × tea, CMS × coffee, or CMS × caffeine interactions were observed. Conclusions Our findings, based on participants of the UK Biobank, provide little support for habitual consumption of regular coffee, tea, or caffeine improving cognitive function. On the contrary, we observed decrements in performance with intake of these beverages, which may be a result of confounding. Whether habitual caffeine intake affects cognitive function therefore remains to be tested.
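The gene–diet interaction test mentioned above amounts to adding a product term to a multivariable model. The sketch below is a minimal, hypothetical version of that idea: the file name, outcome, covariates and the CMS variable are assumptions, not the authors' analysis.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical UK Biobank extract: coffee cups/day, the genetic caffeine-metabolism
# score (CMS), one cognitive outcome (e.g. reaction time) and covariates.
df = pd.read_csv("ukb_cognition.csv")

# 'coffee_cups * cms' expands to both main effects plus the product (interaction) term.
model = smf.ols("reaction_time ~ coffee_cups * cms + age + sex + education + townsend",
                data=df).fit()

# The interaction coefficient and its p-value test whether the coffee association
# differs by genetically predicted caffeine metabolism.
print(model.params["coffee_cups:cms"], model.pvalues["coffee_cups:cms"])
```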


BMJ Open, 2019, Vol. 9 (11), pp. e032021. Author(s): Jennifer Cleland, Gordon Prescott, Kim Walker, Peter Johnston, Ben Kumwenda

Introduction Knowledge about the career decisions of doctors in relation to specialty (residency) training is essential for UK workforce planning. However, little is known about which doctors elect to progress directly from Foundation Year 2 (F2) into core/specialty/general practice training and which instead opt for an alternative next career step. Objective To identify whether there were any individual differences between these two groups of doctors. Design This was a longitudinal cohort study of ‘home’ students who graduated from UK medical schools between 2010 and 2015 and completed the Foundation Programme (FP) between 2012 and 2017. We used the UK Medical Education Database (UKMED) to access linked data from different sources, including medical school performance, specialty training applications and career preferences. Multivariable regression analyses were used to predict the odds of taking time out of training based on various sociodemographic factors. Results 18 380/38 905 (47.2%) of F2 doctors applied for, and accepted, a training post offer immediately after completing F2. The most common pattern for doctors taking time out of the training pathway after FP was a 1-year (7155; 38.8%) or 2-year (2605; 14.0%) break from training. The odds of not proceeding directly into core or specialty training were higher for those who were male, white, entered medical school as (high) school leavers, and whose parents were educated to degree level. Doctors from areas of low participation in higher education were significantly more likely (P = 0.001) to proceed directly into core or specialty training. Conclusion The results show that UK doctors from higher socioeconomic groups are less likely to progress directly from the FP into specialty training. The data suggest that widening access and encouraging greater socioeconomic diversity among medical students may help attract F2s into core/specialty training posts.
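As a hedged sketch of the multivariable approach described above (not the authors' UKMED code), the example below fits a logistic model for the odds of not proceeding directly into training and converts the coefficients into adjusted odds ratios with confidence intervals; every dataset and variable name is illustrative.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical extract: one row per F2 doctor, with a binary outcome 'not_direct'
# (1 = did not proceed straight into core/specialty training) and predictors.
df = pd.read_csv("f2_progression.csv")

model = smf.logit("not_direct ~ C(sex) + C(ethnicity) + C(entry_route) "
                  "+ C(parental_degree) + C(low_participation_area)",
                  data=df).fit()

# Exponentiating the coefficients (and their confidence limits) gives adjusted ORs.
odds_ratios = pd.concat([np.exp(model.params), np.exp(model.conf_int())], axis=1)
odds_ratios.columns = ["OR", "2.5%", "97.5%"]
print(odds_ratios)
```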


BMJ Open, 2019, Vol. 9 (12), pp. e033011. Author(s): Drew M Altschul, Christina Wraw, Catharine R Gale, Ian J Deary

Objectives We investigated how youth cognitive and sociodemographic factors are associated with the aetiology of overweight and obesity. We examined both onset (who is at early risk of overweight and obesity) and development (who gains weight, and when). Design Prospective cohort study. Setting We used data from the US National Longitudinal Study of Youth 1979 (NLSY) and the UK National Child Development Study (NCDS); most members of both cohorts completed a cognitive function test in youth. Participants 12 686 and 18 558 members of the NLSY and NCDS, respectively, with data on validated measures of youth cognitive function, youth socioeconomic disadvantage (eg, parental occupational class and time spent in school) and educational attainment. Height, weight and income data were available from across adulthood, from individuals’ 20s into their 50s. Primary and secondary outcome measures Body mass index (BMI) at four time points in adulthood. We modelled gain in BMI using latent growth curve models to capture linear and quadratic components of change in BMI over time. Results Across cohorts, higher cognitive function was associated with lower overall BMI. In the UK, a 1 SD higher score in cognitive function was associated with lower BMI (β=−0.20, 95% CI −0.33 to −0.06 kg/m²). In the USA, this was true only for women (β=−0.53, 95% CI −0.90 to −0.15 kg/m²). In British participants only, we found limited evidence for negative and positive associations, respectively, between education (β=−0.15, 95% CI −0.26 to −0.04 kg/m²) and socioeconomic disadvantage (β=0.33, 95% CI 0.23 to 0.43 kg/m²) and BMI. Overall, no cognitive or socioeconomic factors in youth were associated with longitudinal changes in BMI. Conclusions While sociodemographic and particularly cognitive factors can explain some patterns in individuals’ overall weight levels, differences in who gains weight in adulthood could not be explained by any of these factors.
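The growth modelling described above can be approximated with a random-intercept, random-slope mixed model containing linear and quadratic time terms. The sketch below shows that approximation rather than the authors' structural-equation latent growth curve, and all variable and file names are assumed.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per person per adult BMI measurement,
# with 'time' centred at the first adult wave and youth predictors repeated per row.
df = pd.read_csv("bmi_long.csv")

# Fixed effects: linear and quadratic change plus youth cognition and socioeconomic
# measures; random effects: person-specific intercepts and linear slopes, which play
# the role of the latent intercept and slope factors in a growth curve model.
model = smf.mixedlm("bmi ~ time + I(time**2) + cognition_z + disadvantage + education",
                    data=df, groups="id", re_formula="~time").fit()
print(model.summary())
```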


2021, Vol. 108 (Supplement_7). Author(s): Ricky Ellis, Duncan Scrimgeour, Jennifer Cleland, Amanda Lee, Peter Brennan

Abstract Aims UK medical schools vary in their mission, curricula and pedagogy, but little is known about the effect of this variation on postgraduate examination performance. We explored differences in outcomes in the Membership of the Royal College of Surgeons examination (MRCS) by medical school, course type, national ranking and candidate sociodemographic factors. Methods A retrospective longitudinal study of all UK medical graduates who attempted MRCS Part A (n = 9730) and MRCS Part B (n = 4645) between 2007 and 2017, utilising the UK Medical Education Database (https://www.ukmed.ac.uk). We examined the relationship between medical school and success at first attempt of the MRCS using univariate analysis. Logistic regression modelling was used to identify independent predictors of MRCS success. Results MRCS pass rates differed significantly between medical schools (P < 0.001). Russell Group graduates were more likely to pass MRCS Part A (odds ratio (OR) 1.79 [95% confidence interval (CI) 1.56-2.05]) and Part B (OR 1.24 [1.03-1.49]). Trainees from Standard-Entry 5-year programmes were more likely to pass the MRCS at first attempt than those from extended (Gateway) courses (Part A OR 3.72 [2.69-5.15]; Part B OR 1.67 [1.02-2.76]). Candidates who entered medical school as non-graduates were more likely to pass Part A (OR 1.40 [1.19-1.64]) and Part B (OR 1.66 [1.24-2.24]) than graduate entrants. Conclusion Medical school, course type and sociodemographic factors are associated with success in the MRCS. This information will help to identify surgical trainees at risk of failing the MRCS, so that schools of surgery can redistribute resources to those in need.
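A minimal sketch of the two analysis steps described above (a univariate comparison of pass rates across schools, then multivariable logistic regression for independent predictors) is given below; the dataset and variable names are hypothetical and not taken from UKMED.

```python
import pandas as pd
from scipy.stats import chi2_contingency
import statsmodels.formula.api as smf

# Hypothetical extract: one row per candidate, with medical school, course type,
# graduate-entry status and first-attempt MRCS Part A result (0/1).
df = pd.read_csv("mrcs_cohort.csv")

# Univariate step: do first-attempt pass rates differ across medical schools?
table = pd.crosstab(df["medical_school"], df["passed_part_a"])
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2={chi2:.1f}, dof={dof}, p={p:.2g}")

# Multivariable step: independent predictors of passing at first attempt.
model = smf.logit("passed_part_a ~ C(course_type) + C(graduate_entry) + C(sex) + imd_q",
                  data=df).fit()
print(model.summary())
```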


2019, Vol. 20 (1). Author(s): Damien Ashby, Natalie Borman, James Burton, Richard Corbett, Andrew Davenport, ...

Abstract This guideline is written primarily for doctors and nurses working in dialysis units and related areas of medicine in the UK, and is an update of a previous version written in 2009. It aims to provide guidance on how to look after patients and how to run dialysis units, and provides standards which units should in general aim to achieve. We would not advise patients to interpret the guideline as a rulebook, but perhaps to answer the question: “what does good quality haemodialysis look like?” The guideline is split into sections: each begins with a few statements which are graded by strength (1 is a firm recommendation, 2 is more like a sensible suggestion) and by the type of research available to back up the statement, ranging from A (good quality trials so we are pretty sure this is right) to D (more like the opinion of experts than known for sure). After the statements there is a short summary explaining why we think this, often including a discussion of some of the most helpful research. There is then a list of the most important medical articles so that you can read further if you want to – most of this is freely available online, at least in summary form.

A few notes on the individual sections:

This section is about how much dialysis a patient should have. The effectiveness of dialysis varies between patients because of differences in body size and age etc., so different people need different amounts, and this section gives guidance on what defines “enough” dialysis and how to make sure each person is getting that. Quite a bit of this section is very technical; for example, the term “eKt/V” is often used: this is a calculation based on blood tests before and after dialysis, which measures the effectiveness of a single dialysis session in a particular patient (a worked example of this calculation is sketched after this summary).

This section deals with “non-standard” dialysis, which basically means anything other than 3 times per week. For example, a few people need 4 or more sessions per week to keep healthy, and some people are fine with only 2 sessions per week – this is usually people who are older, or those who have only just started dialysis. Special considerations for children and pregnant patients are also covered here.

This section deals with membranes (the type of “filter” used in the dialysis machine) and “HDF” (haemodiafiltration), which is a more complex kind of dialysis which some doctors think is better. Studies are still being done, but at the moment we think it is as good as, but not better than, regular dialysis.

This section deals with fluid removal during dialysis sessions: how to remove enough fluid without causing cramps and low blood pressure. Amongst other recommendations we advise close collaboration with patients over this.

This section deals with dialysate, which is the fluid used to “pull” toxins out of the blood (it is sometimes called the “bath”). The level of things like potassium in the dialysate is important, otherwise too much or too little may be removed. There is a section on dialysate buffer (bicarbonate) and also a section on phosphate, which occasionally needs to be added into the dialysate.

This section is about anticoagulation (blood thinning), which is needed to stop the circuit from clotting but sometimes causes side effects.

This section is about certain safety aspects of dialysis; it does not seek to replace well-established local protocols, but focuses on a few areas where we thought some national-level guidance would be useful.

This section draws together a few aspects of dialysis which don’t easily fit elsewhere, and which affect how dialysis feels to patients rather than the medical outcome, though of course the two are linked. This is where home haemodialysis and exercise are covered.

There is an appendix at the end which covers a few aspects in more detail, especially the mathematical ideas. Several aspects of dialysis are not included in this guideline since they are covered elsewhere, often because they affect non-dialysis patients too. This includes: anaemia, calcium and bone health, high blood pressure, nutrition, infection control, vascular access, transplant planning, and when dialysis should be started.
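For readers unfamiliar with eKt/V, the sketch below illustrates one commonly used way of estimating it from pre- and post-dialysis blood tests: the Daugirdas second-generation single-pool formula followed by a rate-equation correction for post-dialysis urea rebound. This is an illustration only, with made-up example numbers; the guideline's appendix defines the exact methods it recommends.

```python
import math

def sp_ktv(pre_urea, post_urea, hours, uf_litres, post_weight_kg):
    """Single-pool Kt/V via the Daugirdas second-generation formula.
    Illustrative only; not necessarily the guideline's prescribed method."""
    r = post_urea / pre_urea  # post/pre urea ratio from the two blood tests
    return -math.log(r - 0.008 * hours) + (4 - 3.5 * r) * uf_litres / post_weight_kg

def e_ktv(sp, hours):
    """Equilibrated Kt/V from the single-pool value (Daugirdas rate equation,
    arteriovenous-access variant), accounting for urea rebound after the session."""
    return sp - 0.6 * (sp / hours) + 0.03

# Hypothetical example: pre 25 mmol/L, post 7 mmol/L, 4 h session, 2 L removed, 70 kg.
sp = sp_ktv(25.0, 7.0, 4.0, 2.0, 70.0)
print(round(sp, 2), round(e_ktv(sp, 4.0), 2))
```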


2011, Vol. 4 (suppl 3), pp. iii1-iii3. Author(s): S. Mitra, M. Brady, D. O'Donoghue

2013, Vol. 17 (1), pp. 20-30. Author(s): Lukar E Thornton, Jamie R Pearce, Kylie Ball

Abstract Objective To investigate the associations between sociodemographic factors and both diet indicators and food security among socio-economically disadvantaged populations in two different national contexts. Design Logistic regression was used to determine cross-sectional associations of nationality, marital status, presence of children in the household, education, employment status and household income (four low-income categories) with daily fruit and vegetable consumption, low-fat milk consumption and food security. Setting Socio-economically disadvantaged neighbourhoods in the UK and Australia. Subjects Two samples of low-income women from disadvantaged neighbourhoods: (i) in the UK, the 2003–05 Low Income Diet and Nutrition Survey (LIDNS; n 643); and (ii) in Australia, the 2007–08 Resilience for Eating and Activity Despite Inequality (READI; n 1340) study. Results The influence of nationality, marital status and children in the household on the dietary outcomes varied between the two nations. Holding higher educational qualifications was the factor most clearly associated with healthier dietary behaviours. Being employed was positively associated with low-fat milk consumption in both nations and with fruit consumption in the UK, while income was not associated with dietary behaviours in either nation. In Australia, the likelihood of being food secure was higher among those who were born outside Australia, married, employed or on a higher income, while higher income was the only significant factor in the UK. Conclusions The identification of factors that differently influence dietary behaviours and food security in socio-economically disadvantaged populations in the UK and Australia suggests that continued efforts are needed to ensure that interventions and policy responses are informed by the best available local evidence.


Author(s): Lamiece Hassan, Niels Peek, Karina Lovell, Andre F. Carvalho, Marco Solmi, ...

Abstract People with severe mental illness (SMI; including schizophrenia/psychosis, bipolar disorder (BD) and major depressive disorder (MDD)) experience large disparities in physical health. Emerging evidence suggests this group also experiences higher risks of infection and death from COVID-19, although the full extent of these disparities is not yet established. We investigated COVID-19-related infection, hospitalisation and mortality among people with SMI in the UK Biobank (UKB) cohort study. Overall, 447,296 participants from UKB (schizophrenia/psychosis = 1925, BD = 1483, MDD = 41,448, non-SMI = 402,440) were linked with healthcare and death records. Multivariable logistic regression analysis was used to examine differences in COVID-19 outcomes by diagnosis, controlling for sociodemographic factors and comorbidities. In unadjusted analyses, higher odds of COVID-19 mortality were seen among people with schizophrenia/psychosis (odds ratio [OR] 4.84, 95% confidence interval [CI] 3.00–7.34), BD (OR 3.76, 95% CI 2.00–6.35) and MDD (OR 1.99, 95% CI 1.69–2.33) compared with people with no SMI. Higher odds of infection and hospitalisation were also seen across all SMI groups, particularly among people with schizophrenia/psychosis (OR 1.61, 95% CI 1.32–1.96; OR 3.47, 95% CI 2.47–4.72) and BD (OR 1.48, 95% CI 1.16–1.85; OR 3.31, 95% CI 2.22–4.73). In fully adjusted models, the odds of mortality and hospitalisation remained significantly higher among all SMI groups, though the odds of infection remained significantly higher only for MDD. People with schizophrenia/psychosis, BD and MDD therefore have higher risks of COVID-19 infection, hospitalisation and mortality, and only a proportion of these disparities was accounted for by pre-existing demographic characteristics or comorbidities. Vaccination and preventive measures should be prioritised in these particularly vulnerable groups.
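As an illustration of the unadjusted versus fully adjusted comparison described above (not the authors' code), the sketch below fits both logistic models and exponentiates the coefficients; the dataset, variable names and the 'none' reference group are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical UK Biobank linkage extract: covid_death (0/1), an SMI diagnosis group
# (levels such as 'schizophrenia', 'bipolar', 'mdd', 'none') and confounders.
df = pd.read_csv("ukb_smi_covid.csv")

# Crude (unadjusted) model: diagnosis group only, with 'none' as the reference level.
unadj = smf.logit("covid_death ~ C(smi_group, Treatment('none'))", data=df).fit()

# Fully adjusted model adds sociodemographic factors and comorbidity burden.
adj = smf.logit("covid_death ~ C(smi_group, Treatment('none')) + age + C(sex) "
                "+ C(ethnicity) + townsend + n_comorbidities", data=df).fit()

# Comparing exponentiated coefficients shows how much of the crude disparity is
# explained by the adjustment variables.
print(np.exp(unadj.params), np.exp(adj.params), sep="\n")
```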

