The use of Cohort Size Shrinkage Index (CSSI) to quantify regional famine intensity during the Chinese famine of 1959-61

Author(s):  
Chunyu Liu ◽  
Chihua Li ◽  
Zhenwei Zhou ◽  
Hongwei Xu ◽  
L. H. Lumey

There has been growing interest in studying the causes and impact of the Great Chinese Famine of 1959-61. The Cohort Size Shrinkage Index (CSSI) is the most widely used measure of famine intensity and has been used in at least 28 Chinese famine studies to date. We examined the potential impact of violations of three requirements for a valid CSSI measure: reliable information on cohort size by year of birth; a stable trend of cohort size by year of birth; and the absence of significant regional migration. We used data from the 1% sample of the China 2000 Census to examine the trend of cohort size over time and concentrated on the 1950-70 time window to exclude policies and events other than the famine itself with a large impact on birth trends. Across China we established a significant difference in cohort size trends between pre-famine and post-famine births, violating one of the main requirements for a valid CSSI measure. This leads to systematic differences in CSSI depending on which non-famine years are selected for comparison. At the province level, CSSIs estimated from pre- & post-famine births tend to overestimate famine intensity at higher exposure levels and underestimate it at lower levels compared to CSSIs based on pre-famine births alone. This is problematic and demonstrates that the CSSI is not as robust an estimator of famine intensity as previously assumed. We therefore recommend that all CSSIs be based on pre-famine birth trends. Using data from Sichuan province, we demonstrate a less pronounced dose-response relation between famine intensity and tuberculosis outcomes using pre-famine-based CSSIs than the reported patterns based on pre- & post-famine-based CSSIs. We encourage researchers to re-examine the results of Chinese famine studies, as local differences in cohort size between pre-famine and post-famine births may lead to significant discrepancies in CSSI estimation and change the interpretation of findings.
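
For readers unfamiliar with the index, the sketch below illustrates one common way a CSSI can be computed from counts of births by year, and contrasts a reference level built from pre- and post-famine cohorts with one extrapolated from the pre-famine trend alone, the distinction at issue in this study. The cohort counts, year windows and linear-trend extrapolation are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np
import pandas as pd

def cssi(births: pd.Series, famine_years=(1959, 1960, 1961),
         pre_years=range(1950, 1958), post_years=range(1962, 1966),
         use_post=True):
    """Proportional shortfall of famine-cohort size relative to a reference level.

    `births` is indexed by year of birth and holds cohort sizes (e.g. counts
    from a census sample).  With use_post=True the reference is the mean of
    pre- and post-famine cohorts (the common practice discussed above); with
    use_post=False the reference is a linear trend fitted to pre-famine
    cohorts only and extrapolated into the famine years.
    """
    famine_mean = births.loc[list(famine_years)].mean()
    if use_post:
        expected = births.loc[list(pre_years) + list(post_years)].mean()
    else:
        years = np.array(list(pre_years), dtype=float)
        slope, intercept = np.polyfit(years, births.loc[list(pre_years)], deg=1)
        expected = np.mean([slope * y + intercept for y in famine_years])
    return (expected - famine_mean) / expected

# Hypothetical county-level cohort sizes by year of birth, 1950-1965
counts = pd.Series(
    [100, 102, 104, 106, 108, 110, 112, 114, 113, 70, 55, 62, 130, 126, 122, 119],
    index=range(1950, 1966), dtype=float)
print(cssi(counts, use_post=True), cssi(counts, use_post=False))
```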

2021 ◽  
Vol 37 (2) ◽  
Author(s):  
Shan Cao ◽  
Hui Dong

Objective: To investigate the efficacy and safety of endovascular treatment in patients with acute ischemic stroke beyond the time window who show a DWI-FLAIR mismatch. Methods: From January 2018 to January 2020, 80 patients who met the research criteria at the First Central Hospital of Baoding, China were selected. According to the time of onset, they were divided into a test group and a control group, with 40 cases in each. The 40 patients in the test group were beyond the time window (6-24 h) with MRI showing a DWI-FLAIR mismatch; the 40 patients in the control group were within the time window (<6 h). All patients received endovascular treatment (EVT). The mRS, NIHSS and infarct volume of patients in the test group were compared before treatment and at 30 and 90 days after treatment, and the same indicators were compared between the two groups before and after treatment, to determine the therapeutic effect of EVT beyond the time window. In addition, vessel recanalization and the incidence of cerebral hemorrhage were compared between the groups to determine the safety of EVT beyond the time window under DWI-FLAIR mismatch. Results: The mRS, NIHSS and infarct size in the test group were significantly improved at 30 and 90 days after treatment compared with baseline (p<0.05). The test group showed no significant difference in mRS, NIHSS and other indicators when compared with the control group (p>0.05), and there was no significant difference between the groups in the rates of vessel recanalization and intracranial hemorrhage after treatment (p>0.05). Conclusion: DWI-FLAIR mismatch can be used as an objective imaging basis for endovascular interventional therapy in stroke patients beyond the time window with large-vessel occlusion. It has the advantages of short examination time, non-invasiveness, no need for contrast agents, simple implementation and clear guidance. doi: https://doi.org/10.12669/pjms.37.2.3293
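
The abstract does not specify which statistical tests were applied; as a rough illustration of the between-group safety comparison, one might tabulate recanalization and intracranial hemorrhage counts per group and compare proportions with Fisher's exact test. All counts below are invented.

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 tables: [events, non-events] for the test group (beyond the
# time window, DWI-FLAIR mismatch) and the control group (within the time window)
recanalization = [[34, 6], [36, 4]]
haemorrhage = [[5, 35], [4, 36]]

for name, table in [("recanalization", recanalization),
                    ("intracranial haemorrhage", haemorrhage)]:
    odds_ratio, p = fisher_exact(table)
    print(f"{name}: OR = {odds_ratio:.2f}, p = {p:.3f}")
```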


1967 ◽  
Vol 56 (4) ◽  
pp. 619-625 ◽  
Author(s):  
Hans Jacob Koed ◽  
Christian Hamburger

ABSTRACT Comparison of the dose-response curves for LH of ovine origin (NIH-LH-S8) and of human origin (IRP-HMG-2) using the OAAD test showed a small, though statistically significant, difference, the dose-response curve for LH of human origin being a little flatter. Two standard curves for ovine LH, obtained 14 months apart, were parallel but at different levels of ovarian ascorbic acid. When the mean ascorbic acid depletions were calculated as percentages of the control levels, the two curves for NIH-LH-S8 were identical. The use of standards of human origin in the OAAD test for LH activity of human preparations is recommended.
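
As a minimal sketch of the normalization described above (not the authors' original computation), ascorbic acid depletion can be re-expressed as a percentage of the control level before comparing standard-curve slopes on the log-dose scale; all values are invented.

```python
import numpy as np

# Hypothetical OAAD assay data: ovarian ascorbic acid by dose of NIH-LH-S8
control_level = 52.0                        # mean level in control animals
doses = np.array([0.25, 0.5, 1.0, 2.0])     # arbitrary dose units
treated = np.array([46.0, 41.0, 35.0, 29.0])

# Depletion expressed as a percentage of the control level
depletion_pct = 100.0 * (control_level - treated) / control_level

# Slope of the dose-response line on the log-dose scale; parallel standard
# curves should yield similar slopes once normalised this way
slope, intercept = np.polyfit(np.log10(doses), depletion_pct, deg=1)
print(depletion_pct.round(1), round(slope, 1))
```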


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Alan Feiveson ◽  
Kerry George ◽  
Mark Shavers ◽  
Maria Moreno-Villanueva ◽  
Ye Zhang ◽  
...  

Abstract Space radiation consists of energetic protons and other heavier ions. During the International Space Station program, chromosome aberrations in lymphocytes of astronauts have been analyzed to estimate received biological doses of space radiation. More specifically, pre-flight blood samples were exposed ex vivo to varying doses of gamma rays, while post-flight blood samples were collected shortly and several months after landing. Here, in a study of 43 crew-missions, we investigated whether individual radiosensitivity, as determined by the ex vivo dose–response of the pre-flight chromosome aberration rate (CAR), contributes to the prediction of the post-flight CAR incurred from the radiation exposure during missions. Random-effects Poisson regression was used to estimate subject-specific radiosensitivities from the pre-flight dose–response data, which were in turn used to predict post-flight CAR and subject-specific relative biological effectiveness (RBE) values between space radiation and gamma radiation. The covariates age and gender were also considered. Results indicate that there is predictive value in background CAR as well as radiosensitivity determined pre-flight for explaining individual differences in post-flight CAR over and above that which could be explained by BFO dose alone. The in vivo RBE for space radiation was estimated to be approximately 3 relative to the ex vivo dose response to gamma irradiation. In addition, pre-flight radiosensitivity tended to be higher for individuals having a higher background CAR, suggesting that individuals with greater radiosensitivity may also be more sensitive to other environmental stressors encountered in daily life. We also noted that both background CAR and radiosensitivity tend to increase with age, although both are highly variable. Finally, we observed no significant difference between the observed CAR shortly after mission and at > 6 months post-mission.
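
A full random-effects Poisson fit requires specialised software, but the idea of extracting a background rate and a subject-specific radiosensitivity from the pre-flight ex vivo dose-response can be sketched with per-subject Poisson GLMs, as below. The data, column names and per-subject simplification are illustrative assumptions rather than the authors' model.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical pre-flight ex vivo data: aberrations scored per subject at
# several gamma-ray doses (per a fixed number of cells examined)
preflight = pd.DataFrame({
    "subject": ["A"] * 4 + ["B"] * 4,
    "dose_gy": [0.0, 0.5, 1.0, 2.0] * 2,
    "aberrations": [2, 6, 11, 25, 1, 4, 7, 15],
})

# Stand-in for the random-effects model: a separate Poisson GLM per subject,
# log(rate) = b0 + b1 * dose, where b1 plays the role of that subject's
# radiosensitivity and b0 reflects the background aberration rate
for subject, grp in preflight.groupby("subject"):
    X = sm.add_constant(grp["dose_gy"])
    fit = sm.GLM(grp["aberrations"], X, family=sm.families.Poisson()).fit()
    b0, b1 = fit.params
    print(f"subject {subject}: background log-rate {b0:.2f}, radiosensitivity {b1:.2f} per Gy")
```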


2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Madison E. Andrews ◽  
Anita D. Patrick ◽  
Maura Borrego

Abstract Background Students’ attitudinal beliefs related to how they see themselves in STEM have been a focal point of recent research, given their well-documented links to retention and persistence. These beliefs are most often assessed cross-sectionally, and as such, we lack a thorough understanding of how they may fluctuate over time. Using matched survey responses from undergraduate engineering students (n = 278), we evaluate if, and to what extent, students’ engineering attitudinal beliefs (attainment value, utility value, self-efficacy, interest, and identity) change over a one-year period. Further, we examine whether there are differences based on gender and student division, and then compare results between cross-sectional and longitudinal analyses to illustrate weaknesses in our current understanding of these constructs. Results Our study revealed inconsistencies between cross-sectional and longitudinal analyses of the same dataset. Cross-sectional analyses indicated a significant difference by student division for engineering utility value and engineering interest, but no significant differences by gender for any variable. However, longitudinal analyses revealed statistically significant decreases in engineering utility value, engineering self-efficacy, and engineering interest for lower division students and significant decreases in engineering attainment value for upper division students over a one-year period. Further, longitudinal analyses revealed a gender gap in engineering self-efficacy for upper division students, where men reported higher means than women. Conclusions Our analyses make several contributions. First, we explore attitudinal differences by student division not previously documented. Second, by comparing across methodologies, we illustrate that different conclusions can be drawn from the same data. Since the literature around these variables is largely cross-sectional, our understanding of students’ engineering attitudes is limited. Our longitudinal analyses show variation in engineering attitudinal beliefs that is obscured when data are only examined cross-sectionally. These analyses revealed an overall downward trend within students for all beliefs that changed significantly, losses which may foreshadow attrition out of engineering. These findings provide an opportunity to introduce targeted interventions to build engineering utility value, engineering self-efficacy, and engineering interest for student groups whose means were lower than average.
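
The abstract does not report the specific tests used; the methodological contrast the authors draw can be illustrated roughly as follows, where a cross-sectional analysis treats the two survey waves as independent samples while a longitudinal analysis tests within-student change on matched responses. The data are simulated for illustration.

```python
import numpy as np
from scipy.stats import ttest_ind, ttest_rel

rng = np.random.default_rng(0)

# Hypothetical engineering-interest scores (1-5 scale) for the same students
# at the start and end of one year, with a small uniform within-student decline
time1 = rng.normal(4.0, 0.5, size=100)
time2 = time1 - 0.10 + rng.normal(0.0, 0.2, size=100)

# Cross-sectional view: treat the two waves as independent samples
t_cs, p_cs = ttest_ind(time1, time2)

# Longitudinal view: test the within-student change on matched responses
t_long, p_long = ttest_rel(time1, time2)

print(f"cross-sectional p = {p_cs:.3f}, longitudinal p = {p_long:.3f}")
# The paired test is far more sensitive to a small uniform decline than the
# independent-samples comparison, mirroring the pattern reported above.
```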


Author(s):  
Andrea Maugeri ◽  
Martina Barchitta ◽  
Roberta Magnano San Lio ◽  
Maria Clara La Rosa ◽  
Claudia La Mastra ◽  
...  

Several studies—albeit with still inconclusive and limited findings—began to focus on the effect of drinking alcohol on telomere length (TL). Here, we present results from a systematic review of these epidemiological studies to investigate the potential association between alcohol consumption, alcohol-related disorders, and TL. The analysis of fourteen studies—selected from PubMed, Medline, and Web of Science databases—showed that people with alcohol-related disorders exhibited shorter TL, but also that alcohol consumption per se did not appear to affect TL in the absence of alcohol abuse or dependence. Our work also revealed a lack of studies in the periconceptional period, raising the need for evaluating this potential relationship during pregnancy. To fill this gap, we conducted a pilot study using data and samples from the Mamma & Bambino cohort. We compared five non-smoking but drinking women with ten non-smoking and non-drinking women, matched for maternal age, gestational age at recruitment, pregestational body mass index, and fetal sex. Interestingly, we detected a significant difference when analyzing relative TL of leukocyte DNA of cord blood samples from newborns. In particular, newborns from drinking women exhibited shorter relative TL than those born from non-drinking women (p = 0.024). Although these findings appeared promising, further research should be encouraged to test any dose–response relationship, to adjust for the effect of other exposures, and to understand the molecular mechanisms involved.
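
The test used for the pilot comparison is not named in the abstract; with five exposed and ten matched control newborns, a small-sample nonparametric comparison such as the Mann-Whitney U test is one plausible choice, sketched below with invented T/S ratios.

```python
from scipy.stats import mannwhitneyu

# Hypothetical relative telomere length (T/S ratio) of cord-blood leukocyte DNA
tl_drinking = [0.82, 0.79, 0.91, 0.75, 0.88]            # 5 exposed newborns
tl_non_drinking = [1.02, 0.97, 1.10, 0.94, 1.05,
                   0.99, 1.08, 0.93, 1.01, 0.96]        # 10 matched controls

stat, p = mannwhitneyu(tl_drinking, tl_non_drinking, alternative="two-sided")
print(f"U = {stat}, p = {p:.3f}")
```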


2021 ◽  
Vol 5 (1) ◽  
Author(s):  
Åsa Kettis ◽  
Hanna Fagerlind ◽  
Jan-Erik Frödin ◽  
Bengt Glimelius ◽  
Lena Ring

Abstract Background Effective patient-physician communication can improve patient understanding, agreement on treatment and adherence. This may, in turn, impact on clinical outcomes and patient quality of life (QoL). One way to improve communication is by using patient-reported outcome measures (PROMs). Heretofore, studies of the impact of using PROMs in clinical practice have mostly evaluated the use of standardized PROMs. However, there is reason to believe that individualized instruments may be more appropriate for this purpose. The aim of this study is to compare the effectiveness of the standardized QoL-instrument, the European Organization for Research and Treatment of Cancer Quality of Life C-30 (EORTC-QOL-C30) and the individualized QoL instrument, the Schedule for the Evaluation of Individual Quality of Life-Direct Weighting (SEIQoL-DW), in clinical practice. Methods In a prospective, open-label, controlled intervention study at two hospital out-patient clinics, 390 patients with gastrointestinal cancer were randomly assigned either to complete the EORTC-QOL-C30 or the SEIQoL-DW immediately before the consultation, with their responses being shared with their physician. This was repeated in 3–5 consultations over a period of 4–6 months. The primary outcome measure was patients’ health-related QoL, as measured by FACIT-G. Patients’ satisfaction with the consultation and survival were secondary outcomes. Results There was no significant difference between the groups with regard to study outcomes. Neither intervention instrument resulted in any significant changes in health-related QoL, or in any of the secondary outcomes, over time. This may reflect either a genuine lack of effect or sub-optimization of the intervention. Since there was no comparison to standard care, an effect in terms of lack of deterioration over time cannot be excluded. Conclusions Future studies should focus on the implementation process, including the training of physicians to use the instruments and their motivation for doing so. The effects of situational use of standardized or individualized instruments should also be explored. The effectiveness of the different approaches may depend on contextual factors including physician and patient preferences.


1991 ◽  
Vol 68 (3_suppl) ◽  
pp. 1283-1290 ◽  
Author(s):  
P. A. Holland ◽  
I. Bowskill ◽  
A. Bailey

The hypothesis tested was that predictable differences would exist between the mean cognitive style of new entrants and that of longer-serving “established” employees in certain departments but not in others. Data from 99 employees in four departments of a large British pharmaceuticals company who completed the Kirton Adaption-Innovation Inventory provided results broadly in line with the expectations of adaption-innovation theory and past research. The mean innovative cognitive style of new entrants to adaptive departments regressed towards the mean of the establishment and the occupational mean over time. In departments where there was no initial significant difference between the mean cognitive style of the new entrants and the established group, no significant shift was shown over time. Implications of these findings are suggested. The data also indicated norms for two occupational groups where previously they did not exist.


Open Heart ◽  
2021 ◽  
Vol 8 (1) ◽  
pp. e001600
Author(s):  
Joanne Kathryn Taylor ◽  
Haarith Ndiaye ◽  
Matthew Daniels ◽  
Fozia Ahmed

Aims: In response to the COVID-19 pandemic, the UK was placed under strict lockdown measures on 23 March 2020. The aim of this study was to quantify the effects on physical activity (PA) levels using data from the prospective Triage-HF Plus Evaluation study. Methods: This study represents a cohort of adult patients with implanted cardiac devices capable of measuring activity by embedded accelerometry via a remote monitoring platform. Activity data were available for the 4 weeks pre- and post-implementation of ‘stay at home’ lockdown measures in the form of ‘minutes active per day’ (min/day). Results: Data were analysed for 311 patients (77.2% men, mean age 68.8, frailty 55.9%; 92.2% established heart failure (HF) diagnosis, of these 51.2% New York Heart Association class II), with comorbidities representative of a real-world cohort. Post-lockdown, a significant reduction in median PA equating to 20.8 active min/day was seen. The reduction was uniform, with a slightly more pronounced drop in PA for women, but no statistically significant difference with respect to age, body mass index, frailty or device type. Activity dropped in the immediate 2-week period post-lockdown, but steadily returned thereafter. Median activity at 4 weeks post-lockdown remained significantly lower than at 4 weeks pre-lockdown (p≤0.001). Conclusions: In a population of predominantly HF patients with cardiac devices, activity was reduced by approximately 20 active min/day in the immediate aftermath of strict COVID-19 lockdown measures. Trial registration number: NCT04177199.
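
The abstract does not detail the statistical procedure; one plausible reading, sketched below with simulated data, is to aggregate each patient's device-derived minutes active per day over the 4-week windows before and after lockdown and compare the paired values with a Wilcoxon signed-rank test.

```python
import numpy as np
import pandas as pd
from scipy.stats import wilcoxon

rng = np.random.default_rng(1)
n_patients = 311

# Hypothetical per-patient average 'minutes active per day' over the 4 weeks
# before and after 23 March 2020, as aggregated from device accelerometry
pre = rng.normal(150, 40, size=n_patients).clip(min=0)
post = (pre - 20.8 + rng.normal(0, 15, size=n_patients)).clip(min=0)

df = pd.DataFrame({"pre_lockdown": pre, "post_lockdown": post})
stat, p = wilcoxon(df["pre_lockdown"], df["post_lockdown"])
print(df.median().round(1).to_dict(), f"p = {p:.3g}")
```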


2021 ◽  
Vol 80 (Suppl 1) ◽  
pp. 480-480
Author(s):  
S. S. Zhao ◽  
E. Nikiphorou ◽  
A. Young ◽  
P. Kiely

Background: Rheumatoid arthritis (RA) is classically described as a symmetric small joint polyarthritis with additional involvement of large joints. There is a paucity of information concerning the time course of damage in large joints, such as shoulder, elbow, hip, knee and ankle, from early to established RA, or of the influence of Rheumatoid Factor (RF) status. There is a historic perception that patients who do not have RF follow a milder, less destructive course, which might promote less aggressive treatment strategies in RF-negative patients. The historic nature of the Early Rheumatoid Arthritis Study (ERAS) provides a unique opportunity to study RA in the context of less aggressive treatment strategies. Objectives: To examine the progression of large joint involvement from early to established RA in terms of range of movement (ROM) and time to joint surgery, according to the presence of RF. Methods: ERAS was a multi-centre inception cohort of newly diagnosed RA patients (<2 years disease duration, csDMARD naive), recruited from 1985-2001 with yearly follow-up for up to 25 (median 10) years. First-line treatment was csDMARD monotherapy with/without steroids, favouring sulphasalazine for the majority. Outcome data were recorded at baseline, at 12 months and then once yearly. Patients were deemed RF negative if all repeated assessments were negative. ROM of individual shoulder, elbow, wrist, hip, knee, ankle and hindfeet joints was collected at 3, 5, 9 and 12-15 years. The rate of progression from normal to any loss of ROM from years 3 to 14 was modelled using GEE, adjusting for confounders. Radiographs of wrists taken at years 0, 1, 2, 3, 5, 7, 9 were scored according to the Larsen method. Change in the Larsen wrist damage score was modelled using GEE as a continuous variable, while the erosion score was dichotomised into present/absent. Surgical procedure data were obtained by linking to Hospital Episodes Statistics and the National Joint Registry. Time to joint surgery was analysed using multivariable Cox models. Results: A total of 1458 patients from the ERAS cohort were included (66% female, mean age 55 years) and 74% were RF-positive. The prevalence of any loss of ROM from year 3 through to 14 was highest in the wrist, followed by ankle, knee, elbow and hip. The proportion of patients at year 9 with greater than 25% loss of ROM was: wrist 30%, ankle 12%, elbow 7%, knee 7% and hip 5%. Odds of loss of ROM increased over time in all joint regions, at around 7 to 13% per year from year 3 to 14. There was no significant difference between RF-positive and RF-negative patients (see Figure 1). Larsen erosion and damage scores at the wrists progressed in all patients; annual odds of developing any erosions were higher in RF-positive patients, OR 1.28 (95% CI 1.24-1.32), than in RF-negative patients, OR 1.17 (95% CI 1.09-1.26), p=0.013. Time to surgery was similar according to RF status for the wrist and ankle, but RF-positive cases had a lower hazard of surgery at the elbow (HR 0.37, 0.15-0.90), hip (HR 0.69, 0.48-0.99) and, after 10 years, at the knee (HR 0.41, 0.25-0.68). Adjustment of the models for Lawrence-assessed osteoarthritis of hand and feet radiographs did not influence these results. Figure 1. Odds of progression to any loss of ROM (from no loss of ROM) per year in the overall population and stratified by RF status. Conclusion: Large joints become progressively involved in RA, most frequently affecting the wrist followed by the ankle, which is overlooked in some composite disease activity indices. We confirm a higher burden of erosions and damage at the wrists in RF-positive patients, but have not found RF-negative patients to have a better prognosis over time with respect to involvement of other large joints. In contrast, RF-negative patients had more joint surgery at the elbow, hip, and knee after 10 years. There is no justification for adopting a less aggressive treatment strategy for RF-negative RA. High vigilance and treat-to-target approaches should be followed irrespective of RF status. Disclosure of Interests: None declared
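
As a rough sketch of the GEE approach described in the Methods (with invented column names and simulated data, not the ERAS dataset), the yearly odds of any loss of ROM could be modelled with a binomial family and an exchangeable working correlation within patients:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)

# Simulated long-format data: one row per patient-year with a binary indicator
# of any loss of range of movement (ROM) at a given joint region
records = []
for pid in range(200):
    rf_pos = int(rng.random() < 0.74)
    for year in range(3, 15):
        logit = -2.5 + 0.10 * (year - 3) + 0.05 * rf_pos   # odds rising ~10% per year
        p = 1.0 / (1.0 + np.exp(-logit))
        records.append({"patient": pid, "year": year, "rf_pos": rf_pos,
                        "loss_rom": int(rng.random() < p)})
df = pd.DataFrame(records)

# GEE with a binomial family and exchangeable working correlation within patients
model = smf.gee("loss_rom ~ year + rf_pos", groups="patient", data=df,
                family=sm.families.Binomial(),
                cov_struct=sm.cov_struct.Exchangeable())
result = model.fit()
print(np.exp(result.params["year"]))   # estimated odds ratio per additional year
```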

