Electroencephalographic Evidence for Individual Neural Inertia in Mice That Decreases With Time

2022 ◽  
Vol 15 ◽  
Author(s):  
Andrzej Z. Wasilczuk ◽  
Qing Cheng Meng ◽  
Andrew R. McKinstry-Wu

Previous studies have demonstrated that the brain has an intrinsic resistance to changes in arousal state. This resistance is most easily measured at the population level in the setting of general anesthesia and has been termed neural inertia. To date, no study has attempted to determine neural inertia in individuals. We hypothesize that individuals with markedly increased or decreased neural inertia might be at increased risk for complications related to state transitions, ranging from awareness under anesthesia to delayed emergence or confusion/impairment after emergence. Hence, an improved theoretical and practical understanding of neural inertia may help identify individuals at increased risk for these complications. This study was designed to explicitly measure neural inertia in individuals and to empirically test the stochastic model of neural inertia using spectral analysis of the murine EEG. EEG was measured after induction of and emergence from isoflurane administered near the EC50 dose for loss of righting in genetically inbred mice, on a timescale that minimizes pharmacokinetic confounds. Neural inertia was assessed by employing classifiers, constructed using linear discriminant or supervised machine learning methods, to determine whether features of EEG spectra reliably demonstrate path dependence at steady-state anesthesia. We report the existence of neural inertia at the individual level, as well as the population level, and show that neural inertia decreases over time, providing direct empirical evidence supporting the predictions of the stochastic model of neural inertia.
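The classifier-based test of path dependence can be illustrated with a minimal Fisher linear discriminant on synthetic "spectral" features. All data, dimensions, and effect sizes below are invented for illustration; this is not the authors' data or analysis code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic spectral-feature vectors for epochs recorded at the same
# steady-state dose but reached via different paths (induction vs. emergence).
# A mean offset between the classes stands in for path-dependent EEG features.
n, d = 200, 5
induction = rng.normal(0.0, 1.0, (n, d)) + np.array([0.8, 0.4, 0.0, -0.3, -0.6])
emergence = rng.normal(0.0, 1.0, (n, d))
X = np.vstack([induction, emergence])
y = np.array([0] * n + [1] * n)

# Fisher's linear discriminant: w = Sw^-1 (mu0 - mu1), threshold at the midpoint
mu0, mu1 = induction.mean(axis=0), emergence.mean(axis=0)
Sw = np.cov(induction, rowvar=False) + np.cov(emergence, rowvar=False)
w = np.linalg.solve(Sw, mu0 - mu1)
threshold = w @ (mu0 + mu1) / 2
pred = (X @ w < threshold).astype(int)  # projection below midpoint -> class 1
accuracy = (pred == y).mean()
```

Accuracy reliably above chance indicates that spectra at the same dose differ by the path taken to reach it, which is the signature of neural inertia; in practice one would of course evaluate on held-out epochs.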

2019 ◽  
Vol 117 (3) ◽  
pp. 1621-1627 ◽  
Author(s):  
Aaron C. Miller ◽  
Alejandro P. Comellas ◽  
Douglas B. Hornick ◽  
David A. Stoltz ◽  
Joseph E. Cavanaugh ◽  
...  

Autosomal recessive diseases, such as cystic fibrosis (CF), require inheritance of 2 mutated genes. However, some studies indicate that CF carriers are at increased risk for some conditions associated with CF. These investigations focused on single conditions and included small numbers of subjects. Our goal was to determine whether CF carriers are at increased risk for a range of CF-related conditions. Using the Truven Health MarketScan Commercial Claims database (2001–2017), we performed a population-based retrospective matched-cohort study. We identified 19,802 CF carriers and matched each carrier with 5 controls. The prevalence of 59 CF-related diagnostic conditions was evaluated in each cohort. Odds ratios for each condition were computed for CF carriers relative to controls. All 59 CF-related conditions were more prevalent among carriers compared with controls, with significantly increased risk (P < 0.05) for 57 conditions. Risk was increased for some conditions previously linked to CF carriers (e.g., pancreatitis, male infertility, bronchiectasis), as well as some conditions not previously reported (e.g., diabetes, constipation, cholelithiasis, short stature, failure to thrive). We compared our results with 23,557 subjects with CF, who were also matched with controls; as the relative odds of a given condition increased among subjects with CF, so did the corresponding relative odds for carriers (P < 0.001). Although individual-level risk remained low for most conditions, because there are more than 10 million carriers in the US, population-level morbidity attributable to the CF carrier state is likely substantial. Genetic testing may inform prevention, diagnosis, and treatment for a broad range of CF carrier-related conditions.


2020 ◽  
Vol 72 (1) ◽  
Author(s):  
Lennart Dimberg ◽  
Bo Eriksson ◽  
Per Enqvist

Abstract Background In 1993, 1000 randomly selected employed Swedish men aged 45–50 years were invited to a nurse-led health examination with a survey on lifestyle, fasting lab tests, and a 12-lead ECG. A repeat examination was offered in 1998. The ECGs were classified according to the Minnesota Code. Upon ethical approval, endpoints in terms of MI and death over 25 years were collected from Swedish national registers, with the purpose of analyzing the independent association of ECG abnormalities as risk factors for myocardial infarction and death. Results Seventy-nine of 977 participants had at least one ECG abnormality in 1993 or 1998. One hundred participants had a first MI over the 25 years. The odds ratio for having an MI in the group with one or more ECG abnormalities compared with the group with two normal ECGs was estimated at 3.16 (95% CI 1.74–5.73; p = 0.0001). One hundred fifty-seven participants had died before 2019. For death, in contrast, no statistically significant difference was shown (OR 1.52, 95% CI 0.83–2.76). Conclusions Our study suggests that the presence of ST- and R-wave changes is associated with an independent 3–4-fold increased risk of MI over 25 years of follow-up, but not of death. A 12-lead resting ECG should be included in any MI risk calculation at the individual level.


2021 ◽  
Vol 34 (3) ◽  
pp. 234-241
Author(s):  
Norrina B Allen ◽  
Sadiya S Khan

Abstract High blood pressure (BP) is a strong modifiable risk factor for cardiovascular disease (CVD). Longitudinal BP patterns themselves may reflect the burden of risk and vascular damage due to prolonged cumulative exposure to high BP levels. Current studies have begun to characterize BP patterns as a trajectory over an individual's lifetime. These BP trajectories take into account the absolute BP levels as well as the slope of BP changes throughout the lifetime, thus incorporating longitudinal BP patterns into a single metric. Methodologic issues that need to be considered when examining BP trajectories include individual-level vs. population-level group-based modeling, use of distinct but complementary BP metrics (systolic, diastolic, mean arterial, mid, and pulse pressure), and potential for measurement errors related to varied settings, devices, and number of readings utilized. There appear to be very specific developmental periods during which divergent BP trajectories may emerge, specifically adolescence, the pregnancy period, and older adulthood. Lifetime BP trajectories are impacted by both individual-level and community-level factors and have been associated with incident hypertension, multimorbidity (CVD, renal disease, cognitive impairment), and overall life expectancy. Key unanswered questions remain around the additive predictive value of BP trajectories, intergenerational contributions to BP patterns (in utero BP exposure), and potential genetic drivers of BP patterns. The next phase in understanding BP trajectories needs to focus on how best to incorporate this knowledge into clinical care to reduce the burden of hypertension-related outcomes and improve health equity.


2021 ◽  
Vol 13 (1) ◽  
pp. 368
Author(s):  
Dillon T. Fitch ◽  
Hossain Mohiuddin ◽  
Susan L. Handy

One way cities are looking to promote bicycling is by providing publicly or privately operated bike-share services, which enable individuals to rent bicycles for one-way trips. Although many studies have examined the use of bike-share services, little is known about how these services influence individual-level travel behavior more generally. In this study, we examine the behavior of users and non-users of a dockless, electric-assisted bike-share service in the Sacramento region of California. This service, operated by Jump until suspended due to the coronavirus pandemic, was one of the largest of its kind in the U.S. and spanned three California cities: Sacramento, West Sacramento, and Davis. We combine data from a repeat cross-sectional before-and-after survey of residents and a longitudinal panel survey of bike-share users with the goal of examining how the service influenced individual-level bicycling and driving. Results from multilevel regression models suggest that the effect of bike-share on average bicycling and driving at the population level is likely small. However, our results indicate that people who have used bike-share are likely to have increased their bicycling because of bike-share.


2021 ◽  
pp. 1-9
Author(s):  
R. Cunningham ◽  
A. Milner ◽  
S. Gibb ◽  
V. Rijnberg ◽  
G. Disney ◽  
...  

Abstract Background Unemployment and being not in the labour force (NILF) are risk factors for suicide, but their association with self-harm is unclear, and there is continuing debate about the role of confounding by prior mental health conditions. We examine associations between employment status and self-harm and suicide in a prospective cohort, taking into account prior mental-health-related factors. Methods We used linked data from the New Zealand Integrated Data Infrastructure. The outcomes were hospital presentation for self-harm and death by suicide. The exposure was employment status, defined as employed, unemployed, or NILF, measured at the 2013 Census. Confounders included demographic factors and mental health history (use of antidepressant medication, use of mental health services, and prior self-harm). Logistic regression was used to model effects. Analyses were stratified by gender. Results For males, unemployment was associated with an increased risk of suicide [odds ratio (OR): 1.48, 95% confidence interval (CI): 1.20–1.84] and self-harm (OR: 1.55, 95% CI: 1.45–1.68) after full adjustment for confounders. NILF was associated with an increased risk of self-harm (OR: 1.43, 95% CI: 1.32–1.55), but less of an association was seen with suicide (OR: 1.19, 95% CI: 0.94–1.49). For females, unemployment was associated with an increased risk of suicide (OR: 1.30, 95% CI: 0.93–1.80) and of self-harm (OR: 1.52, 95% CI: 1.43–1.62), and NILF was associated with a similar increase in risk for suicide (OR: 1.31, 95% CI: 0.98–1.75) and self-harm (OR: 1.32, 95% CI: 1.26–1.40). Discussion Exclusion from employment is associated with a considerably heightened risk of suicide and self-harm for both men and women, even among those without prior mental health problems.


Author(s):  
Marie Krousel-Wood ◽  
Leslie S Craig ◽  
Erin Peacock ◽  
Emily Zlotnick ◽  
Samantha O’Connell ◽  
...  

Abstract Interventions targeting traditional barriers to antihypertensive medication adherence (AHMA) have been developed and evaluated, with evidence of modest improvements in adherence. Translation of these interventions into population-level improvements in adherence and clinical outcomes among older adults remains suboptimal. From the Cohort Study of Medication Adherence among Older adults (CoSMO), we evaluated traditional barriers to AHMA among older adults with established hypertension (N=1544; mean age 76.2 years, 59.5% women, 27.9% Black; 24.1% and 38.9% low adherence by proportion of days covered (i.e., PDC<0.80) and the 4-item Krousel-Wood Medication Adherence Scale (i.e., K-Wood-MAS-4≥1), respectively), finding that these barriers explained 6.4% and 14.8% of the variance in pharmacy refill and self-reported adherence, respectively. Persistent low adherence rates, coupled with the low explanatory power of traditional barriers, suggest that other factors warrant attention. Prior research has investigated explicit attitudes toward medications as a driver of adherence; the roles of implicit attitudes and time preferences (e.g., immediate versus delayed gratification) as mechanisms underlying adherence behavior are emerging. Similarly, while associations of individual-level social determinants of health (SDOH) and medication adherence are well reported, there is growing evidence about structural SDOH and specific pathways of effect. Building on published conceptual models and recent evidence, we propose an expanded conceptual framework that incorporates implicit attitudes, time preferences, and structural SDOH as emerging determinants that may explain additional variation in objectively and subjectively measured adherence. This model provides guidance for the design, implementation, and assessment of interventions targeting sustained improvements in medication adherence and clinical outcomes among older women and men with hypertension.


Stroke ◽  
2021 ◽  
Vol 52 (Suppl_1) ◽  
Author(s):  
D. Andrew Wilkinson ◽  
Neil Majmundar ◽  
Joshua Catapano ◽  
Tyler Cole ◽  
Jacob Baranoski ◽  
...  

Background and Purpose: Transradial access (TRA) for neuroendovascular procedures is increasing in prevalence, although numerous procedures are still performed using transfemoral access (TFA). Some cardiology studies have suggested that the safety benefits of TRA at a patient level may be offset at a population level by a paradoxical increase in TFA vascular access site complications (VASCs) associated with radial adoption, the so-called "radial paradox." We studied the effect of TRA adoption on TFA performance and VASC rates in neuroendovascular procedures. Methods: Data were collected for all neuroendovascular procedures performed over a 10-month period by trainees after implementation of a radial-first paradigm at a single center. Results: Over the study period, 1,084 procedures were performed, including 689 (63.6%) via TRA and 395 (36.4%) via TFA. In comparison to TRA, TFA cases were performed in older patients (TFA 63 ±15 vs. TRA 56 ±16 years), were predominantly male (TFA 52.9% vs. TRA 38.6%), used larger sheath sizes (≥7 French, TFA 56.6% vs. TRA 2.3%), were more often emergent (TFA 37.7% vs. TRA 1.1%), and more often involved tPA administration (TFA 15.3% vs. TRA 0%) (p<.001 for all comparisons). Overall, 29 VASCs occurred (2.7%), including 27 minor (TFA 4.6% [18/395] vs. TRA 1.3% [9/689], p=.002) and 2 major (TFA 0.3% [1/395] vs. TRA 0.1% [1/689], p>.99) complications. After multivariate analysis, independent predictors of any VASC included TFA (OR 2.8, 95% CI 1.1–7.4) and use of dual antiplatelets (OR 4.2, 95% CI 1.6–11.1). Conclusions: TFA remains an essential route for neuroendovascular procedures, accounting for 36.4% of cases under a radial-first paradigm. TFA is disproportionately performed in patients undergoing procedures with an increased risk of VASCs, though the minor and major VASC rates are comparable to historical controls. TFA proficiency may still be achieved in radial-first training without an increase in femoral complications.


Circulation ◽  
2012 ◽  
Vol 125 (suppl_10) ◽  
Author(s):  
Bakhtawar K Mahmoodi ◽  
Ron T Gansevoort ◽  
Inger Anne Naess ◽  
Pamela L Lutsey ◽  
Sigrid K Braekkan ◽  
...  

Background: Recent findings suggest that mild chronic kidney disease (CKD) might be associated with increased risk of venous thromboembolism (VTE). However, results were partially inconsistent, which may be due to lack of power. We therefore performed a meta-analysis to investigate the association between mild CKD and VTE incidence. Methods: A literature search was performed to retrieve community-based cohorts with information on the association of estimated glomerular filtration rate (eGFR) and albuminuria with VTE. Five cohorts were identified that were pooled on individual level. To obtain pooled hazard ratios (HRs) for VTE, linear spline models were fitted using Cox regression with shared-frailty. Models were adjusted for age, sex, hypertension, total cholesterol, smoking, diabetes, history of cardiovascular disease and body-mass index. Random-effect meta-analysis was used to obtain adjusted pooled HRs of VTE with CKD versus no CKD. Results: The analysis included 95,154 participants with 1,178 VTE cases and 599,453 person-years of follow-up. Risk of VTE increased continuously with lower eGFR and higher ACR (Figure). Compared with eGFR 100 mL/min/1.73m², pooled adjusted HRs for VTE were 1.3 (1.0–1.7) for eGFR 60, 1.8 (1.3–2.6) for 45 and 1.9 (1.2–2.9) for 30 mL/min/1.73m². Compared with albumin-creatinine ratio (ACR) 5 mg/g, pooled adjusted HRs for VTE were 1.3 (1.04–1.7) for ACR 30, 1.6 (1.1–2.4) for 300 and 1.9 (1.2–3.1) for 1000 mg/g. There was no evidence for interaction between eGFR and ACR (P=0.22). The pooled adjusted HR for CKD (eGFR <60 ml/min/1.73m² or albuminuria ≥30 mg/g) vs. no CKD was 1.5 (95%CI, 1.2–2.1). Results were similar for idiopathic and provoked VTE. Conclusion: Both reduced eGFR and elevated albuminuria are novel independent predictors of VTE in the general population.


2018 ◽  
Vol 148 (12) ◽  
pp. 1946-1953 ◽  
Author(s):  
Magali Rios-Leyvraz ◽  
Pascal Bovet ◽  
René Tabin ◽  
Bernard Genin ◽  
Michel Russo ◽  
...  

ABSTRACT Background The gold standard to assess salt intake is 24-h urine collections. Use of a urine spot sample can be a simpler alternative, especially when the goal is to assess sodium intake at the population level. Several equations to estimate 24-h urinary sodium excretion from urine spot samples have been tested in adults, but not in children. Objective The objective of this study was to assess the ability of several equations and urine spot samples to estimate 24-h urinary sodium excretion in children. Methods A cross-sectional study of children between 6 and 16 y of age was conducted. Each child collected one 24-h urine sample and 3 timed urine spot samples, i.e., evening (last void before going to bed), overnight (first void in the morning), and morning (second void in the morning). Eight equations (i.e., Kawasaki, Tanaka, Remer, Mage, Brown with and without potassium, Toft, and Meng) were used to estimate 24-h urinary sodium excretion. The estimates from the different spot samples and equations were compared with the measured excretion through the use of several statistics. Results Among the 101 children recruited, 86 had a complete 24-h urine collection and were included in the analysis (mean age: 10.5 y). The mean measured 24-h urinary sodium excretion was 2.5 g (range: 0.8–6.4 g). The different spot samples and equations provided highly heterogeneous estimates of the 24-h urinary sodium excretion. The overnight spot samples with the Tanaka and Brown equations provided the most accurate estimates (mean bias: −0.20 to −0.12 g; correlation: 0.48–0.53; precision: 69.7–76.5%; sensitivity: 76.9–81.6%; specificity: 66.7%; and misclassification: 23.0–27.7%). The other equations, irrespective of the timing of the spot, provided less accurate estimates. Conclusions Urine spot samples, with selected equations, might provide accurate estimates of the 24-h sodium excretion in children at a population level. 
At an individual level, they could be used to identify children with high sodium excretion. This study was registered at clinicaltrials.gov as NCT02900261.
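As an illustration of how such estimating equations are applied, below is a sketch of the Tanaka equation in the form commonly cited from Tanaka et al. (2002). The coefficients and unit conversions here are assumptions that should be verified against the original publication before any real use.

```python
def tanaka_24h_sodium(spot_na_meq_l, spot_cr_mg_dl, age_y, weight_kg, height_cm):
    """Estimate 24-h urinary sodium excretion (g/day) from a spot urine sample."""
    # Predicted 24-h urinary creatinine excretion (mg/day)
    pred_cr = -2.04 * age_y + 14.89 * weight_kg + 16.14 * height_cm - 2244.45
    # Spot Na/Cr ratio scaled by predicted 24-h creatinine
    # (spot Cr is multiplied by 10 to convert mg/dL to mg/L)
    x_na = spot_na_meq_l / (spot_cr_mg_dl * 10) * pred_cr
    est_na_meq = 21.98 * x_na ** 0.392      # estimated 24-h sodium, mEq/day
    return est_na_meq * 23 / 1000           # grams of sodium (23 mg per mEq)
```

For a hypothetical 10-year-old (35 kg, 140 cm) with spot sodium 100 mEq/L and spot creatinine 100 mg/dL, this returns roughly 2.4 g/day, within the range of measured excretions reported above.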


2017 ◽  
Author(s):  
Alex Mesoudi

Abstract How do migration and acculturation (i.e., psychological or behavioral change resulting from migration) affect within- and between-group cultural variation? Here I answer this question by drawing analogies between genetic and cultural evolution. Population genetic models show that migration rapidly breaks down between-group genetic structure. In cultural evolution, however, migrants or their descendants can acculturate to local behaviors via social learning processes such as conformity, potentially preventing migration from eliminating between-group cultural variation. An analysis of the empirical literature on migration suggests that acculturation is common, with second and subsequent migrant generations shifting, sometimes substantially, towards the cultural values of the adopted society. Yet there is little understanding of the individual-level dynamics that underlie these population-level shifts. To explore this formally, I present models quantifying the effect of migration and acculturation on between-group cultural variation, for both neutral and costly cooperative traits. In the models, between-group cultural variation, measured using F statistics, is eliminated by migration and maintained by conformist acculturation. The extent of acculturation is determined by the strength of conformist bias and the number of demonstrators from whom individuals learn. Acculturation is countered by assortation, the tendency for individuals to preferentially interact with culturally similar others. Unlike neutral traits, cooperative traits can additionally be maintained by payoff-biased social learning, but only in the presence of strong sanctioning institutions. Overall, the models show that surprisingly little conformist acculturation is required to maintain realistic amounts of between-group cultural diversity.
While these models provide insight into the potential dynamics of acculturation and migration in cultural evolution, they also highlight the need for more empirical research into the individual-level learning biases that underlie migrant acculturation.
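The population-level logic of such models can be sketched with a deliberately simple toy simulation (my own illustrative construction with invented parameters, not Mesoudi's actual model): two groups start fully differentiated on a neutral binary trait, migration mixes them each generation, and conformist learning pulls each group back toward its local majority.

```python
import random

def simulate(generations=100, n=1000, m=0.01, conf=0.5, seed=1):
    """Return final frequencies of trait 1 in each of two groups."""
    random.seed(seed)
    groups = [[0] * n, [1] * n]              # fully differentiated start
    for _ in range(generations):
        k = int(m * n)                        # migration: swap a fraction m
        for i in range(k):
            groups[0][i], groups[1][i] = groups[1][i], groups[0][i]
        for g in groups:
            freq = sum(g) / n                 # local frequency of trait 1
            new = []
            for _ in range(n):
                # each learner samples 3 demonstrators from the local group
                demos = [1 if random.random() < freq else 0 for _ in range(3)]
                majority = 1 if sum(demos) >= 2 else 0
                # with probability `conf`, conform to the local majority;
                # otherwise copy a single random demonstrator (unbiased copying)
                new.append(majority if random.random() < conf else demos[0])
            g[:] = new
    return sum(groups[0]) / n, sum(groups[1]) / n
```

With these illustrative parameters the groups remain sharply differentiated (an F-statistic-like between-group variance near its maximum); raising m or lowering conf past a threshold lets migration homogenize them, mirroring the qualitative result that modest conformity can preserve between-group variation.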

