high event
Recently Published Documents

TOTAL DOCUMENTS: 66 (FIVE YEARS: 22)
H-INDEX: 9 (FIVE YEARS: 3)

2021 ◽  
Vol 23 (Supplement_G) ◽  
Author(s):  
Denis Leonardi ◽  
Valentina Siviero ◽  
Martina Setti ◽  
Caterina Maffeis ◽  
Diego Fanti ◽  
...  

Abstract Aims Tricuspid Regurgitation (TR) is quite frequent in the community and often overlooked in routine clinical practice. This study aims to assess the rate of TR diagnosis and its impact on survival in a geographically defined population of an Italian referral centre, considering five different clinical contexts. Methods The study included consecutive outpatients with comprehensive echocardiography and complete clinical evaluation over 7 years of practice. Outpatients with TR greater than moderate were included, and five different clinical contexts were evaluated: patients with concomitant significant mitral regurgitation (MR-TR), heart failure (HF-TR), previous open-heart surgery (postop-TR), pulmonary hypertension (PHTN-TR) and isolated TR (isolated-TR). Results Among all consecutive echocardiograms performed in routine practice (N=6797) in a geographically defined community, moderate or severe TR was found in 4.8% (N = 327; mean age 76±10, 56% female). Median follow-up was 6.1 [2.2–8.9] years. TR severity was an independent determinant of survival: the risk ratio for mortality of severe vs. moderate TR was 1.72 [95% CI 1.06–2.77; P = 0.03] in univariate analysis and 1.76 [95% CI 1.02–3.01; P = 0.04] after adjustment for age, sex, MR, PHTN and EF. Only 2.8% of patients underwent tricuspid valve surgery during follow-up. Outpatients with MR-TR or HF-TR had the worst prognosis (Figure). Compared to isolated-TR, the mortality risk was 2.67 [95% CI 1.05–6.78; P = 0.04] for HF-TR and 2.04 [95% CI 1.00–4.14; P = 0.05] for MR-TR. Risk ratios for mortality vs. postop-TR were 3.66 [95% CI 1.19–11.26; P = 0.02] for HF-TR and 2.79 [95% CI 1.08–7.21; P = 0.03] for MR-TR. There was no interaction between the clinical context and the survival impact of TR (P=0.09). Conclusions Significant TR is frequent in our community, at rates comparable to those reported in key epidemiological studies. TR severity independently impacts survival in all clinical settings and is associated with a high absolute event rate when concomitant MR or HF is present. These results underscore the importance of early diagnosis and accurate echocardiographic grading, and renew interest in new, safe and less invasive percutaneous interventions to improve patients' survival.
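
The adjusted risk ratios quoted above come from a multivariable survival model. As a hedged sketch only, the snippet below shows how such an adjusted hazard ratio (severe vs. moderate TR, adjusted for age, sex, MR, PHTN and EF) could be estimated; the file name and column names are assumptions for illustration, not the authors' dataset or code.

```python
# Hypothetical sketch of an adjusted Cox survival model; not the authors' analysis.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("tr_outpatients.csv")
# assumed columns: followup_years, died (0/1), severe_tr (0/1), age, male, mr, phtn, ef

cph = CoxPHFitter()
cph.fit(
    df[["followup_years", "died", "severe_tr", "age", "male", "mr", "phtn", "ef"]],
    duration_col="followup_years",
    event_col="died",
)
cph.print_summary()  # exp(coef) for severe_tr approximates the adjusted mortality risk ratio
```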


2021 ◽  
Vol 35 (6) ◽  
pp. 1091-1103
Author(s):  
Jinhuan Zhu ◽  
Libo Zhou ◽  
Han Zou ◽  
Peng Li ◽  
Fei Li ◽  
...  

BMC Medicine ◽  
2021 ◽  
Vol 19 (1) ◽  
Author(s):  
Michael K. Sullivan ◽  
Bhautesh Dinesh Jani ◽  
Alex McConnachie ◽  
Peter Hanlon ◽  
Philip McLoone ◽  
...  

Abstract Background Chronic kidney disease (CKD) typically co-exists with multimorbidity (presence of 2 or more long-term conditions: LTCs). The associations between CKD, multimorbidity and hospitalisation rates are not known. The aim of this study was to examine hospitalisation rates in people with multimorbidity with and without CKD. Amongst people with CKD, the aim was to identify risk factors for hospitalisation. Methods Two cohorts were studied in parallel: UK Biobank (a prospective research study: 2006-2020) and Secure Anonymised Information Linkage Databank (SAIL: a routine care database, Wales, UK: 2011-2018). Adults were included if their kidney function was measured at baseline. Nine categories of participants were used: zero LTCs; one, two, three and four or more LTCs excluding CKD; and one, two, three and four or more LTCs including CKD. Emergency hospitalisation events were obtained from linked hospital records. Results Amongst 469,339 UK Biobank participants, those without CKD had a median of 1 LTC and those with CKD had a median of 3 LTCs. Amongst 1,620,490 SAIL participants, those without CKD had a median of 1 LTC and those with CKD had a median of 5 LTCs. Compared to those with zero LTCs, participants with four or more LTCs (excluding CKD) had high event rates (rate ratios UK Biobank 4.95 (95% confidence interval 4.82–5.08)/SAIL 3.77 (3.71–3.82)) with higher rates if CKD was one of the LTCs (rate ratios UK Biobank 7.83 (7.42–8.25)/SAIL 9.92 (9.75–10.09)). Amongst people with CKD, risk factors for hospitalisation were advanced CKD, age over 60, multiple cardiometabolic LTCs, combined physical and mental LTCs and complex patterns of multimorbidity (LTCs in three or more body systems). Conclusions People with multimorbidity have high rates of hospitalisation. Importantly, the rates are two to three times higher when CKD is one of the multimorbid conditions. Further research is needed into the mechanism underpinning this to inform strategies to prevent hospitalisation in this very high-risk group.
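
The rate ratios reported above compare emergency hospitalisation rates across LTC categories relative to participants with zero LTCs. A minimal sketch of that kind of comparison, assuming a Poisson model with a log person-years offset, is shown below; the column names, file name and reference category are assumptions, not the study's actual code.

```python
# Illustrative Poisson rate-ratio sketch; not the UK Biobank/SAIL analysis itself.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("cohort_ltc_counts.csv")
# assumed columns: admissions (count), person_years, ltc_group
# ltc_group levels: "0", "1", "2", "3", "4+", "1+CKD", "2+CKD", "3+CKD", "4+CKD"

model = smf.glm(
    "admissions ~ C(ltc_group, Treatment(reference='0'))",
    data=df,
    family=sm.families.Poisson(),
    offset=np.log(df["person_years"]),
).fit()
print(np.exp(model.params))      # rate ratios relative to zero LTCs
print(np.exp(model.conf_int()))  # 95% confidence intervals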


PLoS ONE ◽  
2021 ◽  
Vol 16 (10) ◽  
pp. e0258276
Author(s):  
Steven R. Steinhubl ◽  
Jill Waalen ◽  
Anirudh Sanyal ◽  
Alison M. Edwards ◽  
Lauren M. Ariniello ◽  
...  

Background Atrial fibrillation (AF) is common, often without symptoms, and is an independent risk factor for mortality, stroke and heart failure. It is unknown if screening asymptomatic individuals for AF can improve clinical outcomes. Methods mSToPS was a pragmatic, direct-to-participant trial that randomized individuals from a single US-wide health plan to either immediate or delayed screening using a continuous-recording ECG patch worn for two weeks on each of two occasions, ~3 months apart, to potentially detect undiagnosed AF. The 3-year outcomes component of the trial was designed to compare clinical outcomes in the combined cohort of 1718 individuals who underwent monitoring and 3371 matched observational controls. The prespecified primary outcome was the time to first event of the combined endpoint of death, stroke, systemic embolism, or myocardial infarction among individuals with a new AF diagnosis, which was hypothesized to be the same in the two cohorts; this was not realized. Results Over the 3 years following the initiation of screening (mean follow-up 29 months), AF was newly diagnosed in 11.4% (n = 196) of screened participants versus 7.7% (n = 261) of observational controls (p<0.01). Among the screened cohort with incident AF, one-third were diagnosed through screening. For all individuals whose AF was first diagnosed clinically, a clinical event was common in the 4 weeks surrounding that diagnosis: 6.6% experienced a stroke, 10.2% were newly diagnosed with heart failure, 9.2% had a myocardial infarction, and 1.5% had systemic emboli. Cumulatively, 42.9% were hospitalized. For those diagnosed via screening, none experienced a stroke, myocardial infarction or systemic emboli in the period surrounding their AF diagnosis, and only 1 person (2.3%) had a new diagnosis of heart failure. The incidence rate of the prespecified combined primary endpoint was 3.6 per 100 person-years among the actively monitored cohort and 4.5 per 100 person-years in the observational controls. Conclusions At 3 years, screening for AF was associated with a lower rate of clinical events and improved outcomes relative to a matched cohort, although the influence of earlier diagnosis of AF via screening on this finding is unclear. These observational data, including the high event rate surrounding a new clinical diagnosis of AF, support the need for randomized trials to determine whether screening for AF will yield meaningful protection from strokes and other clinical events. Trial registration The mHealth Screening To Prevent Strokes (mSToPS) Trial is registered on ClinicalTrials.gov with the identifier NCT02506244.
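
The primary-endpoint comparison above is expressed as events per 100 person-years. A minimal sketch of that person-time calculation follows; the counts passed in are placeholders, not the trial's data.

```python
# Toy person-time incidence calculation; numbers below are hypothetical placeholders.
def incidence_per_100py(events: int, person_years: float) -> float:
    """Crude incidence rate expressed per 100 person-years of follow-up."""
    return 100.0 * events / person_years

# hypothetical usage: compare a monitored cohort with observational controls
monitored = incidence_per_100py(events=150, person_years=4200.0)
controls = incidence_per_100py(events=380, person_years=8400.0)
print(monitored, controls, monitored / controls)  # rates and their ratio
```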


Author(s):  
Kim Stanford ◽  
Frances Tran ◽  
Peipei Zhang ◽  
Xianqin Yang

Despite the importance of biofilm formation in contamination of meat by pathogenic Escherichia coli at slaughter plants, the drivers of biofilm formation remain unclear. To identify selection pressures for biofilm, we evaluated 745 isolates from cattle and 700 generic E. coli from two beef slaughter plants for motility, expression of curli and cellulose, and biofilm-forming potential. Cattle isolates were also screened for serogroup, stx1, stx2, eae and rpoS. Generic E. coli were compared by source (hide of carcass, hide-off carcass, processing equipment) before and after implementation of antimicrobial hurdles. The proportion of E. coli capable of forming biofilms was lowest (7.1%; P < 0.05) for cattle isolates and highest (87.3%; P < 0.05) for isolates from equipment. Only one enterohemorrhagic E. coli (EHEC) was an extremely strong biofilm former, in contrast to 73.4% of E. coli from equipment. Isolates from equipment after sanitation had a greater biofilm-forming capacity (P < 0.001) than those before sanitation. Most cattle isolates were motile and expressed curli, although these traits, along with expression of cellulose and detection of rpoS, were not necessary for biofilm formation. In contrast, isolates capable of forming biofilms on equipment were almost exclusively motile and able to express curli. Results of the present study indicate that cattle would rarely carry EHEC capable of making strong biofilms to slaughter plants. However, if biofilm-forming EHEC contaminated equipment, current sanitation procedures may not eliminate the most robust biofilm-forming strains. Accordingly, new and effective anti-biofilm hurdles are required for meat-processing equipment to reduce future instances of food-borne disease. Importance As the majority of enterohemorrhagic E. coli (EHEC) are not capable of forming biofilms, the sources of the biofilm-forming EHEC isolated from ‘high-event periods’ in beef slaughter plants were undetermined. This study demonstrated that sanitation procedures used on beef-processing equipment may inadvertently lead to survival of robust biofilm-forming strains of E. coli. Cattle only rarely carry EHEC capable of forming strong biofilms (1/745 isolates evaluated), but isolates with greater biofilm-forming capacity were more likely (P < 0.001) to survive equipment sanitation. In contrast, chilling carcasses for 3 days at 0°C reduced (P < 0.05) the proportion of biofilm-forming E. coli. Consequently, an additional anti-biofilm hurdle for meat-processing equipment, perhaps involving cold exposure, is necessary to further reduce the risk of food-borne disease.
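
The before/after-sanitation comparison above is a comparison of proportions of biofilm-forming isolates. As a hedged sketch only, one common way to test such a 2x2 comparison is a chi-square test of independence; the counts below are placeholders and the paper's actual statistical method may differ.

```python
# Illustrative 2x2 proportion comparison; placeholder counts, not the study's data.
from scipy.stats import chi2_contingency

# rows: before sanitation, after sanitation; columns: biofilm former, non-former
table = [[120, 230],
         [180, 170]]
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p_value:.4g}")
```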


Blood ◽  
2021 ◽  
Author(s):  
Johann K. Hitzler ◽  
Todd Alonzo ◽  
Robert B Gerbing ◽  
Amy Beckman ◽  
Betsy Hirsch ◽  
...  

Myeloid leukemia in children with Down syndrome (ML-DS) is associated with young age and somatic GATA1 mutations. Due to high event-free survival (EFS) and hypersensitivity of the leukemic blasts to chemotherapy, the prior Children's Oncology Group ML-DS protocol (AAML0431) reduced overall treatment intensity but, lacking risk stratification, retained the high-dose cytarabine course (HD-AraC), which was highly associated with infectious morbidity. Despite the high EFS of ML-DS, survival for those who relapse is rare. AAML1531 introduced therapeutic risk stratification based on the previously identified prognostic factor, measurable residual disease (MRD) at the end of the first induction course. Standard risk (SR) patients were identified by negative MRD using flow cytometry (<0.05%) and did not receive the historically administered HD-AraC course. Interim analysis of 114 SR patients revealed a 2-year EFS of 85.6% (95% confidence interval (CI), 75.7-95.5%), which was significantly lower than for MRD-negative patients treated with HD-AraC on AAML0431 (p=0.0002). Overall survival at 2 years was 91.0% (95% CI 83.8%-95.0%). Twelve SR patients relapsed, mostly within one year of study entry, and had a 1-year OS of 16.7% (95% CI 2.7%-41.3%). Complex karyotypes were more frequent in SR patients who relapsed compared to those who did not (36% vs. 9%; p=0.0248). MRD by error-corrected sequencing of GATA1 mutations was piloted in 18 SR patients and was detectable in 60% of those who relapsed vs. 23% of those who did not (p=0.2682). Patients with SR ML-DS had worse outcomes without HD-AraC after risk classification based on flow cytometric MRD. ClinicalTrials.gov NCT02521493
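
The 2-year EFS figure quoted above is a time-to-event estimate. Below is a minimal Kaplan-Meier sketch of how such an estimate could be read off at 2 years, under assumed variable and file names; it is an illustration, not the trial's statistical analysis.

```python
# Hypothetical Kaplan-Meier sketch for a 2-year event-free survival estimate.
import pandas as pd
from lifelines import KaplanMeierFitter

sr = pd.read_csv("sr_patients.csv")  # assumed columns: efs_years, event (0/1)

kmf = KaplanMeierFitter()
kmf.fit(durations=sr["efs_years"], event_observed=sr["event"])
print(kmf.survival_function_at_times(2.0))  # estimated EFS at 2 years
print(kmf.confidence_interval_)             # pointwise 95% confidence band
```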


PLoS ONE ◽  
2021 ◽  
Vol 16 (7) ◽  
pp. e0253130
Author(s):  
Nina Heins ◽  
Jennifer Pomp ◽  
Daniel S. Kluger ◽  
Stefan Vinbrüx ◽  
Ima Trempler ◽  
...  

Auditory and visual percepts are integrated even when they are not perfectly temporally aligned with each other, especially when the visual signal precedes the auditory signal. This window of temporal integration for asynchronous audiovisual stimuli is relatively well examined in the case of speech, while other natural action-induced sounds have been widely neglected. Here, we studied the detection of audiovisual asynchrony in three different whole-body actions with natural action-induced sounds: hurdling, tap dancing and drumming. In Study 1, we examined whether audiovisual asynchrony detection, assessed by a simultaneity judgment task, differs as a function of sound production intentionality. Based on previous findings, we expected that auditory and visual signals should be integrated over a wider temporal window for actions creating sounds intentionally (tap dancing), compared to actions creating sounds incidentally (hurdling). While percentages of perceived synchrony differed in the expected way, we identified two further factors, high event density and low rhythmicity, that also induced higher synchrony ratings. Therefore, we systematically varied event density and rhythmicity in Study 2, this time using drumming stimuli to exert full control over these variables, with the same simultaneity judgment task. Results suggest that high event density leads to a bias to integrate rather than segregate auditory and visual signals, even at relatively large asynchronies. Rhythmicity had a similar, albeit weaker, effect when event density was low. Our findings demonstrate that shorter asynchronies and visual-first asynchronies lead to higher synchrony ratings of whole-body action, pointing to clear parallels with audiovisual integration in speech perception. Overconfidence in the naturally expected synchrony of sound and sight was stronger for intentional (vs. incidental) sound production and for movements with high (vs. low) rhythmicity, presumably because both encourage predictive processes. In contrast, high event density appears to increase synchrony judgments simply because it makes the detection of audiovisual asynchrony more difficult. More studies using real-life audiovisual stimuli with varying event densities and rhythmicities are needed to fully uncover the general mechanisms of audiovisual integration.
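
The simultaneity judgment analysis above reduces to the percentage of "synchronous" responses at each audiovisual asynchrony (SOA), per condition. The short sketch below illustrates that summary step only; the trial table and its column names are assumptions, not the study's materials.

```python
# Toy summary of a simultaneity judgment task: percent perceived synchrony per SOA.
import pandas as pd

trials = pd.read_csv("sj_trials.csv")
# assumed columns: participant, soa_ms (negative = visual first), action,
#                  event_density, rhythmicity, judged_synchronous (0/1)

percent_sync = (
    trials.groupby(["action", "event_density", "rhythmicity", "soa_ms"])["judged_synchronous"]
    .mean()
    .mul(100)
    .rename("percent_perceived_synchrony")
    .reset_index()
)
print(percent_sync.head())
```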


Atmosphere ◽  
2021 ◽  
Vol 12 (5) ◽  
pp. 567
Author(s):  
Zuohao Cao ◽  
Huaqing Cai ◽  
Guang J. Zhang

Even with ever-increasing societal interest in tornado activity, which engenders catastrophic loss of life and property damage, the long-term change in the geographic location and environment of U.S. tornado activity centers over the last six decades (1954–2018), and its relationship with climate warming, remains unknown or not robustly established scientifically. Utilizing discriminant analysis, we show a statistically significant geographic shift of the U.S. tornado activity center (i.e., Tornado Alley) under warming conditions, and we identify five major areas of tornado activity in the new Tornado Alley that were not identified previously. By contrasting warm versus cold years, we demonstrate that the shift of relative warm centers is coupled with the shifts in low pressure and tornado activity centers. The warm and moist air carried by low-level flow from the Gulf of Mexico, combined with upward motion, acts to fuel convection over the tornado activity centers. Employing composite analyses using high-resolution reanalysis data, we further demonstrate that high tornado activities in the U.S. are associated with stronger cyclonic circulation and baroclinicity than low tornado activities, and that high tornado activities are coupled with stronger low-level wind shear, stronger upward motion, and higher convective available potential energy (CAPE) than low tornado activities. The composite differences between high-event and low-event years of tornado activity are identified for the first time in terms of wind shear, upward motion, CAPE, cyclonic circulation and baroclinicity, although some of these environmental variables favorable for tornado development have been discussed in previous studies.
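
The abstract names discriminant analysis as the tool for separating high-event from low-event tornado years based on environmental variables. As a hedged sketch under assumed feature names and an assumed input file (not the authors' reanalysis pipeline), a linear discriminant classification of year labels could look like this:

```python
# Illustrative linear discriminant analysis of high- vs. low-event tornado years.
import pandas as pd
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

years = pd.read_csv("tornado_years.csv")
# assumed columns: cape, shear_0_6km, omega_500hpa, label ("high" or "low")

X = years[["cape", "shear_0_6km", "omega_500hpa"]]
y = years["label"]

lda = LinearDiscriminantAnalysis()
lda.fit(X, y)
print(lda.coef_)        # weight of each environmental variable in the discriminant
print(lda.score(X, y))  # in-sample separation of high vs. low years
```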


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Dawn M. Nekorchuk ◽  
Teklehaimanot Gebrehiwot ◽  
Mastewal Lake ◽  
Worku Awoke ◽  
Abere Mihretie ◽  
...  

Abstract Background Despite remarkable progress in the reduction of malaria incidence, this disease remains a public health threat to a significant portion of the world’s population. Surveillance, combined with early detection algorithms, can be an effective intervention strategy to inform timely public health responses to potential outbreaks. Our main objective was to compare the potential for detecting malaria outbreaks by selected event detection methods. Methods We used historical surveillance data with weekly counts of confirmed Plasmodium falciparum (including mixed) cases from the Amhara region of Ethiopia, where there was a resurgence of malaria in 2019 following several years of declining cases. We evaluated three methods for early detection of the 2019 malaria events: 1) the Centers for Disease Control and Prevention (CDC) Early Aberration Reporting System (EARS), 2) methods based on weekly statistical thresholds, including the WHO and Cullen methods, and 3) the Farrington methods. Results All of the methods evaluated performed better than a naïve random alarm generator. We also found distinct trade-offs between the percent of events detected and the percent of true positive alarms. CDC EARS and weekly statistical threshold methods had high event sensitivities (80–100% CDC; 57–100% weekly statistical) and low to moderate alarm specificities (25–40% CDC; 16–61% weekly statistical). Farrington variants had a wide range of scores (20–100% sensitivities; 16–100% specificities) and could achieve various balances between sensitivity and specificity. Conclusions Of the methods tested, we found that the Farrington improved method was most effective at maximizing both the percent of events detected and true positive alarms for our dataset (> 70% sensitivity and > 70% specificity). This method uses statistical models to establish thresholds while controlling for seasonality and multi-year trends, and we suggest that it and other model-based approaches should be considered more broadly for malaria early detection.
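
To make the threshold-based detection idea concrete, here is a hedged sketch of an EARS-style weekly detector (roughly in the spirit of the C2 variant): an alarm fires when the current week's count exceeds the mean of a recent baseline window, separated by a short guard band, by more than a fixed number of standard deviations. The window sizes and threshold are illustrative defaults, not necessarily the configuration used in the study.

```python
# Illustrative EARS-style (C2-like) weekly outbreak detector; parameters are assumptions.
import numpy as np

def ears_style_alarms(weekly_counts, baseline=7, guard=2, z=3.0):
    """Return a boolean alarm flag per week based on a baseline mean + z*SD threshold."""
    counts = np.asarray(weekly_counts, dtype=float)
    alarms = np.zeros(len(counts), dtype=bool)
    for t in range(baseline + guard, len(counts)):
        window = counts[t - guard - baseline : t - guard]  # baseline weeks, skipping the guard band
        mu, sd = window.mean(), window.std(ddof=1)
        alarms[t] = counts[t] > mu + z * max(sd, 1e-9)
    return alarms

# hypothetical usage on a short series of weekly case counts
print(ears_style_alarms([12, 10, 11, 9, 13, 12, 10, 11, 12, 30, 45]))
```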


2021 ◽  
Vol 78 ◽  
pp. 1-18
Author(s):  
J. S. Silva ◽  
E. Lenza ◽  
A. L. C. Moreira ◽  
C. E. B. Proença

The Phenological Predictability Index (PPI) is an algorithm incorporated into Brahms, one of the most widely used herbarium database management systems. PPI uses herbarium specimen data to calculate the probability of the occurrence of various phenological events in the field. Our hypothesis was that use of PPI to quantify the likelihood that a given species will be found in flower bud, flower or fruit in a particular area in a specific period makes field expeditions more successful in terms of finding fertile plants. PPI was applied to herbarium data for various angiosperm species locally abundant in Central Brazil to determine the month in which they were most likely to be found, in each of five areas of the Distrito Federal, with flower buds, flowers or fruits (i.e. the ‘maximum probability month’ for each of these phenophases). Plants of the selected species growing along randomised transects were tagged and their phenology was monitored over 12 months (method 1), and two one-day field excursions to each area were undertaken, by botanists with no prior knowledge of whether the species had previously been recorded at these sites, to record their phenological state (method 2). The results showed that field excursions in the PPI-determined maximum probability month for flower buds, flowers or fruits would be expected to result in a > 90% likelihood of finding individual plants of a given species in each of these phenophases. PPI may fail to predict phenophase for species with supra-annual reproductive events or with high event contingency. For bimodal species, the PPI-determined maximum probability month is that in which a specific phenophase is likely to be most intense. In planning an all-purpose collecting trip to an area with seasonal plant fertility, PPI scores are useful when selecting the best month for travel.
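
To illustrate the principle only, the sketch below estimates, per calendar month, the share of herbarium specimens of a species recorded in a given phenophase and returns the peak ("maximum probability") month. This is a toy reconstruction under assumed column names, not the PPI formula implemented in Brahms.

```python
# Toy illustration of picking a "maximum probability month" from herbarium records.
import pandas as pd

def maximum_probability_month(specimens: pd.DataFrame, phenophase: str) -> int:
    """specimens: assumed columns 'month' (1-12) and boolean phenophase flags
    such as 'flower_bud', 'flower', 'fruit' recorded for each sheet."""
    monthly_share = specimens.groupby("month")[phenophase].mean()
    return int(monthly_share.idxmax())

# hypothetical usage: best month to look for flowers of a species in an area
# best = maximum_probability_month(specimens_for_species_and_area, "flower")
```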

