The relationship between the female athlete triad and injury rates in collegiate female athletes

PeerJ ◽  
2021 ◽  
Vol 9 ◽  
pp. e11092
Author(s):  
Mutsuaki Edama ◽  
Hiromi Inaba ◽  
Fumi Hoshino ◽  
Saya Natsui ◽  
Sae Maruyama ◽  
...  

Background This study aimed to clarify the relationship between the triad risk assessment score and the sports injury rate in 116 female college athletes (average age, 19.8 ± 1.3 years) in seven sports at the national level of competition; 67 were teenagers, and 49 were in their 20s. Methods Those with an absence of menstruation for >3 months or <6 menses in 12 months were classified as amenorrheic athletes. Low energy availability was defined as a body weight <85% of ideal body weight for adolescent athletes and a body mass index ≤17.5 kg/m2 for adult athletes in their 20s. Bone mineral density (BMD) was measured on the heel of the right leg using an ultrasonic bone densitometer. Low BMD was defined as a BMD Z-score <−1.0. The total score for each athlete was calculated. The cumulative risk assessment was defined as follows: low risk (a total score of 0–1), moderate risk (2–5), and high risk (≥6). The injury survey recorded injuries with reference to the injury survey items used by the International Olympic Committee. Results In swimming, significantly more athletes were in the low-risk category than in the moderate- and high-risk categories (p = 0.004). In long-distance athletics, significantly more athletes were in the moderate-risk category than in the low- and high-risk categories (p = 0.004). In the moderate- and high-risk categories, significantly more athletes were in the injury group, whereas significantly more athletes in the low-risk category were in the non-injury group (p = 0.01). Significantly more athletes in the moderate- and high-risk categories had bone stress fractures and bursitis than athletes in the low-risk category (p = 0.023). Discussion These results suggest that athletes with relative energy deficiency may have an increased injury risk.
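The cumulative risk assessment above reduces to a simple threshold rule; a minimal Python sketch using only the cut-offs reported in the abstract (the scoring of the individual triad risk factors that feeds the total is not reproduced here):

```python
def triad_risk_category(total_score: int) -> str:
    """Map a triad cumulative risk assessment total score to a risk category.

    Cut-offs as reported in the abstract: 0-1 low, 2-5 moderate, >=6 high.
    How the total is built from the individual risk factors (amenorrhea,
    low energy availability, low BMD, ...) is not reproduced here.
    """
    if total_score <= 1:
        return "low"
    if total_score <= 5:
        return "moderate"
    return "high"


# Example: an athlete with a total score of 4 falls in the moderate-risk band.
print(triad_risk_category(4))  # moderate
```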

10.2196/16069 ◽  
2020 ◽  
Vol 8 (4) ◽  
pp. e16069
Author(s):  
Kenneth B Chapman ◽  
Martijn M Pas ◽  
Diana Abrar ◽  
Wesley Day ◽  
Kris C Vissers ◽  
...  

Background Several pain management guidelines recommend regular urine drug testing (UDT) in patients who are being treated with chronic opioid analgesic therapy (COAT) to monitor compliance and improve safety. Guidelines also recommend more frequent testing in patients who are at high risk of adverse events related to COAT; however, there is no consensus on how to identify high-risk patients or on the testing frequency that should be used. Using previously described clinical risk factors for UDT results that are inconsistent with the prescribed COAT, we developed a web-based tool to adjust drug testing frequency in patients treated with COAT. Objective The objective of this study was to evaluate a risk stratification tool, the UDT Randomizer, to adjust UDT frequency in patients treated with COAT. Methods Patients were stratified using an algorithm based on readily available clinical risk factors into categories of presumed low, moderate, high, and high+ risk of presenting with UDT results inconsistent with the prescribed COAT. The algorithm was integrated into a website to facilitate adoption across practice sites. To test the performance of this algorithm, we performed a retrospective analysis of patients treated with COAT between June 2016 and June 2017. The primary outcome was compliance with the prescribed COAT as defined by UDT results consistent with the prescribed COAT. Results A total of 979 drug tests (867 UDT, 88.6%; 112 oral fluid tests, 11.4%) were performed in 320 patients. An inconsistent drug test result was registered in 76/979 tests (7.8%). The incidences of inconsistent test results across the risk tool categories were 7/160 (4.4%) in the low risk category, 32/349 (9.2%) in the moderate risk category, 28/338 (8.3%) in the high risk category, and 9/132 (6.8%) in the high+ risk category. Generalized estimating equation analysis demonstrated that the moderate risk (odds ratio (OR) 2.1, 95% CI 0.9-5.0; P=.10), high risk (OR 2.0, 95% CI 0.8-5.0; P=.14), and high+ risk (OR 2.0, 95% CI 0.7-5.6; P=.20) categories were associated with a nonsignificantly increased risk of inconsistency vs the low risk category. Conclusions The developed tool stratified patients during individual visits into risk categories for presenting with drug testing results inconsistent with the prescribed COAT; the higher risk categories showed nonsignificantly higher risk compared to the low risk category. Further development of the tool with additional risk factors in a larger cohort may further clarify and enhance its performance.
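The category-level counts reported above can be used to recompute crude inconsistency rates and unadjusted odds ratios; a short Python sketch (the paper's own estimates come from a generalized estimating equation model, so they differ slightly from these crude figures):

```python
# Crude (unadjusted) inconsistency rates and odds ratios per risk category,
# recomputed from the counts reported in the abstract.
counts = {                     # category: (inconsistent results, total tests)
    "low":      (7, 160),
    "moderate": (32, 349),
    "high":     (28, 338),
    "high+":    (9, 132),
}

ref_inc, ref_total = counts["low"]
ref_odds = ref_inc / (ref_total - ref_inc)      # odds in the reference category

for category, (inc, total) in counts.items():
    rate = inc / total
    odds_ratio = (inc / (total - inc)) / ref_odds
    print(f"{category:9s} rate={rate:.1%}  crude OR vs low={odds_ratio:.1f}")
```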


Author(s):  
Nazia N. Shaik ◽  
Swapna M. Jaswanth ◽  
Shashikala Manjunatha

Background: Diabetes is one of the largest global health emergencies of the 21st century. As per the International Diabetes Federation, some 425 million people worldwide are estimated to have diabetes. The prevalence is higher in urban than in rural areas (10.2% vs 6.9%). As per 2017 data, India had 72.9 million people living with diabetes, of whom 57.9% remained undiagnosed. The objective of the present study was to identify subjects at risk of developing diabetes by using the Indian Diabetes Risk Score (IDRS) in the urban field practice area of Rajarajeswari Medical College and Hospital (RRMCH). Methods: A cross-sectional study was conducted using the standard IDRS questionnaire on 150 individuals aged ≥20 years residing in the urban field practice area of RRMCH. Subjects with scores <30, 30-50, and ≥60 were categorized as being at low, moderate, and high risk of developing type 2 diabetes, respectively. Results: Of the 150 participants, 36 (24%) were in the high-risk category (IDRS ≥60), the majority, 61 (41%), were in the moderate-risk category (IDRS 30-50), and 53 (35%) were found to be at low risk (<30) for diabetes. A statistically significant association was found between IDRS and gender, literacy status, and body mass index (p<0.00001). Conclusions: It is essential to implement the IDRS, a simple tool for identifying subjects at risk of developing diabetes, so that appropriate interventions can be carried out at the earliest to reduce the burden of diabetes.
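A minimal Python sketch of the IDRS categorization using the cut-offs reported above; the component scoring (age, waist circumference, physical activity, family history) that produces the total is not reproduced here:

```python
def idrs_category(score: int) -> str:
    """Classify an Indian Diabetes Risk Score (IDRS) total.

    Cut-offs as used in the abstract: <30 low, 30-50 moderate, >=60 high.
    The component items are scored in steps of 10, so no total falls
    between 50 and 60.
    """
    if score < 30:
        return "low"
    if score <= 50:
        return "moderate"
    return "high"


# Example: a participant with a total score of 60 is in the high-risk band.
print(idrs_category(60))  # high
```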


2018 ◽  
Vol 55 (4) ◽  
pp. 254-260 ◽  
Author(s):  
Francisca Caimari ◽  
Laura Cristina Hernández-Ramírez ◽  
Mary N Dang ◽  
Plamena Gabrovska ◽  
Donato Iacovazzo ◽  
...  

Background Predictive tools to identify patients at risk for gene mutations related to pituitary adenomas are very helpful in clinical practice. We therefore aimed to develop and validate a reliable risk category system for aryl hydrocarbon receptor-interacting protein (AIP) mutations in patients with pituitary adenomas. Methods An international cohort of 2227 subjects was consecutively recruited between 2007 and 2016, including patients with pituitary adenomas (familial and sporadic) and their relatives. All probands (n=1429) were screened for AIP mutations, and those diagnosed with a pituitary adenoma prospectively, as part of their clinical screening (n=24), were excluded from the analysis. Univariate analysis was performed comparing patients with and without AIP mutations. Based on a multivariate logistic regression model, six potential factors were identified for the development of a risk category system, classifying the individual risk into low-risk, moderate-risk and high-risk categories. An internal cross-validation test was used to validate the system. Results 1405 patients had a pituitary tumour, of whom 43% had a positive family history, 55.5% had somatotrophinomas, and 81.5% presented with macroadenoma. Overall, 134 patients had an AIP mutation (9.5%). We identified four independent predictors for the presence of an AIP mutation: age of onset, with an odds ratio (OR) of 14.34 for age 0-18 years, family history (OR 10.85), growth hormone excess (OR 9.74) and large tumour size (OR 4.49). In our cohort, 71% of patients were identified as low risk (<5% risk of AIP mutation), 9.2% as moderate risk and 20% as high risk (≥20% risk). Excellent discrimination (c-statistic=0.87) and internal validation were achieved. Conclusion We propose a user-friendly risk categorisation system that can reliably group patients into high-risk, moderate-risk and low-risk groups for the presence of AIP mutations, thus providing guidance in identifying patients at high risk of carrying an AIP mutation. This risk score is based on a cohort with a high prevalence of AIP mutations and should be applied cautiously in other populations.
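The four predictors and their odds ratios above could, in principle, be combined into a predicted probability and then banded using the <5% and ≥20% cut-offs; a hedged Python sketch, in which the model intercept (BASELINE_LOG_ODDS) is a placeholder assumption because it is not reported in the abstract:

```python
import math

# Odds ratios are taken from the abstract; the intercept below is NOT
# reported there and is purely an illustrative placeholder.
ODDS_RATIOS = {
    "onset_age_0_18": 14.34,
    "family_history": 10.85,
    "gh_excess": 9.74,
    "large_tumour": 4.49,
}
BASELINE_LOG_ODDS = -4.5  # hypothetical intercept, for illustration only


def aip_risk(predictors: dict[str, bool]) -> tuple[float, str]:
    """Return (predicted probability, risk band) for a set of predictor flags."""
    log_odds = BASELINE_LOG_ODDS + sum(
        math.log(orr) for name, orr in ODDS_RATIOS.items() if predictors.get(name)
    )
    p = 1 / (1 + math.exp(-log_odds))
    if p < 0.05:
        band = "low"        # <5% risk of AIP mutation
    elif p < 0.20:
        band = "moderate"
    else:
        band = "high"       # >=20% risk
    return p, band


# Example: young onset with GH excess, no family history, no macroadenoma.
print(aip_risk({"onset_age_0_18": True, "gh_excess": True}))
```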


Blood ◽  
2015 ◽  
Vol 126 (23) ◽  
pp. 1672-1672
Author(s):  
Meritxell Nomdedeu ◽  
Xavier Calvo ◽  
Dolors Costa ◽  
Montserrat Arnan ◽  
Helena Pomares ◽  
...  

Abstract Introduction: The MDS are a group of clonal hematopoietic disorders characterized by blood cytopenias and an increased risk of transformation into acute myeloid leukemia (AML). The MDS predominate in older people (median age at diagnosis > 70 years), so that a fraction of the observed mortality would be driven by age-related factors shared with the general population rather than by the MDS. Distinguishing between the MDS-related and unrelated mortality rates will allow better assessment of the population health impact of the MDS and more accurate prognostication. This study was aimed at quantifying the MDS-attributable mortality and its relationship with the IPSSR risk categories. Methods: The database of the GESMD was queried for patients diagnosed with primary MDS after 1980 according to the WHO 2001 classification. Patients with CMML, younger than 16 years, or who lacked the basic demographic or follow-up data were excluded. Relative survival and MDS-attributable mortality were calculated by the cohort method and statistically compared by Poisson multivariate regression as described by Dickman (Stat Med 2004; 23: 51). Three main parameters were calculated: the observed (all-cause) mortality, the MDS-attributable mortality (both as percentages of the initial cohort), and the fraction of the observed mortality attributed to the MDS. Results: In total, 7408 patients met the inclusion criteria and constitute the basis for this study. Among these patients, 5307 had enough data to be classified according to the IPSSR. Median age was 74 (IQR: 16-99) years and 58% were males. The most frequent WHO categories were RAEB, type I or II (29% of cases), RCMD (28%), and RA with ring sideroblasts (16%). Most patients (72%) were classified within the very low and low risk categories of the IPSSR. At the study closing date (December 2014), 1022 patients had progressed to AML, 3198 had died (974 after AML) and 3210 were censored alive. The median actuarial survival for the whole series was 4.8 (95% CI: 4.6-5.1) years and 30% of patients are projected to survive longer than 10 years. The overall MDS-attributable mortality at 5 years from diagnosis was 39%, which accounted for three-quarters of the observed mortality (51%, figure). The corresponding figures at 10 years for the MDS-attributable and observed mortality were 55% and 71%, respectively. According to the IPSSR, the 5-year MDS-attributable mortality rate was 19% for the very low risk category, 39% (low risk), 70% (intermediate risk), 78% (high risk), and 92% (very high risk). On average, the incidence rate ratio for the MDS-attributable mortality increased 1.9 times (95% CI: 1.7-2.3, p<0.001) as the IPSSR worsened from one risk category to the next. The fraction of the observed mortality attributed to the MDS was 0.55 for the very low risk category, 0.79 (low risk), 0.93 (intermediate risk), 0.96 (high risk), and 0.99 (very high risk). After distinguishing between AML-related and unrelated mortality, the 5-year MDS-attributable mortality not related to AML was 10% for the very low risk category, 20% (low risk), 33% (intermediate risk), 42% (high risk), and 44% (very high risk). By comparing these figures with the above ones, we could estimate that about 50% of the MDS-attributable mortality was AML-unrelated and that this fraction kept nearly constant across the five IPSSR categories.
Conclusions: About three-quarters of the mortality observed in patients with MDS is caused by the disease, the remaining one-quarter being due to MDS-independent factors shared with the general population. The MDS-attributable mortality increases with the IPSSR risk category, from half the observed mortality in the very low risk to nearly all the mortality observed in the high and very high risk groups. Half the MDS-attributable mortality is driven by factors unrelated to leukemic transformation, a proportion that keeps constant across the five IPSSR risk categories. Disclosures Valcarcel: AMGEN: Honoraria, Membership on an entity's Board of Directors or advisory committees, Speakers Bureau; NOVARTIS: Honoraria, Membership on an entity's Board of Directors or advisory committees; GSK: Membership on an entity's Board of Directors or advisory committees, Speakers Bureau; CELGENE: Honoraria, Membership on an entity's Board of Directors or advisory committees, Speakers Bureau. Ramos:AMGEN: Consultancy, Honoraria; NOVARTIS: Consultancy, Honoraria; JANSSEN: Honoraria, Membership on an entity's Board of Directors or advisory committees; CELGENE: Consultancy, Honoraria, Membership on an entity's Board of Directors or advisory committees, Research Funding. Esteve:Celgene: Consultancy, Honoraria; Janssen: Consultancy, Honoraria.
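The attributable-mortality quantities above come from a relative-survival (cohort-method) analysis; a simplified Python sketch of the underlying arithmetic, with illustrative inputs chosen only to roughly echo the overall 5-year figures, not taken from the study data:

```python
# Simplified view of the cohort-method quantities: relative survival compares
# observed survival in the MDS cohort with the survival expected for a matched
# general population, and the excess (MDS-attributable) mortality is what
# remains after removing the expected background mortality.

def attributable_mortality(observed_survival: float,
                           expected_survival: float) -> dict[str, float]:
    """Split observed mortality into background and disease-attributable parts."""
    observed_mortality = 1 - observed_survival
    background_mortality = 1 - expected_survival
    relative_survival = observed_survival / expected_survival
    excess = observed_mortality - background_mortality   # attributable share
    return {
        "observed_mortality": observed_mortality,
        "relative_survival": relative_survival,
        "attributable_mortality": excess,
        "fraction_attributed": excess / observed_mortality,
    }


# Illustrative 5-year inputs (not study data), chosen so the output roughly
# echoes the overall figures reported above (51% observed, 39% attributable).
print(attributable_mortality(observed_survival=0.49, expected_survival=0.88))
```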


Author(s):  
Gatot Basuki HM

PT. Prima Alloy Steel Universal is a manufacturing company that produces wheel rims for four-wheeled vehicles. One stage of its production process is the casting process, in which the risk of occupational accidents for workers is very high. The aim of this study was to identify the risks of working in the casting department using a Job Safety Analysis approach, with risk assessment and hazard control carried out using the HIRARC method; the results of this analysis were used to mitigate each risk occurring in the casting department. The results show that there are 5 work activities and 13 work sub-activities in the casting department with potential occupational hazards. The risk assessment identified 2 work activities with hazards in the extreme risk category, 4 work activities in the high risk category, 3 in the moderate risk category, and 4 in the low risk category. Risk control measures were implemented through improvement of standard operating procedures (SOPs) for each sub-activity; substitution in sub-activities by using heavy equipment such as forklifts when changing the dies for rim motifs and designs; engineering controls to simplify sub-activities and minimise the occurrence of occupational accidents; administrative controls covering the application of work instructions, monitoring the use of personal protective equipment (APD) and fire extinguishers (APAR), and periodic occupational health and safety (K3) training; provision of hazard warning signs; and provision of personal protective equipment for all sub-activities, to be used as needed when carrying out work activities.
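The HIRARC assessment above assigns each activity to one of four risk categories. A minimal Python sketch of a likelihood-times-severity matrix of the kind HIRARC uses; the 1-5 scales and cut-offs below are assumptions for illustration, since the paper's exact matrix is not reproduced in the abstract:

```python
def hirarc_category(likelihood: int, severity: int) -> str:
    """Classify a hazard from likelihood (1-5) and severity (1-5) ratings.

    The score bands below are an assumed example, not the paper's own matrix.
    """
    score = likelihood * severity
    if score >= 15:
        return "extreme risk"
    if score >= 8:
        return "high risk"
    if score >= 4:
        return "moderate risk"
    return "low risk"


# Example: a frequent hazard (4) with serious consequences (4) in the
# casting department would land in the extreme-risk category.
print(hirarc_category(4, 4))  # extreme risk
```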


Blood ◽  
2015 ◽  
Vol 126 (23) ◽  
pp. 4459-4459 ◽  
Author(s):  
Dr. Muhammad Irfan Khan ◽  
Catriona O'Leary ◽  
Mary Ann Hayes ◽  
Patricia O'Flynn ◽  
Pauline Suzanne Chappell ◽  
...  

Abstract Background Evidence-based consensus guidelines for venous thromboembolism (VTE) prevention have been broadly accepted as effective and safe for more than three decades (Clagett GP et al, 1992). However, VTE continues to be associated with a major global burden of disease, with 3.9 million cases of hospital-acquired thrombosis (HAT) during one year among 1.1 billion citizens of high-income countries (Jha AK et al, 2013). Therefore prevention is the key to reducing death and disability resulting from VTE (Kahn S et al, Gould MK et al & Falck-Ytter Y et al, 2012). Ireland, like many other countries, has yet to implement a mandatory risk assessment tool and thromboprophylaxis (TP) policy nationally. Aims The aim of this study was to calculate the proportion of inpatients who had a VTE risk assessment performed and received appropriate TP in a large tertiary referral hospital. This information will provide vital baseline data for implementation of a new national policy for prevention of HAT. Methods This audit was performed at Cork University Hospital on 4 pre-specified days between November 2014 and February 2015. All adult inpatients (medical and surgical), excluding maternity and psychiatric patients, were included. Patients on therapeutic anticoagulation were also excluded. The patients' medical charts and drug prescription charts were reviewed to determine whether or not a VTE risk assessment was documented for each patient and whether they had received appropriate TP. If no risk assessment had been performed, trained researchers applied the National Institute for Health and Care Excellence (NICE) guideline 92 (January 2010) for VTE risk assessment and prevention. Following the risk assessment, patients were divided into three categories: high risk of VTE with low risk of bleeding, high risk of VTE with significant risk of bleeding, and low risk of VTE. From this, the proportion of patients in each group that received appropriate TP was calculated. Results A total of 1019 patients were enrolled; the majority, 63.5% (n=648), were medical patients. The mean age of patients was 69 years. Females accounted for 52% of patients. Average length of hospitalisation for each patient at the time of the audit was 6 days (range 1-664 days). Overall, a formal TP risk assessment was documented in only 24% (n=244) of all charts reviewed; however, TP was prescribed in 43.2% (n=441) of patients. See Table 1.
Table 1.
                                  High risk of VTE,      High risk of VTE,              Low risk of VTE
                                  low risk of bleeding   significant risk of bleeding
No. of pts                        80.3% (n=819)          16.6% (n=170)                  2.9% (n=30)
VTE risk assessment documented    21.9% (n=180)          28.2% (n=55)                   30% (n=9)
Received TP                       46.3% (n=380)          28.8% (n=49)                   40% (n=12)
Within the high-risk category, 64.3% (n=526) were medical patients. TP was administered to only 46.3% (n=380) of patients in the high-risk category, distributed almost evenly between surgical (50.1%, n=147) and medical (43.4%, n=233) patients. Conclusion This audit was done as the initial step in developing a national policy to prevent HAT. As suspected, it highlights that a large proportion of hospitalised patients, both surgical and medical, continue to be at high risk for VTE despite the availability of preventative measures. It clearly illustrates under-prescription of safe, effective and recommended means of VTE prevention. The current overall figure of less than 50% prescription of VTE thromboprophylaxis in high-risk patients is a major patient safety concern.
There are numerous recognised international guidelines for prevention of VTE, and an efficient method to implement these guidelines needs to be developed. Beyond developing national guidelines for TP, we need a co-ordinated approach to implement and monitor compliance with guidelines. Once the preliminary results of this audit became available to us in March 2015, urgent measures were taken to reduce the identified risk, such as the establishment of a Hospital Thrombosis Group, which developed a user-friendly VTE risk assessment tool and TP policy. The VTE risk assessment tool was incorporated into the patients' drug prescription chart and included a pre-printed prescription for TP. It is now mandatory for all patients to have a VTE risk assessment completed and TP prescribed, if appropriate, within 24 hours of admission. This was successfully piloted for four weeks in the acute medical assessment unit and is now incorporated into each patient's drug chart throughout the hospital. This audit will be repeated 6 months after the introduction of this initiative, with an aim of >90% compliance. Disclosures No relevant conflicts of interest to declare.
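The proportions reported above can be recomputed from the counts in Table 1; a short Python sketch as a sanity check:

```python
# Recompute the audit's headline proportions from the counts in Table 1.
audit = {  # category: (patients, risk assessment documented, received TP)
    "high VTE risk / low bleeding risk":         (819, 180, 380),
    "high VTE risk / significant bleeding risk": (170,  55,  49),
    "low VTE risk":                              ( 30,   9,  12),
}

total_patients = sum(n for n, _, _ in audit.values())
for category, (n, assessed, prophylaxis) in audit.items():
    print(f"{category}: assessed {assessed / n:.1%}, received TP {prophylaxis / n:.1%}")

overall_assessed = sum(a for _, a, _ in audit.values()) / total_patients
overall_tp = sum(t for _, _, t in audit.values()) / total_patients
print(f"overall: assessed {overall_assessed:.1%}, received TP {overall_tp:.1%}")
```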


2021 ◽  
Vol 36 (Supplement_1) ◽  
Author(s):  
Dominic McGovern ◽  
Jennifer Lees ◽  
Dana Kidder ◽  
James Smith ◽  
Jamie Traynor ◽  
...  

Abstract Background and Aims Outcomes in ANCA vasculitis remain difficult to predict and therapeutic decision-making can be challenging. We aimed to establish whether a renal risk score (RRS) could predict outcomes in this population. Method The Scottish Renal Biopsy Registry is a complete national dataset of all renal biopsies performed in Scotland. Those who had a first renal biopsy between 01/01/2014 and 31/12/2017 with evidence of ANCA vasculitis were included. Demographic data, treatment regimens, episodes of relapse and patient and kidney survival were recorded retrospectively. The RRS was calculated using the system proposed by Brix et al (1). Each patient was categorised according to the % of normal glomeruli (N0 >25%, N1 10 to 25%, N2 <10%), the % of tubular atrophy/interstitial fibrosis (T0 ≤25%, T1 >25%) and eGFR (CKD-EPI) at the time of biopsy (G0 >15 mL/min/1.73 m2, G1 ≤15 mL/min/1.73 m2). Individual scores were summated and patients defined as low, medium or high risk. Cox proportional hazard models were created for survival to ESKD, relapse and death, stratified by risk category. Analyses were conducted using R statistical software. Results Two hundred and forty-six patients with biopsy-proven ANCA vasculitis were identified. Fifty percent (n=123), 46% (n=112) and 5% (n=11) were stratified as low, medium and high risk, respectively. Fifty-two percent (n=129) were male and mean age at biopsy was 66.7±12.2 years; this was similar across the risk categories. Mean eGFR was lower in the high-risk category (high risk 8.6±6.1 'v' low risk 45.7±26.0 mL/min/1.73 m2, p<0.001) and proteinuria was higher (high risk 405 (IQR 170-767) 'v' low risk 81 (IQR 41-155) mg/mmol, p<0.001). Thirty-seven percent (n=91) were PR3 antigen positive and 2% (n=5) had dual positivity. In the high-risk category, 8 (73%) were PR3 or dual positive. Eighteen (7%) patients experienced pulmonary haemorrhage, with similar representation across all risk categories. Those categorised as medium or high risk were more likely to receive plasma exchange and/or haemodialysis at presentation (p<0.001) compared with the low-risk category. Overall, 16% (n=40) of patients relapsed, with a trend towards higher risk of relapse in the low-risk group (27% of these patients, p=0.05). Thirty-seven (15%) patients developed ESKD. The Cox proportional hazard model for development of ESKD (Figure 1) shows that those in the high-risk versus low-risk category were more likely to reach ESKD (HR 124.8, 95% CI 26.4-590.3, p<0.001). Patient survival was similar between risk categories. Conclusion A simple RRS, using routinely reported data, in patients with renal biopsy-proven ANCA vasculitis can help to predict development of ESKD. It may also be predictive of future relapse in those with a lower RRS, most likely explained by reduced irreversible damage in this group. The RRS could inform monitoring and treatment decisions. Whilst the numbers are small, a unique strength of these data is that they are based on a complete national dataset, making them less susceptible to bias from regional variations in diagnostic and therapeutic practice.
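A minimal Python sketch of the RRS calculation; the N/T/G strata follow the abstract, while the point weights and category cut-offs are those of the Brix et al system and are assumed here, since the abstract does not list them:

```python
def renal_risk_score(normal_glomeruli_pct: float,
                     tubular_atrophy_if_pct: float,
                     egfr: float) -> tuple[int, str]:
    """ANCA renal risk score; strata as in the abstract.

    Point weights and cut-offs below are those of the published Brix et al
    score and are assumptions here, since the abstract reports only the strata.
    """
    # % normal glomeruli: N0 >25% (0 pts), N1 10-25% (4 pts), N2 <10% (6 pts)
    if normal_glomeruli_pct > 25:
        points = 0
    elif normal_glomeruli_pct >= 10:
        points = 4
    else:
        points = 6
    # tubular atrophy / interstitial fibrosis: T0 <=25% (0 pts), T1 >25% (2 pts)
    points += 2 if tubular_atrophy_if_pct > 25 else 0
    # eGFR at biopsy: G0 >15 (0 pts), G1 <=15 mL/min/1.73 m2 (3 pts)
    points += 3 if egfr <= 15 else 0

    if points == 0:
        category = "low"
    elif points <= 7:
        category = "medium"
    else:
        category = "high"
    return points, category


# Example: 8% normal glomeruli, 40% tubular atrophy/IF, eGFR 12 -> high risk.
print(renal_risk_score(8, 40, 12))  # (11, 'high')
```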


2021 ◽  
Vol 50 (Supplement_1) ◽  
Author(s):  
Naveen Prashar ◽  
Rahuldeep Singh ◽  
Ritin Mohindra ◽  
Vikas Suri ◽  
Ashish Bhalla ◽  
...  

Abstract Background The WHO declared COVID-19 a pandemic on 11 March 2020. It is important to break the chain of transmission by quarantining persons with high-risk exposures. Understanding the reasons for quarantine will help in reducing exposures and thus the need for quarantine. Methods A validated risk assessment tool based on National Centre for Disease Control guidelines was used for the risk assessment of health care workers (HCWs). The forms of HCWs who underwent risk assessment between April and November 2020 were analyzed for reasons for quarantine. The positivity rates among the high-risk and low-risk groups were compared. Results Out of 1414 HCWs who were assessed, 345 were categorized as having high-risk exposure and were quarantined. The most common reasons for quarantine were performance of an aerosol-generating procedure without recommended personal protective equipment (PPE) (34%), exposure to a COVID-19-positive patient without a mask for more than 20 minutes at a distance of less than 1 m (30%), and having food/tea together (27%). The positivity rate was 8.4% in the high-risk and 1.9% in the low-risk exposure group (p-value: <0.001). Positivity in the low-risk category was higher in the second half (19/466; 4.1%) than in the first half (1/603; 0.2%) of the study period. This might be due to exposure from non-hospital sources, as the second half coincided with the first wave of the pandemic. Conclusion Not using recommended PPE and having tea/food breaks together were the most common reasons for quarantine. Key messages Strict enforcement of recommended PPE and staggered tea and food breaks can reduce high-risk exposures.
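A sketch of the kind of rule-based exposure triage described above; only the three most common high-risk criteria mentioned in the abstract are encoded, so this is illustrative rather than the full NCDC-based tool:

```python
def exposure_risk(aerosol_procedure_without_ppe: bool,
                  unmasked_contact_over_20min_within_1m: bool,
                  shared_food_or_tea: bool) -> str:
    """Classify a health care worker's exposure as high or low risk.

    Encodes only the three most common high-risk criteria from the abstract;
    the full assessment tool contains additional items.
    """
    if (aerosol_procedure_without_ppe
            or unmasked_contact_over_20min_within_1m
            or shared_food_or_tea):
        return "high risk (quarantine)"
    return "low risk"


# Example: sharing tea with a colleague who later tested positive.
print(exposure_risk(False, False, True))  # high risk (quarantine)
```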


2020 ◽  
Vol 30 (Supplement_5) ◽  
Author(s):  
I Piras ◽  
G Murenu ◽  
G Piras ◽  
G Pia ◽  
A Azara ◽  
...  

Abstract Background Falls in hospital are adverse events with serious consequences for the patient. Fall risk assessment requires easy tools that are suitable for the specific clinical context. This is important to quickly identify preventive measures. The aim of the study is to identify an appropriate scale for assessing fall risk in patients in an emergency department. Methods For the fall risk assessment in the emergency department, three scales were identified in the literature: Kinder 1, MEDFRAT, and Morse. MEDFRAT and Morse classify the patient as high, moderate, or low risk; Kinder 1 splits patients into "at risk" (when at least one item is positive) and "not at risk" (when all items are negative). The study was carried out in July 2019 in an Italian emergency department. Patients who arrived in triage were assessed for fall risk using the three scales. Results In a sample of 318 patients, the scales show different levels of fall risk. For Kinder 1, 83.02% are at risk and 16.98% are not at risk; for MEDFRAT, 14.78% are at high risk, 15.09% moderate, and 70.13% low risk; for Morse, 8.81% are at high risk, 35.53% moderate, and 56.66% low risk. To compare Kinder 1 with the other scales, which have three levels, we treated all items positive as "high risk", at least one positive item as "moderate risk", and all negative items as "low risk". Thus, Kinder 1 shows no cases at high risk, 83.02% at moderate risk, and 16.98% at low risk. All the scales show that moderate-high risk increases with age. MEDFRAT and Morse have concordant percentages for young (13.6%), elderly (61.2%), and very old (66.6%) people; for Kinder 1, the corresponding figures are 59%, 96.7%, and 100%. Conclusions The comparison between scales shows inconsistency in identifying the level of risk. MEDFRAT and Morse appear more reliable and consistent. Key messages An appropriate assessment scale is important to identify the fall risk level. Identifying accurate fall risk levels allows for implementing specific prevention actions.
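A minimal Python sketch of the Kinder 1 remapping used for the comparison above; the scale's item content is not reproduced, only the any-item / all-items logic:

```python
def kinder1_three_level(item_responses: list[bool]) -> str:
    """Map Kinder 1 item responses (True = positive item) to three risk levels.

    Kinder 1 is natively binary (any positive item -> "at risk"); this is the
    three-level remapping described in the abstract for comparison purposes.
    """
    if all(item_responses):
        return "high"       # every item positive (no such case in the sample)
    if any(item_responses):
        return "moderate"   # at least one positive item
    return "low"            # all items negative


# Example: one positive item out of four puts the patient at moderate risk.
print(kinder1_three_level([False, True, False, False]))  # moderate
```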

