Exploring the Validity and Operational Impact of Using Allied Health Assistants to Conduct Dysphagia Screening for Low-Risk Patients Within the Acute Hospital Setting

2020, Vol 29 (4), pp. 1944-1955
Author(s):
Maria Schwarz
Elizabeth C. Ward
Petrea Cornwell
Anne Coccetti
Pamela D'Netto
...  

Purpose The purpose of this study was to examine (a) the agreement between allied health assistants (AHAs) and speech-language pathologists (SLPs) when completing dysphagia screening for low-risk referrals and at-risk patients under a delegation model and (b) the operational impact of this delegation model. Method All AHAs worked in the adult acute inpatient settings across three hospitals and completed training and competency evaluation prior to conducting independent screening. Screening (pass/fail) was based on results from pre-screening exclusionary questions in combination with a water swallow test and the Eating Assessment Tool. To examine the agreement of AHAs' decision making with SLPs, AHAs (n = 7) and SLPs (n = 8) conducted an independent, simultaneous dysphagia screening on 51 adult inpatients classified as low-risk/at-risk referrals. To examine operational impact, AHAs independently completed screening on 48 low-risk/at-risk patients, with a subsequent clinical swallow evaluation conducted by an SLP for patients who failed screening. Results Exact agreement between AHAs and SLPs on the overall pass/fail screening criteria for the first 51 patients was 100%. Exact agreement for the two tools was 100% for the Eating Assessment Tool and 96% for the water swallow test. In the operational impact phase (n = 48), 58% of patients failed AHA screening, with only 10% false positives on subjective SLP assessment and no identified false negatives. Conclusion AHAs demonstrated the ability to reliably conduct dysphagia screening on a cohort of low-risk patients, with a low rate of false negatives. The data support a high level of agreement and a positive operational impact of using trained AHAs to perform dysphagia screening in low-risk patients.
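The pass/fail logic described in the Method reduces to a simple decision rule. Below is a minimal sketch of that rule; the EAT-10 cutoff (≥ 3) and the water-swallow-test failure signs are assumptions for illustration, as the abstract does not state the exact thresholds or exclusionary questions used.

```python
# Hedged sketch of the screening rule: fail if any exclusionary question
# is positive, the EAT-10 score is at/above an assumed cutoff, or the
# water swallow test is failed; otherwise pass.

def dysphagia_screen(exclusionary_flags: list, eat10_score: int,
                     wst_failed: bool) -> str:
    """Return 'fail' (refer to SLP) or 'pass' for a low-risk patient."""
    if any(exclusionary_flags):   # any pre-screening exclusion triggered
        return "fail"
    if eat10_score >= 3:          # assumed EAT-10 risk cutoff
        return "fail"
    if wst_failed:                # e.g. cough or wet voice on water swallow
        return "fail"
    return "pass"

print(dysphagia_screen([False, False], eat10_score=1, wst_failed=False))  # pass
```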

Author(s):  
Adam C Salisbury
Amit P Amin
Karen P Alexander
Frederick A Masoudi
Yan Li
...  

Background: In-hospital bleeding and new-onset, hospital-acquired anemia (HAA) are both associated with higher mortality in acute myocardial infarction (AMI). Since bleeding is variably defined and often poorly documented, HAA could be a better method to identify at-risk patients, if its prognostic ability were at least as good as documented bleeding. We directly compared the association of HAA and TIMI bleeding with 1-year mortality. Methods: Among 2,803 AMI patients who were not anemic at admission in the 24-center TRIUMPH registry, the presence and severity of HAA and TIMI bleeding were prospectively collected to compare their discrimination of 1-year mortality. Logistic regression models, accounting for clustering using generalized estimating equations, were fit for 1) no bleeding, TIMI minimal, minor and major bleeding and 2) no HAA, mild (hemoglobin (Hgb) > 11 g/dl), moderate (Hgb 9-11 g/dl) and severe HAA (Hgb < 9 g/dl). Discrimination was compared using c-statistics, and reclassification was assessed using the integrated discrimination improvement (IDI), which measures a model's improvement in average sensitivity without sacrificing average specificity vs. another model, and the continuous net reclassification improvement (NRI), to identify the proportion of patients correctly reclassified by the HAA model. Results: HAA was more common (mild: 33%, moderate: 10%, severe: 2%) than TIMI bleeding (minimal: 5%, minor: 3%, major: 1%). Over 1-year follow-up, 111 patients (4%) died. The HAA model was superior to the TIMI bleeding model for 1-year mortality prediction (c-statistic 0.60 vs. 0.51, p<0.001). The IDI of the HAA vs. the bleeding model was 0.009 (95% CI 0.005-0.014) and the relative IDI was 0.26 (26% better average discrimination), with an NRI of 0.32 (0.13-0.50): 17% of patients with events were correctly reclassified to a higher risk, while 14% of patients without events were correctly reclassified to a lower risk by the HAA model. Conclusions: HAA is better than TIMI bleeding for identifying 1-year mortality after AMI hospitalization, and may better identify patients without recognized bleeding who are also at risk for poor outcomes. HAA may be useful for identifying high-risk patients and as a quality assessment tool.
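For readers unfamiliar with the IDI and continuous NRI, the sketch below shows how both can be computed from two models' predicted risks. The data are synthetic placeholders, not TRIUMPH data, and these plain formulas omit the clustering adjustment the study applied.

```python
# Sketch of IDI and category-free NRI given predicted risks from an
# old and a new model; toy data only.
import numpy as np

def idi(y, p_old, p_new):
    """IDI: gain in mean predicted risk among events minus the gain
    among non-events."""
    ev, ne = (y == 1), (y == 0)
    return ((p_new[ev].mean() - p_old[ev].mean())
            - (p_new[ne].mean() - p_old[ne].mean()))

def continuous_nri(y, p_old, p_new):
    """Continuous NRI: net share of events moved to higher risk plus
    net share of non-events moved to lower risk."""
    ev, ne = (y == 1), (y == 0)
    return ((np.mean(p_new[ev] > p_old[ev]) - np.mean(p_new[ev] < p_old[ev]))
            + (np.mean(p_new[ne] < p_old[ne]) - np.mean(p_new[ne] > p_old[ne])))

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 500)
p_old = np.clip(0.04 + 0.05 * y + rng.normal(0, 0.02, 500), 0, 1)
p_new = np.clip(0.04 + 0.08 * y + rng.normal(0, 0.02, 500), 0, 1)
print(f"IDI = {idi(y, p_old, p_new):.3f}, NRI = {continuous_nri(y, p_old, p_new):.2f}")
```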


2019, pp. bmjspcare-2019-001828
Author(s):  
Mia Cokljat
Adam Lloyd
Scott Clarke
Anna Crawford
Gareth Clegg

Objectives Patients with indicators for palliative care, such as those with advanced life-limiting conditions, are at risk of futile cardiopulmonary resuscitation (CPR) if they suffer out-of-hospital cardiac arrest (OHCA). Patients at risk of futile CPR could benefit from anticipatory care planning (ACP); however, the proportion of OHCA patients with indicators for palliative care is unknown. This study quantifies the extent of palliative care indicators and risk of CPR futility in OHCA patients. Methods A retrospective medical record review was performed on all OHCA patients presenting to an emergency department (ED) in Edinburgh, Scotland in 2015. The risk of CPR futility was stratified using the Supportive and Palliative Care Indicators Tool. Patients with 0–2 indicators had a ‘low risk’ of futile CPR; 3–4 indicators had an ‘intermediate risk’; 5+ indicators had a ‘high risk’. Results Of the 283 OHCA patients, 12.4% (35) had a high risk of futile CPR, while 16.3% (46) had an intermediate risk and 71.4% (202) had a low risk. 84.0% (68) of intermediate-to-high risk patients were pronounced dead in the ED or ED step-down ward; only 2.5% (2) of these patients survived to discharge. Conclusions Up to 30% of OHCA patients are being subjected to advanced resuscitation despite having at least three indicators for palliative care. More than 80% of patients with an intermediate-to-high risk of CPR futility are dying soon after conveyance to hospital, suggesting that ACP can benefit some OHCA patients. This study recommends optimising emergency treatment planning to help reduce inappropriate CPR attempts.
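The three-band stratification in the Methods reduces to a small lookup on the indicator count. A minimal sketch, assuming only that the count is already in hand:

```python
# Map a SPICT indicator count to the study's risk bands for futile CPR:
# 0-2 -> low, 3-4 -> intermediate, 5+ -> high.

def cpr_futility_risk(n_indicators: int) -> str:
    if n_indicators >= 5:
        return "high"
    if n_indicators >= 3:
        return "intermediate"
    return "low"

for n in (0, 3, 6):
    print(n, "->", cpr_futility_risk(n))
```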


2011, Vol 93 (5), pp. 370-374
Author(s):
D Veeramootoo
L Harrower
R Saunders
D Robinson
WB Campbell

INTRODUCTION Venous thromboembolism (VTE) prophylaxis has become a major issue for surgeons both in the UK and worldwide. Several different sources of guidance on VTE prophylaxis are available but these differ in design and detail. METHODS Two similar audits were performed, one year apart, on the VTE prophylaxis prescribed for all general surgical inpatients during a single week (90 patients and 101 patients). Classification of patients into different risk groups and compliance in prescribing prophylaxis were examined using different international, national and local guidelines. RESULTS There were significant differences between the numbers of patients in high-, moderate- and low-risk groups according to the different guidelines. When groups were combined to indicate simply ‘at risk’ or ‘not at risk’ (in the manner of one of the guidelines), then differences were not significant. Our compliance improved from the first audit to the second. Patients at high risk received VTE prophylaxis according to guidance more consistently than those at low risk. CONCLUSIONS Differences in guidance on VTE prophylaxis can affect compliance significantly when auditing practice, depending on the choice of ‘gold standard’. National guidance does not remove the need for clear and detailed local policies. Making decisions about policies for lower-risk patients can be more difficult than for those at high risk.


2018, Vol 14 (2), pp. 131
Author(s):
Anna D. Coutinho, BPharm, PhD
Kavita Gandhi, BPharm, MS
Rupali M. Fuldeore, BAMS, MS
Pamela B. Landsman-Blumberg, MPH, DrPH
Sanjay Gandhi, PhD

Objective: Identify opioid abuse risk factors among chronic noncancer pain (CNCP) patients receiving long-term opioid therapy and assess healthcare resource use (HRU) among patients at elevated abuse risk. Design: Data were obtained from an integrated administrative claims database. Classification and Regression Tree (CART) analysis identified risk factors potentially predictive of opioid abuse, which were used to classify the overall population into cohorts defined by levels of abuse risk. Multivariable logistic regression compared HRU across risk cohorts. Setting: Retrospective cohort study. Patients, participants: 21,072 patients aged ≥18 years diagnosed with ≥1 of 5 types of CNCP and a prescription for Schedule II or III/IV opioid medication used long-term (≥90 days). Main outcome measures: (1) Opioid abuse risk factors; (2) HRU differences between risk cohorts. Results: CART analysis identified four groups at elevated opioid abuse risk defined by three factors (age, daily opioid dose, and total days’ supply of opioids); sensitivity: 70.3 percent, specificity: 74.1 percent, and positive predictive value: 5.6 percent. The analysis results were used to classify patients into low-risk (72.5 percent), at-risk (25.4 percent), and opioid-abuser (2.2 percent) cohorts. In multivariable analysis, emergency department (ED) use was higher among at-risk vs low-risk patients (odds ratio [OR]: 1.14; p < 0.05); hospitalization and ED visits were higher for opioid-abusers vs low-risk patients (OR: 2.33 and 2.14, respectively; p < 0.05). Conclusions: This study identifies a subpopulation of CNCP patients at risk of opioid abuse. However, the limited sensitivity and specificity of the criteria defining this subpopulation reinforce the importance of physician discretion in patient-level treatment decisions.
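As an illustration of the CART step, the sketch below fits a shallow classification tree on the three factors the study identified (age, daily opioid dose, days' supply) and reports sensitivity, specificity, and PPV. The data are synthetic; the feature effects, cutoffs, and probability threshold are assumptions, not the study's model.

```python
# Illustrative CART sketch: shallow decision tree on synthetic claims-like
# features, then sensitivity/specificity/PPV against the known labels.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(1)
n = 5000
age = rng.integers(18, 85, n)
daily_dose = rng.lognormal(3.5, 0.6, n)      # morphine-equivalent mg/day
days_supply = rng.integers(90, 720, n)
X = np.column_stack([age, daily_dose, days_supply])

# Synthetic abuse labels, loosely tied to younger age and higher dose.
logit = -3 + 0.03 * (60 - age) + 0.01 * daily_dose
y = rng.random(n) < 1 / (1 + np.exp(-logit))

tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=200).fit(X, y)
pred = tree.predict_proba(X)[:, 1] >= 0.10   # assumed risk cutoff
tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
print(f"sensitivity={tp / (tp + fn):.2f}  specificity={tn / (tn + fp):.2f}  "
      f"PPV={tp / (tp + fp):.2f}")
```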


2020
Author(s):  
Adnan I Qureshi

Background and Purpose There is increasing recognition of a relatively high burden of pre-existing cardiovascular disease in Coronavirus Disease 2019 (COVID-19) infected patients. We determined the burden of pre-existing cardiovascular disease in persons residing in the United States (US) who are at risk for severe COVID-19 infection. Methods Age (60 years or greater) and the presence of chronic obstructive pulmonary disease, diabetes mellitus, hypertension, and/or malignancy were used to identify persons at risk for admission to an intensive care unit, invasive ventilation, or death with COVID-19 infection. Persons were classified as low risk (no risk factors), moderate risk (1 risk factor), and high risk (two or more risk factors present) using a nationally representative sample of US adults from the National Health and Nutrition Examination Survey 2017 and 2018 survey. Results Among a total of 5856 participants, 2386 (40.7%) were considered low risk, 1325 (22.6%) moderate risk, and 2145 (36.6%) high risk for severe COVID-19 infection. The proportion of persons with pre-existing stroke increased from 0.6% in low-risk patients to 10.5% in high-risk patients (odds ratio [OR] 19.9, 95% confidence interval [CI] 11.6-34.3). The proportion with pre-existing myocardial infarction (MI) increased from 0.4% in low-risk patients to 10.4% in high-risk patients (OR 30.6, 95% CI 15.7-59.8). Conclusions A large proportion of persons in the US who are at risk for developing severe COVID-19 infection are expected to have pre-existing cardiovascular disease. Further studies need to identify whether targeted strategies toward cardiovascular disease can reduce mortality in COVID-19 infected patients.
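The risk grouping here is a simple count of risk factors, and the stroke odds ratio can be approximately reproduced from the reported proportions. A sketch, with the 2x2 counts back-calculated from the abstract's percentages (so approximate, not the study's exact counts):

```python
# Risk grouping by counting factors, plus a plain odds-ratio calculation.

def covid_risk_group(age, copd, diabetes, hypertension, malignancy):
    n = sum([age >= 60, copd, diabetes, hypertension, malignancy])
    return "high" if n >= 2 else ("moderate" if n == 1 else "low")

def odds_ratio(events_a, n_a, events_b, n_b):
    """Odds of the event in group A vs group B from raw counts."""
    return (events_a * (n_b - events_b)) / ((n_a - events_a) * events_b)

print(covid_risk_group(72, copd=False, diabetes=True,
                       hypertension=False, malignancy=False))   # high
# ~10.5% of 2145 high-risk vs ~0.6% of 2386 low-risk had prior stroke:
print(round(odds_ratio(225, 2145, 14, 2386), 1))                # ~19.9
```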


2019, Vol 37 (4_suppl), pp. 487-487
Author(s):
Jerome Galon
Fabienne Hermitte
Bernhard Mlecnik
Florence Marliot
Carlo Bruno Bifulco
...  

Background: Immunoscore Colon is an IVD test predicting the risk of relapse in early-stage colon cancer (CC) patients by measuring the host immune response at the tumor site. It is a risk-assessment tool providing independent prognostic value superior to the usual tumor risk parameters and is intended to be used as an adjunct to the TNM classification. Risk assessment is particularly important when deciding whether to propose adjuvant (adj.) treatment for stage (St) II CC patients. High-risk St II patients, defined as those with poor prognostic features including T4, < 12 lymph nodes examined, poor differentiation, VELIPI, and bowel obstruction/perforation, can be considered for adj. chemotherapy (CT). However, additional risk factors are needed to guide treatment decisions. Methods: A subgroup analysis was performed on the St II untreated patients (n = 1130) from the Immunoscore international validation study (Pagès et al., The Lancet 2018). The high-risk patients (with at least 1 clinico-pathological high-risk feature) were classified into 2 categories using pre-defined cutoffs, Low Immunoscore versus High Immunoscore, and their five-year time to recurrence (5Y TTR) was compared to the TTR of the low-risk patients (without any clinico-pathological high-risk feature). Results: Among the patients with high-risk features (n = 630), 438 (69.5%) had a High Immunoscore, with a corresponding 5Y TTR of 87.4% (95% CI 83.9-91.0), statistically similar (unstratified log-rank p > 0.42; Wald p stratified by center > 0.20) to the TTR of 89.1% (95% CI 86.1-92.1) observed for the 500 low-risk patients (with no clinico-pathological high-risk feature). Furthermore, the 5Y TTR for these untreated patients (n = 438) was statistically similar to that of St II patients with high-risk features and a High Immunoscore who received adj. CT (n = 162; 5Y TTR of 83.4%, 95% CI 77.6-89.9). Conclusions: These data show that despite the presence of high-risk features that usually trigger adj. treatment, a substantial proportion of these patients (69.5%) have, when not treated with CT, a recurrence risk similar to the low-risk patients. Therefore, the Immunoscore test could be a good tool for adj. treatment decisions in St II patients.
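The headline comparison is a log-rank test of time-to-recurrence curves between two groups. A hedged sketch using the lifelines library with synthetic survival times (rates chosen only to roughly echo the ~88% five-year recurrence-free figures; not study data):

```python
# Log-rank comparison of time to recurrence between two groups,
# censoring follow-up at 5 years; synthetic exponential times.
import numpy as np
from lifelines.statistics import logrank_test

rng = np.random.default_rng(2)
t_a = rng.exponential(40, 438)      # high-risk features, High Immunoscore
t_b = rng.exponential(42, 500)      # low-risk patients
ev_a, ev_b = t_a <= 5, t_b <= 5     # recurrence observed within 5 years
res = logrank_test(np.minimum(t_a, 5), np.minimum(t_b, 5),
                   event_observed_A=ev_a, event_observed_B=ev_b)
print(f"log-rank p = {res.p_value:.2f}")  # similar curves -> large p
```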


Author(s):  
Ankitkumar K Patel
Rajesh M Kabadi
Rajani Sharma
Rita Schmidt
Elias Iliadis

Background: Lower extremity peripheral artery disease (PAD) is a common syndrome that afflicts many individuals and leads to significant morbidity. The American College of Cardiology (ACC)/American Heart Association (AHA) Guidelines for the Management of Peripheral Artery Disease (JACC, 2006) outline four clinical symptoms (claudication, walking impairment, exertional leg complaints and poorly healing wounds) that should be asked about in at-risk patients. Outpatient cardiology practices often care for individuals at risk for PAD and have the opportunity to screen and improve quality of medical care in accordance with professional guidelines. Methods: A group of 367 outpatients seen in a large academic cardiology practice from September 2011 underwent chart review. Risk factors for PAD that were assessed included history of smoking, hypertension, diabetes, hyperlipidemia, homocysteine levels, and CRP. Those who had three or more risk factors or a previous diagnosis of known PAD were classified as high risk and those with fewer than 2 risk factors were classified as low risk. Documentation of whether clinical symptoms were asked about was obtained from the outpatient chart. The Fisher exact test was utilized for statistical analysis. Results: Fifty-seven percent (N=208) of our population were classified as high risk for PAD and forty-three percent (N=158) were low risk. Table 1 shows the assessment of clinical symptoms in high- and low-risk patients. Conclusions: Though both high-risk and low-risk PAD patients are assessed at equivalent rates for clinical symptoms, the vast majority of patients overall are underassessed. Lack of knowledge of clinical symptoms can lead to underscreening of PAD and thus undertreatment. Increasing clinical symptom screening in the outpatient cardiology setting can lead to quality improvement and adherence to ACC/AHA Guidelines.
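The group comparison here is a Fisher exact test on a 2x2 documentation table. A minimal sketch with scipy; the cell counts are illustrative stand-ins (only the group sizes, 208 and 158, come from the abstract):

```python
# Fisher exact test: was symptom assessment documented at different
# rates in high- vs low-risk patients? Counts are illustrative.
from scipy.stats import fisher_exact

#          documented  not documented
table = [[52, 156],    # high-risk patients (N = 208)
         [38, 120]]    # low-risk patients  (N = 158)
or_, p = fisher_exact(table)
print(f"OR = {or_:.2f}, p = {p:.3f}")   # similar rates -> p well above 0.05
```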


2018, Vol 20 (3), pp. 208-215
Author(s):
Donald Stewart
John Kinsella
Joanne McPeake
Tara Quasim
Alex Puxty

Purpose Patients with alcohol-related disease constitute an increasing proportion of those admitted to the intensive care unit. There is currently limited evidence regarding the impact of alcohol use on levels of agitation, delirium and sedative requirements in the intensive care unit. This study aimed to determine whether alcohol-abuse patients admitted to the intensive care unit have different sedative requirements and levels of agitation and delirium compared to patients with no alcohol issues. Methods This retrospective analysis of a prospectively acquired database (June 2012–May 2013) included 257 patients. Subjects were stratified into three risk categories, alcohol dependency (n = 69), at risk (n = 60) and low risk (n = 128), according to Fast Alcohol Screening Test scores and World Health Organisation criteria for alcohol-related disease. Data on agitation and delirium were collected using validated retrospective chart-screening methods, and sedation data were extracted and then log-transformed to fit the regression model. Results The incidence of agitation (p = 0.034) and delirium (p = 0.041) was significantly higher amongst alcohol-dependent patients compared to low-risk patients, as was the likelihood of adverse events (p = 0.007). In contrast, at-risk patients were at no higher risk of these outcomes compared to the low-risk group. Alcohol-dependent patients experienced suboptimal sedation levels more frequently and received a wider range of sedatives (p = 0.019) but did not receive higher daily doses of any sedative. Conclusions Our analysis demonstrates that when admitted to the intensive care unit, it is those who abuse alcohol most severely, alcohol-dependent patients, rather than at-risk drinkers, who have a significantly increased risk of agitation, delirium and suboptimal sedation. These patients may require closer assessment and monitoring for these outcomes whilst admitted.
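A minimal sketch of the dose-modelling step: sedative doses are right-skewed, so they are log-transformed before fitting a linear model against risk category. The data are synthetic; only the group sizes match the abstract, everything else is assumed.

```python
# OLS on log-transformed sedative dose with risk category as predictor,
# low-risk group as the reference level. Synthetic doses.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
group = np.repeat(["dependent", "at_risk", "low_risk"], [69, 60, 128])
dose = rng.lognormal(mean=4.0, sigma=0.5, size=group.size)  # mg/day
df = pd.DataFrame({"group": group, "log_dose": np.log(dose)})

fit = smf.ols("log_dose ~ C(group, Treatment(reference='low_risk'))",
              data=df).fit()
print(fit.params)   # coefficients are log-dose differences vs low risk
```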


QJM, 2019, Vol 113 (1), pp. 20-24
Author(s):
J Mizrahi
J Kott
E Taub
N Goolsarran

Summary Background The Modified Early Warning System (MEWS) is a well-validated tool used by hospitals to identify patients at high risk of an adverse event. However, there has been little evaluation of whether a low MEWS score can predict a low likelihood of an adverse event. Aim The present study aims to evaluate the MEWS score as a method of identifying patients at low risk for adverse events. Design Retrospective cohort study of 5676 patient days and analysis of associated MEWS scores, medical comorbidities and adverse events. The primary outcome was the association of average daily MEWS scores in those who had an adverse event compared with those who did not. Results Those with an average MEWS score of >2 were over 9 times more likely to have an adverse event compared with those with an average MEWS score of 1-2, and over 15 times more likely to have an adverse event compared to those with an average MEWS score of <1. Conclusions Our study shows that those with average daily MEWS scores <2 have a significantly lower likelihood of an adverse event compared with a score of >2, deeming them ‘low-risk patients’. Formal recognition of such patients can have major implications in a hospital setting, including more efficient resource allocation in hospitals and better patient satisfaction and safety by adjusting patient monitoring according to individual risk profiles.
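The stratification reported here is simply a threshold on the average of a patient's daily MEWS scores. A small sketch (score values are taken as given; computing MEWS itself from vital signs is outside the abstract's scope):

```python
# Bucket a patient's average daily MEWS score into the study's bands:
# < 1, 1-2, and > 2 (the '> 2' band carried the elevated event risk).
from statistics import mean

def mews_band(daily_scores):
    avg = mean(daily_scores)
    if avg > 2:
        return "elevated risk (> 2)"
    if avg >= 1:
        return "intermediate (1-2)"
    return "low-risk patient (< 1)"

print(mews_band([0, 1, 0, 1]))   # average 0.5 -> low-risk patient
```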


2021, Vol 7
Author(s):
Adam Bernstein
Randall Moore
Lauren Rhee
Dina Aronson
David Katz

Malnutrition is common among hospitalized patients and is associated with longer hospital stays, higher rates of rehospitalization, and increased mortality. Validated questionnaires of varying sensitivity and specificity have been developed to help identify patients at risk of malnutrition, but none has been broadly adopted. Tools to identify patients at risk for malnutrition should be quick, inexpensive, easy to administer and use, not require specialized nutrition knowledge, and provide results that can be entered into an electronic medical record; ideally, the tool should be deployed within 24 hours of admission and repeated if warranted. We hypothesize that a novel digital nutrition assessment tool using the Diet Quality Photo Navigation (DQPN) method can help triage hospitalized patients toward further evaluation of nutritional status. We further propose that micronutrient deficiencies may be identified at the same time as malnutrition and that the reimbursement and cost savings from DQPN will prove substantially greater than the combined costs of its use and any triggered dietitian consult. Deploying DQPN upon admission would add to standard hospital intake procedure in a way that is frictionless for patients and health professionals, and that may be overseen by clerical rather than clinical staff. The digital format of DQPN, which can be integrated into electronic medical records, will facilitate easier tracking and management of nutritional status over the course of hospitalization and post-discharge. To evaluate these hypotheses, DQPN will be deployed in a hospital setting to a group of patients who will also be seen by a registered dietitian to assess each patient's nutritional status. Receiver operating characteristic curves will determine the point, or criterion, at which the true-positive rate is maximal and the false-positive rate minimal for a diagnosis of malnutrition and specific nutrient deficiencies. The study cohort will also be compared with a matched historical cohort on total medical spend and reimbursement. Testing these hypotheses will thus allow insight into whether DQPN can identify malnutrition and nutrient deficiencies in hospitalized patients and, in so doing, improve patient outcomes, reduce healthcare utilization, and bring financial benefit to hospitals.
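The planned ROC step corresponds to choosing the criterion that maximises Youden's J (true-positive rate minus false-positive rate). A hedged sketch with synthetic scores standing in for DQPN output and the dietitian's diagnosis as the reference label:

```python
# Find the score criterion maximising Youden's J on an ROC curve.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(4)
y_true = rng.integers(0, 2, 300)                 # dietitian diagnosis
score = y_true * 0.8 + rng.normal(0, 0.5, 300)   # tool's risk score
fpr, tpr, thresholds = roc_curve(y_true, score)
j = tpr - fpr                                    # Youden's J at each cutoff
best = thresholds[np.argmax(j)]
print(f"criterion = {best:.2f}, J = {j.max():.2f}")
```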

