risk stratification tool
Recently Published Documents

TOTAL DOCUMENTS: 247 (five years: 113)
H-INDEX: 20 (five years: 6)

Author(s):  
Andrew R Melville ◽  
Karen Donaldson ◽  
James Dale ◽  
Anna Ciechomska

Abstract Objective To externally validate the Southend GCA Probability Score (GCAPS) in patients attending a GCA Fast-Track Pathway (GCA FTP) in NHS Lanarkshire. Methods Consecutive GCA FTP patients between November 2018 and December 2020 underwent GCAPS assessment as part of routine care. GCA diagnoses were supported by USS ± TAB and confirmed at 6 months. Percentages of patients with GCA according to GCAPS risk group, performance of total GCAPS in distinguishing GCA/non-GCA final diagnoses, and test characteristics using different GCAPS binary cut-offs were assessed. Associations between individual GCAPS components and GCA, and the value of USS and TAB in the diagnostic process, were also explored. Results 44/129 patients were diagnosed with GCA, including 0/41 GCAPS low risk patients (GCAPS <9), 3/40 medium risk (GCAPS 9–12), and 41/48 high risk (GCAPS >12). Overall performance of GCAPS in distinguishing GCA/non-GCA was excellent [ROC AUC 0.976 (95% CI 0.954–0.999)]. GCAPS cut-off ≥10 had 100.0% sensitivity and 67.1% specificity for GCA. GCAPS cut-off ≥13 had the highest accuracy (91.5%), with 93.2% sensitivity and 90.6% specificity. Several individual GCAPS components were associated with GCA. Sensitivity of USS increased across ascending GCAPS risk groups (0%, 33.3%, and 90.2%, respectively). TAB was diagnostically useful in cases where USS was inconclusive. Conclusion This is the first published study describing application of GCAPS outside the specialist centre where it was developed. Performance of GCAPS as a risk stratification tool was excellent. GCAPS may have additional value for screening GCA FTP referrals and guiding empirical glucocorticoid treatment.
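The test characteristics reported at the GCAPS ≥13 cut-off can be reproduced from a 2×2 confusion matrix. The counts below are inferred from the published percentages (44/129 GCA overall; 93.2% sensitivity implies 41 true positives, 90.6% specificity implies 77 true negatives) and are assumptions consistent with the abstract, not figures it states directly; a minimal Python sketch:

```python
# Counts inferred from the reported percentages (assumption, see lead-in).
tp, fn = 41, 3    # GCA patients scoring >=13 / <13
tn, fp = 77, 8    # non-GCA patients scoring <13 / >=13

sensitivity = tp / (tp + fn)                 # 41/44
specificity = tn / (tn + fp)                 # 77/85
accuracy = (tp + tn) / (tp + fn + tn + fp)   # 118/129

print(f"sensitivity={sensitivity:.1%}, "
      f"specificity={specificity:.1%}, accuracy={accuracy:.1%}")
# sensitivity=93.2%, specificity=90.6%, accuracy=91.5%
```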


2021 ◽  
Author(s):  
Geneviève Rouleau ◽  
Venkatesh Thiruganasambandamoorthy ◽  
Kelly Wu ◽  
Bahareh Ghaedi ◽  
Phuong Anh NGuyen ◽  
...  

Abstract Background: The Canadian Syncope Risk Score (CSRS) is a validated risk stratification tool developed to optimize the accuracy of emergency department decisions and inform evidence-based clinical actions. While preliminary work has been undertaken to understand the barriers to CSRS use, no work to date has explored how to implement the CSRS to overcome these barriers in practice. This study aimed to identify which implementation strategies are most appropriate to address these barriers and how they should be implemented to mitigate the possibility of poor uptake. Methods: We conducted a series of three iterative online user-centered design workshops with emergency medicine physicians from three hospital sites in Ontario, Canada. The objective of the workshops was to engage participants in identifying acceptable strategies to promote CSRS uptake and how they should be operationalized. To support this, we systematically mapped previously identified barriers to corresponding behaviour change techniques to identify the strategies most likely to effect change. The sessions were audio-recorded, and dialogue relating directly to the study objective was transcribed. We performed a qualitative content analysis according to pre-defined objectives for each workshop. Results: Fourteen physicians participated across the three workshops. The main implementation strategies identified to overcome the identified barriers were: education in the form of meetings, videos, journal clubs, and posters (for uncertainty around when and how to apply the CSRS); an online calculator and integration of the CSRS into the electronic medical record (for uncertainty in how to apply the CSRS); a local champion (for lack of team buy-in); and dissemination of evidence summaries and feedback through email communications (for lack of evidence about impact). Conclusions: The ability of the CSRS to effectively improve patient safety and syncope management relies on broad buy-in and uptake across physicians.
To ensure the CSRS is well-positioned for impact, a comprehensive suite of implementation strategies was identified to address known barriers. This next phase of work will provide insight into whether these strategies facilitated better alignment with barriers, higher physician engagement with the implementation strategies, and broader uptake of the CSRS, with the objective of improving the likelihood that the CSRS will positively influence patient outcomes.


2021 ◽  
Vol 5 (Supplement_1) ◽  
pp. 595-595
Author(s):  
Jennifer Woodward ◽  
Tru Byrnes

Abstract Delirium is a disturbance of attention accompanied by a change in baseline cognition that is commonly seen in acute care settings and affects up to 80% of ICU patients. The development of delirium has adverse effects on patient outcomes and drives high health care costs. Of patients aged 65+ admitted to our hospital in 2019, non-delirious patients had a five-day length of stay (LOS), compared with a 10–14-day LOS in delirious patients. A five-day LOS increase adds $8,325 per patient, for an extra annual cost of $15 million. Additionally, delirium often goes unrecognized: a prior retrospective study showed that 31% of older adults seen by a Geriatrics provider were diagnosed with delirium, while only 11% were detected by nurses' CAM screens. Given the need to improve delirium detection and management, a QI project was undertaken to recruit an interdisciplinary team, create a risk stratification tool to identify patients at substantial risk of developing delirium, and develop a delirium prevention protocol. Patients with a score of ≥4 were started on a nurse-driven delirium protocol that included a delirium precaution sign and caregiver education. Six months of data have shown a 33% increase in delirium detection, a 7.7-day reduction in LOS, a 27% reduction in SNF discharges, and a significant LOS saving of 231 days. The LOS reduction was statistically significant (p < 0.04). The cost avoidance in LOS alone was $384,615 for delirium patients.
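The cost figures in this abstract are internally consistent with a per-day bed cost of $1,665, a value inferred here from the stated numbers rather than given explicitly in the source; a quick arithmetic check:

```python
# Per-day cost implied by "$8,325 per patient for a 5-day LOS increase".
extra_cost_per_patient = 8325      # USD
extra_days = 5
cost_per_day = extra_cost_per_patient / extra_days   # 1665.0 USD/day

# The same rate reproduces the reported cost avoidance for 231 saved days.
days_saved = 231
cost_avoidance = days_saved * cost_per_day
print(cost_avoidance)              # 384615.0, matching the reported $384,615
```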


2021 ◽  
Author(s):  
Maria Pantelidou ◽  
Iztok Caglic ◽  
Anne George ◽  
Oleg Blyuss ◽  
Vincent Gnanapragasam ◽  
...  

Abstract Purpose To investigate the accuracy of surface-based ultrasound-derived PSA density (US-PSAD) versus gold-standard MRI-PSAD as a risk-stratification tool. Methods Single-centre prospective study of patients undergoing MRI for suspected prostate cancer (PCa). Four combinations of US volumes were calculated using transperineal (TP) and transabdominal (TA) views, with triplanar measurements used to calculate volume and US-PSAD. The intra-class correlation coefficient (ICC) was used to compare US and MRI volumes. Categorical comparison of MRI-PSAD and US-PSAD was performed at PSAD cut-offs <0.15, 0.15–0.20, and >0.20 ng/mL2 to assess agreement with MRI-PSAD risk-stratification decisions. Results 64 men were investigated, with mean age 69 years and mean PSA 7.0 ng/mL. 36/64 had biopsy-confirmed prostate cancer (18 Gleason 3+3, 18 Gleason ≥3+4). Mean MRI-derived gland volume was 60 mL, compared with 56 mL for TA-US and 65 mL for TP-US. ICC demonstrated good agreement of all US volumes with MRI, with the highest agreement for transabdominal US, followed by combined TA/TP volumes. Risk-stratification decisions to biopsy showed concordant agreement between triplanar MRI-PSAD and ultrasound-PSAD in 86-91% and 92-95% of cases at PSAD thresholds of >0.15 ng/mL2 and >0.12 ng/mL2, respectively. Biopsy decision-making at the >0.12 ng/mL2 threshold demonstrated sensitivity ranges of 81-100%, specificity 85-100%, PPV 86-100%, and NPV 83-100%. Transabdominal US provided optimal sensitivity of 100% for this clinical decision, with specificity of 85%; transperineal US provided optimal specificity of 100%, with sensitivity of 87%. Conclusion Transperineal US and combined TA-TP US-derived PSA density values compare well with standard MRI-derived values and could be used to provide accurate PSAD at presentation and inform the need for further investigations.
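PSA density is simply PSA divided by gland volume, with triplanar volume conventionally estimated by the prolate ellipsoid formula (length × width × height × π/6). The measurement values below are hypothetical, chosen to land near the cohort's mean volume (~60 mL) and mean PSA (7.0 ng/mL); the ellipsoid convention is a common assumption, not something the abstract specifies:

```python
import math

# Triplanar (prolate ellipsoid) prostate volume from three orthogonal
# measurements, then PSA density in ng/mL^2.
def gland_volume_ml(length_cm, width_cm, height_cm):
    return length_cm * width_cm * height_cm * math.pi / 6

def psa_density(psa_ng_ml, volume_ml):
    return psa_ng_ml / volume_ml

vol = gland_volume_ml(5.2, 4.9, 4.5)   # hypothetical measurements, ~60 mL
psad = psa_density(7.0, vol)           # cohort mean PSA was 7.0 ng/mL
print(f"volume={vol:.0f} mL, PSAD={psad:.3f} ng/mL^2")
```

At the study's >0.12 ng/mL2 biopsy threshold, this illustrative patient (PSAD ≈ 0.117) would fall just below the decision cut-off.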


F1000Research ◽  
2021 ◽  
Vol 10 ◽  
pp. 793
Author(s):  
Diana Malaeb ◽  
Souheil Hallit ◽  
Nada Dia ◽  
Sarah Cherri ◽  
Imad Maatouk ◽  
...  

Background: Non-communicable diseases, the major cause of death and disability, are susceptible to modifiable and non-modifiable risk factors. Atrial fibrillation (AF) increases the risk of stroke by 4 to 5 times and can lead to cardiovascular mortality. This study was conducted to assess the effects of different sociodemographic and socioeconomic factors on stroke development in patients with AF. Methods: A cross-sectional study was conducted between January and June 2018 on patients recruited from Lebanese community pharmacies. The CHA2DS2-VASc scoring system was used as a stroke risk stratification tool in AF patients. Participants with a previous physician diagnosis of AF, documented in medical records, were included in this study. Data were collected through a survey distributed to all eligible patients. Results: A total of 524 patients were enrolled in the study, with a mean age (±SD) of 58.75 ± 13.59 years; hypertension (78.4%) was the most prevalent disease. The results showed that obesity (Beta=0.61, p-value=0.011), retirement and unemployment compared to employment (Beta=1.44 and 1.44, p-value=0.001 respectively), and divorced/widowed compared to married status (Beta=1.38, p-value=0.001) were significantly associated with higher CHA2DS2-VASc scores, whereas high versus low socio-economic status (Beta=-1.03, p=0.009) and high school versus primary education level (Beta=-0.49, p-value=0.025) were significantly associated with lower CHA2DS2-VASc scores. Conclusions: The study highlights that the CHA2DS2-VASc score is affected by various sociodemographic and socioeconomic characteristics in patients with AF. Thus, screening for those factors may predict the progression of cardiovascular disease and may inform optimal intervention.
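For reference, the CHA2DS2-VASc score follows a standard published convention: 1 point each for congestive heart failure, hypertension, diabetes, vascular disease, age 65–74, and female sex, and 2 points each for age ≥75 and prior stroke/TIA. A minimal sketch (parameter names are my own, not from the study):

```python
# Standard CHA2DS2-VASc scoring convention (maximum score 9).
def cha2ds2_vasc(age, female, chf, hypertension, diabetes,
                 stroke_tia, vascular_disease):
    score = 2 if age >= 75 else (1 if age >= 65 else 0)
    score += 2 if stroke_tia else 0
    # Each remaining factor contributes 1 point.
    score += sum([chf, hypertension, diabetes, vascular_disease, female])
    return score

# e.g. a 70-year-old woman with hypertension:
# 1 (age 65-74) + 1 (female sex) + 1 (hypertension) = 3
print(cha2ds2_vasc(70, True, False, True, False, False, False))  # 3
```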


EMJ Diabetes ◽  
2021 ◽  
pp. 80-83
Author(s):  
Pablo Millares Martin ◽  
Rosa Bobet Reyes

Background: Heart failure (HF) is underdiagnosed among patients with diabetes. Awareness is required to improve its management and reduce its impact. Objectives: To propose a risk assessment tool that could facilitate the early diagnosis of HF and even reduce its incidence by enabling individualised management plans. Methods: Review of the current medical literature for parameters that indicate a higher risk of HF among the diabetic population. Results: Twenty-four parameters were found that could form the basis of a risk stratification tool. Conclusion: The concept of a risk stratification tool is presented. Work on validating it will be required. It has the potential to affect the future management of patients with diabetes and to reduce the incidence and prevalence of HF in this population.


Blood ◽  
2021 ◽  
Vol 138 (Supplement 1) ◽  
pp. 3434-3434
Author(s):  
Yahan Li ◽  
Xue Sun ◽  
Xin Wang ◽  
Xiaosheng Fang

Abstract Background Numerous studies have confirmed that National Comprehensive Cancer Network (NCCN) risk stratification or pre-transplant minimal residual disease (MRD) levels can predict the risk of recurrence and survival after transplantation, but it is unclear whether combining these two parameters can predict prognosis more accurately. Methods We retrospectively analyzed 85 patients with acute myeloid leukemia (AML) who underwent allogeneic hematopoietic stem cell transplantation (allo-HSCT) and constructed a new risk stratification tool combining NCCN risk stratification and pre-transplant MRD levels. All patients were grouped by NCCN risk stratification (favorable/intermediate prognosis and poor prognosis groups), pre-transplant MRD levels (MRD(-) group (<0.1%) and MRD(+) group (≥0.1%)), and a combination of the two (low, intermediate, and high risk groups); graft-versus-host disease (GVHD) and prognosis were compared between groups. Results Relative to the favorable/intermediate prognosis group, the poor prognosis group had poorer OS and RFS (71% vs 82%, P= .156; 60% vs 74%, P= .101) and higher CIR (29% vs 20%, P= .229) and NRM (23% vs 14%, P= .200). The incidence of aGVHD and cGVHD was slightly lower in the favorable/intermediate prognosis group than in the poor prognosis group (38% vs 46%, P= .415; 10% vs 11%, P= .572). Relative to the MRD(+) group, the MRD(-) group had significantly better OS and RFS (89% vs 59%, P= .002; 79% vs 50%, P= .003), lower CIR and NRM (15.1% vs 37.5%, P= .011; 11.3% vs 28%, P= .040), and a lower incidence of cGVHD (6% vs 19%, P= .022). The new risk stratification tool stratified patients into low, intermediate, and high risk groups. Patients in the high-risk group had the highest incidence of aGVHD and cGVHD (42% vs 35% vs 53%, P= .606; 6% vs 11% vs 20%, P= .157). The difference in cGVHD between the low- and high-risk groups was significant (P= .038).
Three-year OS was 93.9%, 70%, and 60% (P= .011) and RFS was 85%, 62%, and 46.7% (P= .009) for low-, intermediate-, and high-risk patients, respectively. The differences in OS and RFS between the low- and intermediate-risk groups were statistically significant (P= .010; P= .025), as were the differences between the low- and high-risk groups (P= .001; P= .001). Patients in the high-risk group had the highest CIR and NRM relative to those in the low- and intermediate-risk groups (9% vs 32% vs 33.3%, P= .027; 6% vs 24.3% vs 26.7%, P= .059). The differences in CIR (P= .012) and NRM (P= .028) between the low- and intermediate-risk groups were statistically significant, as were those between the low- and high-risk groups (CIR: P= .028; NRM: P= .021). Multivariate analysis indicated that time to ANC recovery, time from diagnosis to transplantation, and the novel risk stratification were independent prognostic factors. Conclusions Both pre-transplant MRD levels and NCCN risk stratification predict AML prognosis after allo-HSCT, and combining the two can predict post-transplant prognosis more accurately. Disclosures No relevant conflicts of interest to declare.
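The abstract does not spell out how the NCCN group and pre-transplant MRD status are combined into three risk groups. One plausible mapping, assumed here purely for illustration (concordant favorable/intermediate + MRD-negative → low, concordant poor + MRD-positive → high, discordant combinations → intermediate), can be sketched as:

```python
# Hypothetical combination rule for the two-parameter risk tool
# (assumption; the exact assignment is not given in the abstract).
def combined_risk(nccn_poor: bool, mrd_positive: bool) -> str:
    if not nccn_poor and not mrd_positive:
        return "low"
    if nccn_poor and mrd_positive:
        return "high"
    return "intermediate"

print(combined_risk(nccn_poor=False, mrd_positive=False))  # low
print(combined_risk(nccn_poor=True, mrd_positive=False))   # intermediate
```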


2021 ◽  
Vol 8 (Supplement_1) ◽  
pp. S654-S655
Author(s):  
Rupal K Jaffa ◽  
Minh-Thi Ton ◽  
Jeanne Forrester ◽  
Rupal Patel ◽  
Courtney W Brantley ◽  
...  

Abstract Background Penicillin allergies are commonly reported, yet more than 95% of these patients can tolerate β-lactams. A comprehensive allergy history is essential when determining which patients can safely receive a β-lactam, but is rarely obtained. When available, interpretation of the history is often limited by lack of comfort in determining the risk of an allergic reaction. Our antimicrobial stewardship and allergy team created a standardized allergy history questionnaire and risk stratification tool. The purpose of this study was to validate this tool by comparing risk levels assigned by various clinicians to those assigned by an allergist. Methods We prospectively identified 50 adult and 50 pediatric patients hospitalized between July 1, 2020 and March 31, 2021 with an allergy to penicillin, amoxicillin, ampicillin, or cephalexin. Patients with severe non-IgE-mediated reactions were excluded. All patients (or caregivers) were interviewed by the same pharmacist using the allergy questionnaire. Clinicians from various subspecialties, including an adult and a pediatric allergist, an adult and a pediatric infectious diseases (ID) physician, an adult and a pediatric hospitalist, and an adult and a pediatric ID pharmacist, received anonymized completed questionnaires and the risk stratification tool, but were blinded to other clinicians' responses. The primary endpoint was overall concordance in risk stratification between non-allergists and allergists. Results Overall concordance was 66% (33/50) in adult and 90% (45/50) in pediatric patients (Table 1. Clinician Agreement with Allergist). Concordance between individual clinicians and the allergist is shown in Figure 1 (Risk Stratification Severity Compared to Allergist). In adults, anaphylaxis, difficulty breathing, and angioedema were associated with less severe stratification by non-allergists than by allergists. No clinician stratified any pediatric patient into a lower risk category than the allergist. Conclusion Use of a β-lactam allergy risk stratification tool led to agreement with the allergist assessment in the majority of patients. Variation in risk assignment was greater in adult patients; however, non-allergist pediatric providers assigned all patients to the same or a more severe level than the allergist, indicating safety in this population. Disclosures All Authors: No reported disclosures

