Time Trends in the Usefulness of Pretest Prediction and D-Dimer Testing for the Diagnosis of Venous Thromboembolism (VTE).

Blood ◽  
2006 ◽  
Vol 108 (11) ◽  
pp. 3304-3304
Author(s):  
Normand Blais ◽  
Jacques Morais ◽  
Nicolas Sauve ◽  
Louise St-Onge ◽  
Nathalie Aucoin

Abstract Objectives: Emergency room (ER) evaluation may differ when physicians see patients in the context of clinical trials compared to routine care. We aimed to determine whether patterns in the use of a systematic pretest probability (PTP) stratification tool and D-dimer testing change over time and whether this influences their clinical utility. Methods: We reviewed the charts of 974 cases that had D-dimer testing for VTE exclusion in two time periods: 474 consecutive patients evaluated between 07/02 and 10/02 and 500 between 07/04 and 10/04. The former cohort was managed by ER physicians who had just participated in a clinical trial on VTE diagnosis that included PTP assessment and D-dimer testing, whereas the latter group had received no formal training after 2002. In both cohorts, pretest scoring as low, intermediate or high risk according to the Wells criteria for DVT and PE was performed in every case, since the laboratory required it. D-dimer testing (Vidas® D-Dimer, Biomerieux) was performed only in the low and moderate risk groups after reception of the PTP assessment form. Physicians were also asked to fill out a form detailing the individual Wells criteria leading to the overall assessment, but doing so was not mandatory to obtain the D-dimer result from the laboratory. Results: Pretest probability was evaluated as low, moderate or high in 66.7%, 31.9% and 1.3% vs. 78.4%, 21.2% and 0% (all comparisons are 2002 vs. 2004). There was a significant increase in the proportion of undetailed PTP evaluations (32.7% vs. 61%). Detailed forms in both cohorts had a similar risk distribution (low/moderate risk 51.7%/46.4% vs. 59.2%/40.8%), whereas undetailed forms were usually quoted as low risk (97.4% vs. 90.5%). The number of D-dimer tests performed per month was stable over the three-year period of observation and in the two evaluated cohorts (119/month vs. 125/month). 
D-dimer results were negative in 299/474 cases (63.1%) in 2002 and 359/500 (71.8%) in 2004 (p=0.003), although this difference was less apparent when analysed according to the individual PTP risk groups (low/moderate risk 77.3%/51.9% vs. 71.2%/48.3%). The incidence of VTE events decreased over time from 5.3% to 1.6% (p=0.002). Incidence in the low/moderate risk groups was 1.6%/10.6% vs. 0.3%/6.6%. Only one false negative result (popliteal vein DVT) was observed in the two cohorts (NPV = 99.9%). Conclusion: Our results show a decreasing incidence of VTE over time in an ER population screened by D-dimer testing and PTP, even though the number of tests performed was stable over time. This was accompanied by a decreasing number of cases considered to have an intermediate PTP. These findings suggest a change of practice over time, resulting in increasing use of D-dimer testing for very low risk patients and decreasing use for intermediate risk patients. A decrease in the proper use of the PTP tool over time might result in overestimation of the physician-perceived risk and therefore lead to an increase in imaging resource utilisation. Broader studies including imaging prescription trends over time will be needed to confirm this hypothesis.
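The PTP-gated testing pathway described above can be sketched in a few lines of Python. This is an illustrative sketch, not the study's protocol code: the function name, labels, and the 500 µg/L FEU negativity threshold (the conventional cutoff for this class of assay) are assumptions for illustration.

```python
# Hypothetical sketch of the PTP-gated D-dimer workflow described in the abstract.
# Labels and the 500 ug/L FEU negativity cutoff are illustrative assumptions.

def triage_vte(wells_risk, d_dimer_ug_l=None):
    """Suggest the next diagnostic step for suspected VTE.

    wells_risk: 'low', 'moderate', or 'high' (Wells pretest probability).
    d_dimer_ug_l: D-dimer in ug/L FEU, or None if not yet measured.
    """
    if wells_risk == "high":
        return "imaging"           # D-dimer was not performed in high-risk patients
    if d_dimer_ug_l is None:
        return "order D-dimer"     # test released only after the PTP form is received
    if d_dimer_ug_l < 500:
        return "VTE excluded"      # negative D-dimer rules out VTE at low/moderate PTP
    return "imaging"
```

For example, `triage_vte("low", 300)` returns `"VTE excluded"`, while any high-PTP patient goes straight to imaging without a D-dimer.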

2020 ◽  
Vol 154 (Supplement_1) ◽  
pp. S5-S5
Author(s):  
Ridin Balakrishnan ◽  
Daniel Casa ◽  
Morayma Reyes Gil

Abstract The diagnostic approach for ruling out suspected acute pulmonary embolism (PE) in the ED setting includes several tests: ultrasound, plasma d-dimer assays, ventilation-perfusion scans and computed tomography pulmonary angiography (CTPA). Importantly, a pretest probability scoring algorithm is highly recommended to triage high-risk cases while also preventing unnecessary testing and harm to low/moderate-risk patients. The d-dimer assay (both ELISA and immunoturbidimetric) has been shown to be extremely sensitive for ruling out PE in conjunction with clinical probability. In particular, d-dimer testing is recommended for low/moderate-risk patients, in whom a negative d-dimer essentially rules out PE, sparing these patients from CTPA radiation exposure, longer hospital stays and anticoagulation. However, a nonspecific increase in fibrin-degradation-related products is seen with increasing age, resulting in a higher false-positive rate in the older population. This study analyzed patient visits to the ED of a large academic institution over five years and examined the relationship between d-dimer values, age and CTPA results to better understand the value of age-adjusted d-dimer cut-offs in ruling out PE in the older population. A total of 7660 ED visits had a CTPA done to rule out PE, of which 1875 cases had a d-dimer done in conjunction with the CT and 5875 had only CTPA done. Of the 1875 cases, 1591 had positive d-dimer results (>0.50 µg/mL FEU), of which 910 (57%) were from patients fifty years of age or older. In these older patients, 779 (86%) had a negative CT result. The statistical measures of the d-dimer test before adjusting for age were: sensitivity (98%), specificity (12%), negative predictive value (98%) and false-positive rate (88%). 
After adjusting for age in people older than 50 years (d-dimer cut-off = age/100), 138 patients eventually turned out to be d-dimer negative, and every case but four had a CT result that was also negative for PE. The four cases included two non-diagnostic results and two with subacute/chronic/subsegmental PE on imaging. None of these four patients were prescribed anticoagulation. The statistical measures of the d-dimer test after adjusting for age were: sensitivity (96%), specificity (20%), negative predictive value (98%) and a decreased false-positive rate (80%). Therefore, imaging could potentially have been avoided in 138/779 (18%) of the patients in this older population who had eventual negative or not clinically significant findings on CTPA, had age-adjusted d-dimers been used. These data strongly support the clinical usefulness of an age-adjusted d-dimer cut-off to rule out PE.


Circulation ◽  
2008 ◽  
Vol 118 (suppl_18) ◽  
Author(s):  
Brian J O’Neil ◽  
Patrick Medado ◽  
James Wegner ◽  
Michael Gallagher ◽  
Gil Raff

Coronary computed tomography angiography (CCTA) is a diagnostic test shown to have a high negative predictive value for coronary artery disease and major cardiac events at three months. Many emergency department (ED) protocols mandate serial enzymes prior to discharge to capture possible myocardial injury even with a normal CCTA. We sought to determine the value of serial cardiac enzymes in low-risk patients (TIMI risk score <3) presenting to the ED with chest pain who have a normal or intermediate CCTA (<50% stenosis). 307 patients received CCTA as part of clinical trials. All patients presented to the ED with suspected acute coronary syndrome and had a TIMI risk score <3, non-diagnostic ECGs and negative initial enzymes. All patients had serial enzyme sampling at least 4 hours apart per protocol. Structured telephone follow-up and chart review were completed at 30 and 90 days post ED visit. All data were analyzed using descriptive statistics. 68% of all CCTA patients were classified as normal (<25% stenosis), with another 8% considered intermediate (<50% stenosis). The average time between blood collections was 4:30 ± 1:28 (h:min). All patients had normal serial troponin I values (normal <0.05 ng/mL) with no appreciable delta over time. 39 patients had an increase in myoglobin values, with 10 having a delta ≥20% but <50%; none of these patients had a level above our normal range (<98 ng/mL). Only one myoglobin value was above our normal limit, and it did not change over time. There were no deaths or adverse cardiac events at 90 days in this population. In this study of low-risk patients with suspected cardiac ischemia (TIMI <3) and a normal or intermediate CCTA, serial cardiac enzymes have no utility.


BMJ Open ◽  
2020 ◽  
Vol 10 (11) ◽  
pp. e040151
Author(s):  
Christine Baumgartner ◽  
Frederikus A Klok ◽  
Marc Carrier ◽  
Andreas Limacher ◽  
Jeanne Moor ◽  
...  

Introduction: The clinical significance of subsegmental pulmonary embolism (SSPE) is currently unclear. Although growing evidence from observational studies suggests that withholding anticoagulant treatment may be a safe option in selected patients with isolated SSPE, most patients with this condition receive anticoagulant treatment, which is associated with a 90-day risk of recurrent venous thromboembolism (VTE) of 0.8% and of major bleeding of up to 5%. Given the ongoing controversy concerning the risk-benefit ratio of anticoagulation for isolated SSPE and the lack of evidence from randomised controlled studies, the aim of this clinical trial is to evaluate the efficacy and safety of clinical surveillance without anticoagulation in low-risk patients with isolated SSPE. Methods and analysis: SAFE-SSPE (Surveillance vs. Anticoagulation For low-risk patiEnts with isolated SubSegmental Pulmonary Embolism, a multicentre randomised placebo-controlled non-inferiority trial) is an international, multicentre, placebo-controlled, double-blind, parallel-group non-inferiority trial conducted in Switzerland, the Netherlands and Canada. Low-risk patients with isolated SSPE are randomised to clinical surveillance with either placebo (no anticoagulation) or anticoagulant treatment with rivaroxaban. All patients undergo bilateral whole-leg compression ultrasonography to exclude concomitant deep vein thrombosis before enrolment. Patients are followed for 90 days. The primary outcome is symptomatic recurrent VTE (efficacy). The secondary outcomes include clinically significant bleeding and all-cause mortality (safety). The ancillary outcomes are health-related quality of life, functional status and medical resource utilisation. Ethics and dissemination: The local ethics committees in Switzerland have approved this protocol. Submission to the ethics committees in the Netherlands and Canada is underway. 
The results of this trial will be published in a peer-reviewed journal. Trial registration number: NCT04263038.


1968 ◽  
Vol 5 (3) ◽  
pp. 307-310 ◽  
Author(s):  
Jagdish N. Sheth ◽  
M. Venkatesan

This experimental study of consumer decision making over time explored risk-reduction processes of information seeking, prepurchase deliberation, and brand loyalty. Perceived risk was manipulated by creating low-risk and high-risk groups. The task was to choose among brands of hair spray. Results showed that information seeking and prepurchase deliberation declined over time and brand loyalty increased over time.


2011 ◽  
Vol 93 (5) ◽  
pp. 370-374
Author(s):  
D Veeramootoo ◽  
L Harrower ◽  
R Saunders ◽  
D Robinson ◽  
WB Campbell

INTRODUCTION Venous thromboembolism (VTE) prophylaxis has become a major issue for surgeons both in the UK and worldwide. Several different sources of guidance on VTE prophylaxis are available but these differ in design and detail. METHODS Two similar audits were performed, one year apart, on the VTE prophylaxis prescribed for all general surgical inpatients during a single week (90 patients and 101 patients). Classification of patients into different risk groups and compliance in prescribing prophylaxis were examined using different international, national and local guidelines. RESULTS There were significant differences between the numbers of patients in high, moderate and low-risk groups according to the different guidelines. When groups were combined to indicate simply ‘at risk’ or ‘not at risk’ (in the manner of one of the guidelines), then differences were not significant. Our compliance improved from the first audit to the second. Patients at high risk received VTE prophylaxis according to guidance more consistently than those at low risk. CONCLUSIONS Differences in guidance on VTE prophylaxis can affect compliance significantly when auditing practice, depending on the choice of ‘gold standard’. National guidance does not remove the need for clear and detailed local policies. Making decisions about policies for lower-risk patients can be more difficult than for those at high risk.


2020 ◽  
Vol 38 (33) ◽  
pp. 3851-3862 ◽  
Author(s):  
Matthew J. Ehrhardt ◽  
Zachary J. Ward ◽  
Qi Liu ◽  
Aeysha Chaudhry ◽  
Anju Nohria ◽  
...  

PURPOSE Survivors of childhood cancer treated with anthracyclines and/or chest-directed radiation are at increased risk for heart failure (HF). The International Late Effects of Childhood Cancer Guideline Harmonization Group (IGHG) recommends risk-based screening echocardiograms, but evidence supporting its frequency and cost-effectiveness is limited. PATIENTS AND METHODS Using the Childhood Cancer Survivor Study and St Jude Lifetime Cohort, we developed a microsimulation model of the clinical course of HF. We estimated long-term health outcomes and economic impact of screening according to IGHG-defined risk groups (low [doxorubicin-equivalent anthracycline dose of 1-99 mg/m2 and/or radiotherapy < 15 Gy], moderate [100 to < 250 mg/m2 or 15 to < 35 Gy], or high [≥ 250 mg/m2 or ≥ 35 Gy or both ≥ 100 mg/m2 and ≥ 15 Gy]). We compared 1-, 2-, 5-, and 10-year interval-based screening with no screening. Screening performance and treatment effectiveness were estimated based on published studies. Costs and quality-of-life weights were based on national averages and published reports. Outcomes included lifetime HF risk, quality-adjusted life-years (QALYs), lifetime costs, and incremental cost-effectiveness ratios (ICERs). Strategies with ICERs < $100,000 per QALY gained were considered cost-effective. RESULTS Among the IGHG risk groups, cumulative lifetime risks of HF without screening were 36.7% (high risk), 24.7% (moderate risk), and 16.9% (low risk). Routine screening reduced this risk by 4% to 11%, depending on frequency. Screening every 2, 5, and 10 years was cost-effective for high-risk survivors, and every 5 and 10 years for moderate-risk survivors. In contrast, ICERs were > $175,000 per QALY gained for all strategies for low-risk survivors, representing approximately 40% of those for whom screening is currently recommended. 
CONCLUSION Our findings suggest that refinement of the recommended screening strategies for IGHG high- and low-risk survivors is needed, including careful consideration of discontinuing screening for asymptomatic left ventricular dysfunction and HF in low-risk survivors.
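The cost-effectiveness rule applied in this study reduces to comparing an incremental cost-effectiveness ratio against a willingness-to-pay threshold. A minimal sketch with made-up numbers (the study's actual costs and QALYs come from its microsimulation model, not from this code):

```python
# ICER decision rule used in the abstract: a screening strategy is cost-effective
# when its incremental cost per QALY gained (vs. the comparator) is under $100,000.
# All example values passed to these functions are hypothetical.

WTP_PER_QALY = 100_000  # willingness-to-pay threshold stated in the abstract

def icer(cost_strategy, qaly_strategy, cost_comparator, qaly_comparator):
    """Incremental cost-effectiveness ratio, in dollars per QALY gained."""
    return (cost_strategy - cost_comparator) / (qaly_strategy - qaly_comparator)

def cost_effective(cost_strategy, qaly_strategy, cost_comparator, qaly_comparator,
                   threshold=WTP_PER_QALY):
    """True if the strategy's ICER falls below the threshold."""
    return icer(cost_strategy, qaly_strategy,
                cost_comparator, qaly_comparator) < threshold
```

For instance, a strategy costing $10,000 more and gaining 0.2 QALYs has an ICER of $50,000 per QALY and would count as cost-effective, whereas the low-risk strategies above, at over $175,000 per QALY, would not.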


Author(s):  
Cheng-Hsi Yeh ◽  
Shao-Chun Wu ◽  
Sheng-En Chou ◽  
Wei-Ti Su ◽  
Ching-Hua Tsai ◽  
...  

Background: Identification of malnutrition is especially important in severely injured patients, in whom hypermetabolism and protein catabolism following traumatic injury worsen their nutritional condition. The geriatric nutritional risk index (GNRI), based on serum albumin level and the current body weight/ideal body weight ratio, is useful for identifying patients with malnutrition in many clinical conditions. This study aimed to explore the association between admission GNRI and mortality outcomes of adult patients with polytrauma. Methods: From 1 January 2009 to 31 December 2019, a total of 348 adult patients with polytrauma, registered in the trauma database of a level I trauma center, were recognized and categorized into groups of death (n = 71) or survival (n = 277) and into four nutritional risk groups: a high-risk group (GNRI < 82, n = 87), a moderate-risk group (GNRI 82 to <92, n = 144), a low-risk group (GNRI 92–98, n = 59), and a no-risk group (GNRI > 98, n = 58). Univariate and multivariate logistic regression analyses were used to identify the independent risk factors for mortality. The mortality outcomes of patients at various nutritional risks were compared to those of patients in the no-risk group. Results: The comparison between the death group (n = 71) and the survival group (n = 277) revealed that there was no significant difference in gender predominance, age, pre-existing comorbidities, injury mechanism, systolic blood pressure, and respiratory rate upon arrival at the emergency room. A significantly lower GNRI and Glasgow Coma Scale score but higher injury severity score (ISS) was observed in the death group than in the survival group. 
Multivariate logistic regression analysis revealed that the Glasgow Coma Scale score (GCS; odds ratio [OR], 0.88; 95% confidence interval [CI], 0.83–0.95; p < 0.001), ISS (OR, 1.07; 95% CI, 1.04–1.11; p < 0.001), and GNRI (OR, 0.94; 95% CI, 0.91–0.97; p < 0.001) were significant independent risk factors for mortality in these patients. The mortality rates for the high-risk, moderate-risk, low-risk, and no-risk groups were 34.5%, 20.1%, 8.5%, and 12.1%, respectively. Unlike patients in the moderate-risk and low-risk groups, patients in the high-risk group had a significantly higher mortality rate than those in the no-risk group. Conclusions: This study revealed that the GNRI may serve as a simple, promising screening tool to identify patients with malnutrition at high risk of mortality among adult patients with polytrauma.
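The GNRI itself is a simple weighted sum of its two inputs. The sketch below uses the commonly published formula (1.489 × albumin in g/L + 41.7 × current-weight/ideal-weight, with the weight ratio capped at 1) together with the study's four cut-offs; the abstract only names the inputs, so treat the coefficients as an assumption:

```python
# GNRI sketch. The formula is the commonly published one (an assumption here,
# since the abstract only names its inputs); the risk strata are the study's.

def gnri(albumin_g_l, weight_kg, ideal_weight_kg):
    """Geriatric nutritional risk index from albumin and body weight."""
    ratio = min(weight_kg / ideal_weight_kg, 1.0)  # ratio capped at 1 by convention
    return 1.489 * albumin_g_l + 41.7 * ratio

def gnri_risk_group(score):
    """Map a GNRI score to the study's four nutritional risk groups."""
    if score < 82:
        return "high"        # GNRI < 82
    if score < 92:
        return "moderate"    # GNRI 82 to < 92
    if score <= 98:
        return "low"         # GNRI 92-98
    return "none"            # GNRI > 98
```

A patient at ideal body weight with an albumin of 40 g/L scores about 101, placing them in the no-risk group.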


2020 ◽  
Vol 35 (6) ◽  
pp. 872-872
Author(s):  
Hageboutros K ◽  
Bono A ◽  
Johnson-Markve B ◽  
Smith K ◽  
Lee G

Abstract Objective Mathematical models predicting the risk of verbal memory decline after resective epilepsy surgery have been developed for patients undergoing temporal lobectomies. This study was undertaken to determine whether the Stroup memory-loss prediction model was as accurate in foreseeing verbal memory decline after temporal lobectomy as after the less invasive selective amygdalohippocampectomy procedure. Method This retrospective study examined the verbal memory performances of 40 left temporal lobectomy (ATL) and 16 left subtemporal-approach selective amygdalohippocampectomy (SA-H) patients before and after epilepsy surgery using word-list learning (Rey Auditory-Verbal Learning Test, Buschke Selective Reminding Test) and story memory (WMS Logical Memory) tests. Patients were assigned to one of four groups, from Minimal Risk to High Risk (>61% risk), using the Stroup multiple regression equation. To classify memory decline in individual patients, a pre-to-post-surgery decrease of >1 SD on at least one memory test constituted memory decline. Results The prediction model accurately classified 82% (9/11) of ATL, and 75% (3/4) of SA-H, High Risk patients. Verbal memory loss was more frequent among ATLs than SA-Hs in the Moderate Risk (87% vs. 18%) and Low Risk (71% vs. 0%) groups. Conclusion The Stroup verbal memory loss risk model under-predicted memory loss among temporal lobectomy patients (71% of Low Risk patients showed memory decline) and over-predicted memory loss among selective amygdalohippocampectomy patients (only 18% of Moderate Risk patients showed memory decline). Results should be considered preliminary due to methodological limitations, including small and unequal sample sizes.


CJEM ◽  
2017 ◽  
Vol 19 (S1) ◽  
pp. S29
Author(s):  
P. Reardon ◽  
S. Patrick ◽  
M. Taljaard ◽  
K. Thavorn ◽  
M.A. Mukarram ◽  
...  

Introduction: It is well established that a negative D-dimer reliably rules out thromboembolism in selected low-risk patients. Multiple modified D-dimer cutoffs have been suggested for older patients to improve diagnostic specificity. However, these approaches are better established for pulmonary embolism than for deep venous thrombosis (DVT). This study evaluated the diagnostic performance of previously suggested D-dimer cutoffs for low-risk DVT patients in the ED and assessed a novel cutoff with improved performance. Methods: This health records review included patients >50 years with suspected DVT who were low risk and had a D-dimer performed. Our analysis evaluated the diagnostic accuracy of the D-dimer cutoff of 500 and the age-adjusted (age x 10) rule for patients >50 years, and of the 750 and 1,000 cutoffs for patients >60 years. The 30-day outcome was a diagnosis of DVT. We also assessed the diagnostic accuracy of a novel cutoff (age x 12.5). Results: 1,000 patients (mean age 68 years; 59% female) were included. Of these, 110 patients (11%) were diagnosed with DVT. The conventional cutoff of <500 µg/L demonstrated a sensitivity of 99.1% (95% CI 95.0-99.9) and a specificity of 36.4% (95% CI 33.2-39.7). For patients >60 years, the absolute cutoffs of 750 and 1,000 showed a sensitivity of 98.7% (95% CI 92.9-99.9), and the specificity increased to 48.6% (95% CI 44.5-52.8) and 62.1% (95% CI 58.1-66.1), respectively. For all study patients, the age-adjusted D-dimer demonstrated a sensitivity of 99.1% (95% CI 95.0-99.9) and a specificity of 51.2% (95% CI 47.9-54.6). The novel age-adjusted cutoff (age x 12.5) for patients >50 demonstrated a sensitivity of 97.3% (95% CI 92.2-99.4) and a specificity of 61.2% (95% CI 58.0-64.5). 
Compared with the conventional cutoff, the age-adjusted cutoffs (age x 10 and age x 12.5) would have resulted in an absolute decrease in further investigations of 13.1% and 22.2%, respectively, with false negative rates of 0.1% and 0.3%. Conclusion: Among older patients with suspected DVT and low clinical probability, the age-adjusted D-dimer increases the proportion of patients in whom DVT can be ruled out. The novel cutoff (age x 12.5) demonstrated improved specificity. Future large-scale prospective studies are needed to confirm this finding and to explore the cost savings of these approaches.
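The cutoff rules compared in this study can be written out explicitly. A sketch (rule names are illustrative; units are µg/L FEU, matching the 500 µg/L conventional cutoff above; age restrictions follow the study's analysis groups):

```python
# D-dimer cutoffs compared in the abstract, in ug/L FEU. Rule names are
# illustrative; age restrictions follow the study's analysis groups.

def dvt_cutoff(age_years, rule="conventional"):
    """Return the D-dimer positivity cutoff for the named rule."""
    if rule == "conventional":
        return 500.0
    if rule == "age_x10" and age_years > 50:
        return age_years * 10.0          # age-adjusted rule for patients > 50
    if rule == "age_x12.5" and age_years > 50:
        return age_years * 12.5          # the study's novel cutoff
    if rule in ("750", "1000") and age_years > 60:
        return float(rule)               # absolute cutoffs for patients > 60
    raise ValueError("rule not applicable at this age")
```

For the cohort's mean age of 68, the age x 10 rule gives 680 µg/L and the novel age x 12.5 rule gives 850 µg/L, which is the source of its extra specificity.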

