Comparison of manual and computer assigned injury severity scores

2019 ◽  
Vol 26 (4) ◽  
pp. 330-333
Author(s):  
Lauren Otto ◽  
Angela Wang ◽  
Krista Wheeler ◽  
Junxin Shi ◽  
Jonathan I Groner ◽  
...  

Background The study objective was to compare the ISS manually assigned by hospital personnel and those generated by the ICDPIC software for value agreement and predictive power of length of stay (LOS) and mortality. Methods We used data from the 2010–2016 trauma registry of a paediatric trauma centre (PTC) and 2014 National Trauma Data Bank (NTDB) hospitals that reported manually coded ISS. Agreement analysis was performed between manually and computer assigned ISS with severity groupings of 1–8, 9–15, 16–25 and 25–75. The prediction of LOS was compared using coefficients of determination (R2) from linear regression models. Mortality predictive power was compared using receiver operating characteristic (ROC) curves from logistic regression models. Results The proportion of agreement between manually and computer assigned ISS was 0.84 in the PTC data and 0.75 in the NTDB data. Analysing predictive power for LOS in the PTC sample, R2=0.19 for manually assigned scores and R2=0.15 for computer assigned scores (p=0.0009). The areas under the ROC curve indicated a mortality predictive power of 0.95 for manually assigned scores and 0.86 for computer assigned scores in the PTC data (p=0.0011). Conclusions Manually and computer assigned ISS agreed strongly for minor injuries but did not correlate well for critical injuries (ISS=25–75). The LOS and mortality predictive power were significantly higher for manually assigned ISS than for computer assigned ISS in both the PTC and NTDB data sets. Thus, hospitals should be cautious about transitioning to computer assigned ISS, particularly for critically injured patients.
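The comparison described above reduces to three statistics per scoring method: band-level agreement between the two ISS sources, R2 for LOS from a linear regression, and the area under the ROC curve for mortality from a logistic regression. The sketch below illustrates that workflow under stated assumptions; the DataFrame columns iss_manual, iss_icdpic, los_days and died are hypothetical placeholders, not the registry's actual fields.

```python
# Minimal sketch of the ISS comparison; column names are hypothetical.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.metrics import r2_score, roc_auc_score

def severity_band(iss):
    """Map ISS values into four severity bands for the agreement analysis."""
    return np.digitize(iss, bins=[0, 8, 15, 25, 75], right=True)

def compare_iss(df: pd.DataFrame) -> dict:
    # Proportion of cases where both scores fall in the same severity band.
    agreement = (severity_band(df["iss_manual"]) ==
                 severity_band(df["iss_icdpic"])).mean()

    results = {"agreement": agreement}
    for col in ("iss_manual", "iss_icdpic"):
        X = df[[col]].to_numpy()
        # R^2 for length of stay from a simple linear regression.
        los_pred = LinearRegression().fit(X, df["los_days"]).predict(X)
        # AUC for in-hospital mortality from a logistic regression.
        death_prob = (LogisticRegression(max_iter=1000)
                      .fit(X, df["died"]).predict_proba(X)[:, 1])
        results[col] = {"r2_los": r2_score(df["los_days"], los_pred),
                        "auc_mortality": roc_auc_score(df["died"], death_prob)}
    return results
```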

2021 ◽  
pp. 095679762097165
Author(s):  
Matthew T. McBee ◽  
Rebecca J. Brand ◽  
Wallace E. Dixon

In 2004, Christakis and colleagues published an article in which they claimed that early childhood television exposure causes later attention problems, a claim that continues to be frequently promoted by the popular media. Using the same National Longitudinal Survey of Youth 1979 data set (N = 2,108), we conducted two multiverse analyses to examine whether the finding reported by Christakis and colleagues was robust to different analytic choices. We evaluated 848 models, including logistic regression models, linear regression models, and two forms of propensity-score analysis. If the claim were true, we would expect most of the justifiable analyses to produce significant results in the predicted direction. However, only 166 models (19.6%) yielded a statistically significant relationship, and most of these employed questionable analytic choices. We concluded that these data do not provide compelling evidence of a harmful effect of TV exposure on attention.
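A multiverse analysis enumerates the defensible analytic choices, fits one model per combination, and asks what fraction of specifications support the claim. The following is a minimal sketch of that idea, not the authors' code; the outcome, exposure and covariate names are invented for illustration.

```python
# Hedged multiverse sketch: one logistic regression per analytic specification,
# counting specifications with a significant, positive TV-exposure coefficient.
import itertools
import pandas as pd
import statsmodels.formula.api as smf

COVARIATE_SETS = [
    [],
    ["mother_education"],
    ["mother_education", "household_income"],
]
EXPOSURE_CODINGS = {"continuous": "tv_hours",
                    "dichotomized": "I(tv_hours >= 3)"}

def run_multiverse(df: pd.DataFrame, alpha: float = 0.05) -> pd.DataFrame:
    rows = []
    for covs, (label, exposure) in itertools.product(
            COVARIATE_SETS, EXPOSURE_CODINGS.items()):
        formula = "attention_problem ~ " + " + ".join([exposure, *covs])
        fit = smf.logit(formula, data=df).fit(disp=False)
        term = [t for t in fit.params.index if "tv_hours" in t][0]
        rows.append({"exposure_coding": label,
                     "covariates": "+".join(covs) or "none",
                     "coef": fit.params[term],
                     "p_value": fit.pvalues[term],
                     "significant_positive":
                         fit.pvalues[term] < alpha and fit.params[term] > 0})
    return pd.DataFrame(rows)
```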


Author(s):  
Ugo Indraccolo ◽  
Gennaro Scutiero ◽  
Pantaleo Greco

Objective To analyze whether sonographic evaluation of the cervix (cervical shortening) is a prognostic marker for vaginal delivery. Methods Women who underwent labor induction with dinoprostone were enrolled. Cervical length was measured by ultrasonography before the induction and three hours after it to obtain the cervical shortening. The cervical shortening was entered into logistic regression models as an independent variable and used to calculate receiver operating characteristic (ROC) curves. Results Each centimeter of cervical shortening increases the odds of vaginal delivery by 24.4% within 6 hours, by 16.1% within 24 hours, and by 10.5% within 48 hours. The best predictions for vaginal delivery are achieved for births within 6 and 24 hours, whereas cervical shortening poorly predicts vaginal delivery within 48 hours. Conclusion The greater the cervical shortening 3 hours after labor induction, the higher the likelihood of vaginal delivery within 6, 24 and 48 hours.
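A per-centimetre percentage increase in odds such as the 24.4% reported above is simply exp(beta) - 1 for the logistic regression coefficient beta of cervical shortening. The sketch below shows that translation plus the ROC discrimination; the data and variable names are hypothetical.

```python
# Illustrative only: relate a logistic coefficient to the % odds change per cm.
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

def odds_increase_per_cm(shortening_cm, delivered):
    """Fit delivered ~ shortening and report the % odds change per extra cm."""
    X = sm.add_constant(np.asarray(shortening_cm, dtype=float))
    fit = sm.Logit(np.asarray(delivered), X).fit(disp=False)
    beta = fit.params[1]                      # log-odds change per cm
    pct_increase = (np.exp(beta) - 1) * 100   # e.g. beta ~ 0.218 -> ~24.4%
    auc = roc_auc_score(delivered, fit.predict(X))  # ROC discrimination
    return pct_increase, auc
```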


2021 ◽  
Author(s):  
Elizabeth Purssell ◽  
Sean Patrick ◽  
Joseph Haegert ◽  
Vesna Ivkov ◽  
John Taylor

Abstract Introduction Resuscitative endovascular balloon occlusion of the aorta (REBOA) is a less invasive alternative to resuscitative thoracotomy (RT) for life-threatening, infra-diaphragmatic, non-compressible hemorrhage from trauma. Existing evidence surrounding the efficacy of REBOA is conflicting; nevertheless, expert consensus suggests that REBOA should be considered in select trauma patients. Few studies have evaluated the potential utility of REBOA in the Canadian setting. The study objective was to evaluate the percentage of trauma patients presenting to a Level 1 Canadian trauma centre who would have met criteria for REBOA. Methods We conducted a retrospective chart review of patients recorded in the British Columbia Trauma Registry who warranted a trauma team activation (TTA) at our institution. We identified REBOA candidates using pre-defined criteria based on published guidelines. Each TTA case was screened by a reviewer, and each Potential Candidate was then reviewed by a panel of trauma physicians for determination of final candidacy. Results Fourteen patients were classified as Likely REBOA Candidates (2.2% of TTAs, median age 46.1 years, 64.3% female). These patients had a median Injury Severity Score of 31.5 (IQR 26.8). The main sources of hemorrhage in these patients were abdominal injuries (71.4%) and pelvic fractures (42.9%). Conclusion The percentage of patients who met criteria for REBOA is similar to that of RTs performed at our Canadian institution. While REBOA would be performed infrequently, it is a less invasive alternative to RT and could be a life-saving procedure in a small group of the most severely injured trauma patients.


2014 ◽  
Vol 32 (4_suppl) ◽  
pp. 294-294
Author(s):  
Matthew Mossanen ◽  
Josh Calvert ◽  
Sarah Holt ◽  
Andrew Callaway James ◽  
Jonathan L. Wright ◽  
...  

294 Background: Providers vary in the class, dose, and duration of antibiotic prophylaxis (ABP) prescribed to prevent postsurgical infections. We sought to evaluate ABP practice patterns for common inpatient urologic oncology surgeries and ascertain the association between extended ABP and hospital-acquired Clostridium difficile (C. diff) infections. Methods: From the PREMIER database for 2007–2012, we identified patients who underwent radical prostatectomy (RP), radical or partial nephrectomy (Nephx), or radical cystectomy (RC). We defined extended ABP as charges for antibiotics ≥ 2 days after surgery, excluding patients whose antibiotic class was switched within 2 postoperative days on the presumption of infection. We identified postoperative C. diff infections using ICD-9 diagnosis codes. Hierarchical linear regression models were constructed by procedure to identify patient and provider factors associated with extended ABP. Logistic regression models evaluated the association between extended ABP and postoperative C. diff infection, adjusting for patient and provider characteristics. Results: We identified 59,184 RP patients, 27,921 Nephx patients, and 5,425 RC patients. RC patients were more likely to receive extended ABP (56%) than RP (18%) or Nephx (29%) patients (p<0.001). Other factors associated with extended ABP included prolonged postoperative length of stay (OR ≥ 1.69, p<0.001 for all procedures) and surgical volume (p<0.001 for highest vs. lowest volume quartiles). Hospital identity explained 35% of the variability in ABP after RP, 23% after Nephx, and 20% after RC. Among Nephx and RC patients, extended ABP was associated with significantly higher odds of postoperative C. diff infection (OR 3.79, 95% CI 2.46–5.84, and OR 1.64, 95% CI 1.12–2.39, respectively). Conclusions: We identified marked hospital-level variability in extended ABP following RP, Nephx, and RC, and extended ABP was associated with significantly increased odds of hospital-acquired C. diff infections. Efforts to increase provider compliance with national ABP guidelines may decrease preventable hospital-acquired infections after urologic cancer surgery.
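The adjusted odds ratios reported above come from logistic regression with patient and provider covariates. A hedged sketch of that step is shown below; the columns (cdiff, extended_abp, age, los_days, hospital_id) are illustrative, not the PREMIER variables, and the covariate set is not the authors' actual specification.

```python
# Sketch: adjusted odds ratio (with 95% CI) for C. difficile with extended ABP.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def extended_abp_odds_ratio(df: pd.DataFrame):
    """Adjusted OR for postoperative C. difficile associated with extended ABP."""
    fit = smf.logit("cdiff ~ extended_abp + age + los_days + C(hospital_id)",
                    data=df).fit(disp=False)
    or_point = np.exp(fit.params["extended_abp"])
    ci_low, ci_high = np.exp(fit.conf_int().loc["extended_abp"])
    return or_point, (ci_low, ci_high)
```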


2003 ◽  
Vol 9 (5) ◽  
pp. 461-466 ◽  
Author(s):  
Ruth Ann Marrie ◽  
Olympia Hadjimichael ◽  
Timothy Vollmer

Objective: To determine the frequency of alternative medicine use among multiple sclerosis (MS) patients, and the factors which predict such use. Methods: We examined 20,778 MS patients enrolled in the North American Research Consortium on Multiple Sclerosis (NARCOMS) Patient Registry, residing in the USA. We used demographic and clinical data to create multivariate logistic regression models for i) lifetime use of any alternative medicine, ii) lifetime use of any alternative provider (AP), and iii) lifetime use of each of the three most common APs. Results: 20,387 patients provided data regarding alternative medicine use. Lifetime use of any alternative medicine was 54% and current use was 30%. Chiropractors (51%), massage therapists (34%), and nutritionists (24%) were the most commonly used APs. In all five models, use of alternative medicine was most strongly predicted by use of a conventional provider, and more modestly by disease factors indicating more severe or prolonged disease. Predictive power of the models was poor (c-index = 0.62–0.68), despite good fits for the data. Conclusions: Demographic factors play only a minimal role in predicting the use of alternative medicine in this MS population, while disease factors play a slightly stronger role. There must be other factors involved, which may include accessibility, social acceptability and cultural factors. Given the frequency of alternative medicine use by this patient population, further characterization of these factors is important.
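For a binary outcome, the c-index quoted above is equivalent to the area under the ROC curve of the model's predicted probabilities. A minimal illustration follows; the predictor names are assumptions, not the registry's actual variables.

```python
# Minimal sketch: c-index of a logistic model for lifetime alternative-medicine use.
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.metrics import roc_auc_score

def c_index_for_cam_use(df: pd.DataFrame) -> float:
    """Fit CAM use on demographic/disease factors and return the c-index (AUC)."""
    fit = smf.logit("used_cam ~ age + disease_duration + uses_conventional_provider",
                    data=df).fit(disp=False)
    return roc_auc_score(df["used_cam"], fit.predict(df))
```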


Author(s):  
Jean-Jacques Parienti ◽  
Anna L Fournier ◽  
Laurent Cotte ◽  
Marie-Paule Schneider ◽  
Manuel Etienne ◽  
...  

Abstract Background For many people living with HIV (PLWH), taking antiretroviral therapy (ARV) every day is difficult. Methods Average adherence (Av-Adh) and log-transformed treatment interruption (TI) to ARV were prospectively measured over 6 months using electronic drug monitoring (EDM) in several cohorts of PLWH. Multivariate linear regression models including baseline confounders explored the influence of EDM-defined adherence (R2) on 6-month log10 HIV-RNA. Multivariate logistic regression models were used to compare the risk of HIV-RNA detection within subgroups stratified by lower (≤95%) and higher (>95%) Av-Adh. Results Three hundred ninety-nine PLWH were analyzed across different ARV regimens: dolutegravir (n=102), raltegravir (n=90), boosted PI (bPI; n=107), and NNRTI (n=100). In the dolutegravir group, the influence of adherence pattern measures on the R2 for HIV-RNA levels was marginal (+2%). Av-Adh, TI and Av-Adh × TI increased the R2 for HIV-RNA levels by 54% and 40% in the raltegravir and bPI treatment groups, respectively. TI increased the R2 for HIV-RNA levels by 36% in the NNRTI treatment group. Compared with the dolutegravir-based regimen, the risk of VR was significantly increased for raltegravir (adjusted OR (aOR), 45.6; 95% confidence interval (CI) [4.5–462.1], p=0.001), NNRTIs (aOR, 24.8; 95% CI [2.7–228.4], p=0.005) and bPIs (aOR, 28.3; 95% CI [3.4–239.4], p=0.002) in PLWH with Av-Adh ≤95%. Among PLWH with >95% Av-Adh, there were no significant differences in the risk of VR among the different ARVs. Conclusion These findings support the concept that dolutegravir in combination with two other active ARVs achieves greater virological suppression than older ARVs, including raltegravir, NNRTIs and bPIs, among PLWH with lower adherence.
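The "increase in R2" reported above is the added explanatory power of the adherence measures relative to a baseline model of log10 HIV-RNA. A hedged sketch of that comparison is given below; the column names (log_rna, av_adh, log_ti) and baseline covariates are assumptions for illustration only.

```python
# Sketch: gain in R^2 when EDM-derived adherence terms are added to a baseline model.
import pandas as pd
import statsmodels.formula.api as smf

def adherence_r2_gain(df: pd.DataFrame) -> float:
    """R^2 of the adherence-augmented model minus R^2 of the baseline model."""
    base = smf.ols("log_rna ~ baseline_rna + cd4_count", data=df).fit()
    full = smf.ols("log_rna ~ baseline_rna + cd4_count + av_adh + log_ti "
                   "+ av_adh:log_ti", data=df).fit()
    return full.rsquared - base.rsquared
```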


2020 ◽  
Vol 19 (1) ◽  
Author(s):  
Jeannie Haggerty ◽  
Jean-Frederic Levesque ◽  
Mark Harris ◽  
Catherine Scott ◽  
Simone Dahrouge ◽  
...  

Abstract Background Primary healthcare services must respond to the healthcare-seeking needs of persons with a wide range of personal and social characteristics. In this study, we examined whether socially vulnerable persons exhibit lower abilities to access healthcare. First, we examined how personal and social characteristics are associated with the abilities to access healthcare described in the patient-centered accessibility framework and with the likelihood of reporting problematic access. We then examined whether higher abilities to access healthcare are protective against problematic access. Finally, we explored whether social vulnerabilities predict problematic access after accounting for abilities to access healthcare. Methods This is an exploratory analysis of pooled data collected in the Innovative Models Promoting Access-To-Care Transformation (IMPACT) study, a Canadian-Australian research program that aimed to improve access to primary healthcare for vulnerable populations. This specific analysis is based on 284 participants in four study regions who completed a baseline access survey. Hierarchical linear regression models were used to explore the effects of personal or social characteristics on the abilities to access care, and logistic regression models were used to determine the increased or decreased likelihood of problematic access. Results The likelihood of problematic access varies by personal and social characteristics. Those reporting at least two social vulnerabilities are more likely to experience all indicators of problematic access except hospitalizations. Perceived financial status and accumulated vulnerabilities were also associated with lower abilities to access care. Higher scores on abilities to access healthcare are protective against most indicators of problematic access, except hospitalizations. Logistic regression models showed that ability to access is more predictive of problematic access than social vulnerability. Conclusions We showed that those at higher risk of social vulnerability are more likely to report problematic access and also have lower scores on the ability to seek, reach, pay for, and engage with healthcare. Equity-oriented healthcare interventions should pay particular attention to enhancing people’s abilities to access care, in addition to modifying organizational processes and structures that reinforce social systems of discrimination or exclusion.
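Hierarchical (blockwise) regression enters predictor blocks sequentially and compares explained variance across blocks. The sketch below illustrates that pattern under stated assumptions; the outcome and predictor names are invented, not the IMPACT survey items.

```python
# Hedged sketch of blockwise entry: personal characteristics first, then social
# vulnerability, comparing R^2 for an "ability to access" score.
import pandas as pd
import statsmodels.formula.api as smf

BLOCKS = [
    "age + sex",                       # block 1: personal characteristics
    "age + sex + n_vulnerabilities",   # block 2: adds social vulnerability count
]

def blockwise_r2(df: pd.DataFrame, outcome: str = "ability_to_access"):
    """R^2 after each block; successive differences show each block's contribution."""
    return [smf.ols(f"{outcome} ~ {rhs}", data=df).fit().rsquared for rhs in BLOCKS]
```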


2020 ◽  
Vol 4 (3-4) ◽  
pp. 89-102
Author(s):  
Paolo Campana ◽  
Andrea Giovannetti

Abstract Purpose We explore how we can best predict violent attacks with injury using a limited set of information on (a) previous violence, (b) previous knife and weapon carrying, and (c) violence-related behaviour of known associates, without analysing any demographic characteristics. Data Our initial data set consists of 63,022 individuals involved in 375,599 events that police recorded in Merseyside (UK) from 1 January 2015 to 18 October 2018. Methods We split our data into two periods: T1 (initial 2 years) and T2 (the remaining period). We predict “violence with injury” at time T2 as defined by Merseyside Police using the following individual-level predictors at time T1: violence with injury; involvement in a knife incident and involvement in a weapon incident. Furthermore, we relied on social network analysis to reconstruct the network of associates at time T1 (co-offending network) for those individuals who have committed violence at T2, and built three additional network-based predictors (associates’ violence; associates’ knife incident; associates’ weapon incident). Finally, we tackled the issue of predicting violence (a) through a series of robust logistic regression models using a bootstrapping method and (b) through a specificity/sensitivity analysis. Findings We found that 7720 individuals committed violence with injury at T2. Of those, 2004 were also present at T1 (27.7%) and co-offended with a total of 7202 individuals. Regression models suggest that previous violence at time T1 is the strongest predictor of future violence (with an increase in odds never smaller than 123%), knife incidents and weapon incidents at the individual level have some predictive power (but only when no information on previous violence is considered), and the behaviour of one’s associates matters. Prior association with a violent individual and prior association with a knife-flagged individual were the two strongest network predictors, with a slightly stronger effect for knife flags. The best performing regressors are (a) individual past violence (36% of future violence cases correctly identified); (b) associates’ past violence (25%); and (c) associates’ knife involvement (14%). All regressors are characterised by a very high level of specificity in predicting who will not commit violence (80% or more). Conclusions Network-based indicators add to the explanation of future violence, especially prior association with a knife-flagged individual and association with a violent individual. Information about the knife involvement of associates appears to be more informative than a subject’s own prior knife involvement.
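The core of the method above is a bootstrapped logistic regression of T2 violence on individual-level and network-based T1 predictors, assessed by sensitivity and specificity. The following is an illustrative sketch only, not the authors' pipeline; all column names are hypothetical.

```python
# Sketch: bootstrapped odds ratios for T1 predictors of T2 violence, plus
# sensitivity/specificity of the fitted classifier. Column names are assumed.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

PREDICTORS = ["t1_violence", "t1_knife", "t1_weapon",
              "assoc_violence", "assoc_knife", "assoc_weapon"]

def bootstrap_odds_ratios(df: pd.DataFrame, n_boot: int = 1000, seed: int = 0):
    """Bootstrap distribution of odds ratios for each predictor of T2 violence."""
    rng = np.random.default_rng(seed)
    coefs = []
    for _ in range(n_boot):
        boot = df.sample(frac=1.0, replace=True,
                         random_state=int(rng.integers(1 << 31)))
        fit = LogisticRegression(max_iter=1000).fit(boot[PREDICTORS],
                                                    boot["t2_violence"])
        coefs.append(fit.coef_[0])
    return pd.DataFrame(np.exp(coefs), columns=PREDICTORS).describe(
        percentiles=[0.025, 0.975])

def sensitivity_specificity(df: pd.DataFrame):
    """Share of violent individuals correctly flagged, and of non-violent cleared."""
    fit = LogisticRegression(max_iter=1000).fit(df[PREDICTORS], df["t2_violence"])
    tn, fp, fn, tp = confusion_matrix(df["t2_violence"],
                                      fit.predict(df[PREDICTORS])).ravel()
    return tp / (tp + fn), tn / (tn + fp)
```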


2019 ◽  
Vol 85 (1) ◽  
pp. 15-22
Author(s):  
Michael R. Nahouraii ◽  
Colleen H. Karvetski ◽  
Rita A. Brintzenhoff ◽  
Gaurav Sachdev ◽  
Susan L. Evans ◽  
...  

Multiprofessional rounds (MPR) represent a mechanism for the coordination of care in critically ill patients. Herein, we examined the impact of MPR on ventilator days (Vent-day), ICU length of stay (LOS), hospital LOS (HLOS), and mortality. A team developed guidelines for MPR, which began in February 2016. Patients admitted between November 2015 and March 2017 with Acute Physiology and Chronic Health Evaluation (APACHE) IV and injury severity scores were included. Outcome data consisted of Vent-day, Vent-day observed/expected ratio (O/E), ICU LOS, ICU LOS O/E, HLOS, HLOS O/E, and mortality. Linear regression models were constructed to assess statistical significance. A total of 3372 patients were included. Among surgical patients (n = 343 pre-MPR, n = 1675 post-MPR), MPR was associated with decreases in Vent-day O/E (0.74 pre, 0.59 post, P = 0.03), ICU LOS O/E (0.67 pre, 0.61 post, P = 0.01), and HLOS O/E (1.47 pre, 1.22 post, P = 0.0005). No mortality difference was observed. For trauma patients (n = 221 pre, n = 1133 post), MPR resulted in a reduction in Vent-days (2.2 days pre, 1.6 days post, P = 0.05). However, no differences were observed for Vent-day O/E, ICU LOS O/E, HLOS O/E, or mortality. Implementation of MPR was associated with improved outcomes for surgical trauma ICU patients. Sustainability of MPR remains a challenge and requires ongoing education and engagement.
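The observed/expected (O/E) ratios above compare actual outcomes against severity-adjusted expectations (here APACHE IV-style predictions). A minimal sketch of that calculation, with illustrative column names only, follows.

```python
# Sketch: cohort-level O/E ratios before and after MPR implementation.
import pandas as pd

def oe_ratio(df: pd.DataFrame, observed: str, expected: str) -> float:
    """Cohort O/E ratio, e.g. ICU LOS O/E or ventilator-day O/E."""
    return df[observed].sum() / df[expected].sum()

def pre_post_comparison(df: pd.DataFrame) -> dict:
    """O/E ratio per period (a 'period' column marking pre/post MPR is assumed)."""
    return {period: oe_ratio(g, "icu_los_days", "icu_los_expected")
            for period, g in df.groupby("period")}
```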

