Socioeconomic and racial disparities in intensive care level utilization in a national sample

2020 ◽  
Vol 222 (1) ◽  
pp. S434
Author(s):  
Timothy Wen ◽  
Sbaa K. Syeda ◽  
Adina Kern-Goldberger ◽  
Cynthia Gyamfi-Bannerman ◽  
Mary E. D'Alton ◽  
...  
2015 ◽  
Vol 21 (9) ◽  
pp. 986-992 ◽  
Author(s):  
Sophie Bersoux ◽  
Curtiss B. Cook ◽  
Gail L. Kongable ◽  
Jianfen Shu


2021 ◽  
Author(s):  
Shoshana Jarvis ◽  
Zoe Elina Ferguson ◽  
Jason Okonofua

Access to education is important for success in adulthood. Exclusionary discipline (e.g., suspensions) reduces students' opportunities to complete their education and become strong candidates for future jobs. Black students face a disproportionately high risk of disciplinary action, so it is important to understand when and how racial disparities in suspensions emerge in order to reduce their disproportionate negative impacts on Black students. Past research found that racial disparities emerge after two misbehaviors when teachers make discipline decisions but after just a single misbehavior when assistant principals do. The current research tests the generalizability of racial disparities in discipline among principals across the United States, along with a psychological process that potentially contributes to these disparities: principals' perception of their professional role relative to that of teachers. In this procedure and with a diverse sample, principals did not endorse significantly different amounts of discipline for Black and White students. We explore potential explanations of these null results in the discussion.


2011 ◽  
Vol 104 (9) ◽  
pp. 640-646 ◽  
Author(s):  
Dawn Turner ◽  
Pippa Simpson ◽  
Shun-Hwa Li ◽  
Matthew Scanlon ◽  
Michael W. Quasney

2021 ◽  
Vol 14 (8) ◽  
pp. e243486
Author(s):  
Dmitriy Stasishin ◽  
Patrick Schaffer ◽  
Zeryab Khan ◽  
Christie Murphy

Diabetic ketoacidosis (DKA) and the hyponatraemia associated with beer potomania are severe diagnoses warranting intensive care level management. Our patient, a middle-aged man with a history of chronic alcohol abuse and insulin non-compliance, presented with severe DKA and severe hyponatraemia. Correcting the sodium and metabolic derangements in each disorder requires careful attention to fluid and electrolyte levels; combined, the two disorders prove challenging and require an individualised approach to prevent overcorrection of sodium. Furthermore, management of these conditions underscores the importance of understanding the pathophysiology behind their hormonal and osmotic basis.


10.2196/22400 ◽  
2020 ◽  
Vol 6 (4) ◽  
pp. e22400
Author(s):  
Angier Allen ◽  
Samson Mataraso ◽  
Anna Siefkas ◽  
Hoyt Burdick ◽  
Gregory Braden ◽  
...  

Background: Racial disparities in health care are well documented in the United States. As machine learning methods become more common in health care settings, it is important to ensure that these methods do not contribute to racial disparities through biased predictions or differential accuracy across racial groups.
Objective: The goal of the research was to assess a machine learning algorithm intentionally developed to minimize bias in in-hospital mortality predictions between white and nonwhite patient groups.
Methods: Bias was minimized through preprocessing of the algorithm's training data. We performed a retrospective analysis of electronic health record data from patients admitted to the intensive care unit (ICU) at a large academic health center between 2001 and 2012, drawing data from the Medical Information Mart for Intensive Care–III database. Patients were included if they had at least 10 hours of available measurements after ICU admission, had at least one of every measurement used for model prediction, and had recorded race/ethnicity data. Bias was assessed through the equal opportunity difference. Model performance in terms of bias and accuracy was compared with the Modified Early Warning Score (MEWS), the Simplified Acute Physiology Score II (SAPS II), and the Acute Physiologic Assessment and Chronic Health Evaluation (APACHE).
Results: The machine learning algorithm was more accurate than all comparators, with higher sensitivity, specificity, and area under the receiver operating characteristic curve. The machine learning algorithm was unbiased (equal opportunity difference 0.016, P=.20), as was APACHE (equal opportunity difference 0.019, P=.11), while SAPS II and MEWS showed significant bias (equal opportunity difference 0.038, P=.006 and 0.074, P<.001, respectively).
Conclusions: This study indicates that there may be significant racial bias in commonly used severity scoring systems and that machine learning algorithms may reduce bias while improving on the accuracy of these methods.
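The equal opportunity difference used in the abstract above is the gap in true positive rates (sensitivity) between two patient groups: a value near zero means actual positives are detected at the same rate regardless of group. A minimal sketch of the metric, using illustrative function names and a toy dataset (not the study's implementation or data):

```python
def true_positive_rate(y_true, y_pred):
    """Fraction of actual positives that were predicted positive."""
    positives = [p for t, p in zip(y_true, y_pred) if t == 1]
    return sum(positives) / len(positives)

def equal_opportunity_difference(y_true, y_pred, group):
    """TPR(group 0) minus TPR(group 1); zero means equal opportunity."""
    g0 = [(t, p) for t, p, g in zip(y_true, y_pred, group) if g == 0]
    g1 = [(t, p) for t, p, g in zip(y_true, y_pred, group) if g == 1]
    tpr0 = true_positive_rate([t for t, _ in g0], [p for _, p in g0])
    tpr1 = true_positive_rate([t for t, _ in g1], [p for _, p in g1])
    return tpr0 - tpr1

# Toy example: every patient is a true positive; group 0 is detected
# 3 times out of 4, group 1 once out of 4, so the difference is 0.5.
y_true = [1, 1, 1, 1, 1, 1, 1, 1]
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
print(equal_opportunity_difference(y_true, y_pred, group))  # 0.5
```

Note that the study also reports P values for this metric: a nonzero sample difference alone does not establish significant bias without such a hypothesis test.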

