Predicting Worker Accuracy from Nonverbal Behaviour: Benefits and Potential for Algorithmic Bias

2021
Author(s): Yuushi Toyoda, Gale Lucas, Jonathan Gratch

interactions
2018 · Vol 25 (6) · pp. 58-63
Author(s): Henriette Cramer, Jean Garcia-Gathright, Aaron Springer, Sravana Reddy

2021 · pp. 146144482110127
Author(s): Marcus Carter, Ben Egliston

Virtual reality (VR) is an emerging technology with the potential to extract significantly more data about learners and the learning process. In this article, we present an analysis of how VR education technology companies frame, use and analyse these data. We found both an expansion and an acceleration of what data are being collected about learners, and of how these data are being mobilised in potentially discriminatory and problematic ways. Beyond providing evidence for how VR represents an intensification of the datafication of education, we discuss three interrelated critical issues that are specific to VR: the fantasy that VR data are ‘perfect’, the datafication of soft-skills training, and the commercialisation and commodification of VR data. In the context of the issues identified, we caution against the unregulated and uncritical application of learning analytics to the data collected from VR training.


2005 · Vol 40 (2) · pp. 90-99
Author(s): Takahiro Higuchi, Ken Shoji, Sumie Taguchi, Toshiteru Hatayama

2017 · Vol 33 (2) · pp. 109-117
Author(s): A Marono, DD Clarke, J Navarro, DA Keatley

2021
Author(s): Hossein Estiri, Zachary Strasser, Sina Rashidian, Jeffrey Klann, Kavishwar Wagholikar, ...

The growing recognition of algorithmic bias has spurred discussions about fairness in artificial intelligence (AI) / machine learning (ML) algorithms. The increasing translation of predictive models into clinical practice brings an increased risk of direct harm from algorithmic bias; however, bias remains incompletely measured in many medical AI applications. Using data from more than 56,000 Mass General Brigham (MGB) patients with confirmed severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection, we evaluate unrecognized bias in four AI models, developed during the early months of the pandemic in Boston, Massachusetts, that predict the risks of hospital admission, ICU admission, mechanical ventilation, and death after a SARS-CoV-2 infection based solely on patients' pre-infection longitudinal medical records. We show that while a model can be biased against certain protected groups (i.e., perform worse for them) on some tasks, it can simultaneously be biased towards other protected groups (i.e., perform better for them). Current bias evaluation studies may therefore lack a full depiction of a model's variable effects on its subpopulations. If the goal is to effect positive change, the underlying roots of bias in medical AI need to be fully explored. Only a holistic evaluation, including a diligent search for unrecognized bias, can provide enough information for a sound judgment of AI bias, invigorate follow-up investigations into its underlying roots, and ultimately drive change.
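The per-subpopulation evaluation described in this abstract can be illustrated with a minimal sketch: compute the same discrimination metric (here AUROC) separately for each protected group and compare it against the overall value, so that a model that underperforms for one group while overperforming for another becomes visible in a single pass. The synthetic labels, scores, and group names below are illustrative assumptions, not the authors' MGB data or pipeline.

```python
# Minimal sketch of per-subgroup bias evaluation (illustrative only).
# Synthetic predictions stand in for a clinical outcome model; group labels are hypothetical.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000
groups = rng.choice(["group_A", "group_B", "group_C"], size=n)
y_true = rng.binomial(1, 0.15, size=n)

# Simulate a model whose scores are noisier (less informative) for some groups than others.
noise = {"group_A": 0.6, "group_B": 1.0, "group_C": 1.6}
y_score = np.array([y + rng.normal(0, noise[g]) for y, g in zip(y_true, groups)])

overall_auc = roc_auc_score(y_true, y_score)
print(f"overall AUROC: {overall_auc:.3f}")

# A model can be "biased towards" one group (higher AUROC than overall)
# and "biased against" another (lower AUROC) at the same time.
for g in sorted(set(groups)):
    mask = groups == g
    group_auc = roc_auc_score(y_true[mask], y_score[mask])
    print(f"{g}: AUROC {group_auc:.3f} (delta vs overall {group_auc - overall_auc:+.3f})")
```

Reporting the per-group deltas alongside the overall metric, rather than a single aggregate fairness score, is what allows bias in both directions to surface, which is the point the abstract makes about current evaluation studies.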

