Algorithmic Bias
Recently Published Documents


TOTAL DOCUMENTS: 129 (112 in the last five years)

H-INDEX: 10 (6 in the last five years)

2022 · Vol 59 (1) · pp. 102791
Author(s): Ludovico Boratto, Stefano Faralli, Mirko Marras, Giovanni Stilo

Author(s): Tobias Röhl

The introduction of artificial intelligence (AI) and other tools based on algorithmic decision-making in education not only provides opportunities but can also lead to ethical problems, such as algorithmic bias and the deskilling of teachers. In this essay I show how these risks can be mitigated.


2021 · pp. 1-30
Author(s): Robert Long

Abstract: As machine learning informs increasingly consequential decisions, different metrics have been proposed for measuring algorithmic bias or unfairness. Two popular "fairness measures" are calibration and equality of false positive rate. Each measure seems intuitively important, but notably, it is usually impossible to satisfy both. For this reason, a large literature in machine learning speaks of a "fairness tradeoff" between the two measures. This framing assumes that both measures are, in fact, capturing something important. To date, philosophers have seldom examined this crucial assumption or asked to what extent each measure actually tracks a normatively important property. This makes the inevitable statistical conflict between calibration and false positive rate equality an important topic for ethics. In this paper, I give an ethical framework for thinking about these measures and argue that, contrary to initial appearances, false positive rate equality is in fact morally irrelevant and does not measure fairness.
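To make the statistical conflict concrete, here is a minimal synthetic sketch (not from the paper) of how the two measures are computed: the per-group false positive rate of a thresholded decision, and per-group calibration of the underlying scores. The data, the two-group setup, and the 0.5 threshold are all illustrative assumptions.

```python
# Minimal synthetic sketch of the two fairness measures discussed in
# the paper. All data, thresholds, and group structure below are
# illustrative assumptions, not the author's examples.
import numpy as np

rng = np.random.default_rng(0)

def false_positive_rate(y_true, y_pred):
    """P(decision = 1 | true label = 0); equality of this quantity
    across groups is the second fairness measure."""
    negatives = y_true == 0
    return y_pred[negatives].mean()

def calibration_by_bin(y_true, scores, n_bins=5):
    """Observed positive rate within each predicted-score bin; for a
    calibrated model this tracks the bin's score range."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    rates = []
    for i in range(n_bins):
        lo, hi = edges[i], edges[i + 1]
        mask = (scores >= lo) & ((scores < hi) | (i == n_bins - 1))
        rates.append(y_true[mask].mean() if mask.any() else np.nan)
    return rates

# Two groups with unequal base rates: the standard setting in which
# calibration and false positive rate equality cannot both hold.
n = 10_000
group = rng.integers(0, 2, n)
base_rate = np.where(group == 0, 0.3, 0.5)
# Individual risk varies around each group's base rate; the model's
# score equals the true risk, so it is calibrated by construction.
p = np.clip(base_rate + rng.normal(0.0, 0.15, n), 0.01, 0.99)
y_true = rng.random(n) < p
scores = p
y_pred = scores > 0.5  # a single shared decision threshold

for g in (0, 1):
    m = group == g
    print(f"group {g}: FPR = {false_positive_rate(y_true[m], y_pred[m]):.3f}, "
          f"calibration by bin = {np.round(calibration_by_bin(y_true[m], scores[m]), 2)}")
```

Because the score equals the true risk, both groups come out calibrated, yet the shared threshold yields a higher false positive rate for the higher-base-rate group: the two measures cannot be equalized at once.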


Author(s): Lisa Herzog

The chapter discusses the problem of algorithmic bias in decision-making processes that determine access to opportunities, such as recidivism scores, college admission decisions, or loan scores. After describing the technical bases of algorithmic bias, it asks how such biases should be evaluated, drawing on Iris Marion Young’s perspective of structural (in)justice. The focus is in particular on the risk of so-called ‘Matthew effects’, in which privileged individuals gain further advantages while those who are already disadvantaged fall further behind. Some proposed solutions are discussed, with an emphasis on the need for a broad, interdisciplinary perspective rather than a purely technical one. The chapter also replies to the objection that private firms cannot be held responsible for addressing structural injustices, and concludes by emphasizing the need for political and social action.


2021 · Vol 29 (6) · pp. 1-27
Author(s): Shahriar Akter, Yogesh K. Dwivedi, Kumar Biswas, Katina Michael, Ruwan J. Bandara, ...

Research on AI has gained momentum in recent years. Many scholars and practitioners increasingly highlight the dark sides of AI, particularly algorithmic bias. This study elucidates situations in which AI-enabled analytics systems make biased decisions against customers based on gender, race, religion, age, nationality, or socioeconomic status. Based on a systematic literature review, this research proposes two approaches (a priori and post-hoc) to overcoming such biases in customer management. For the a priori approach, the findings suggest scientific, application, stakeholder, and assurance consistencies. For the post-hoc approach, the findings recommend six steps: bias identification, review of extant findings, selection of the right variables, responsible and ethical model development, data analysis, and action on insights. Overall, this study contributes to the ethical and responsible use of AI applications.
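As an illustration of the first post-hoc step, bias identification, the sketch below compares group selection rates in customer decisions. The column names, the toy data, and the 80% disparate-impact rule of thumb are assumptions for illustration, not prescriptions from this paper.

```python
# Hedged sketch of a "bias identification" check: compare favorable
# decision rates across a protected attribute. Data and the 80% rule
# of thumb are illustrative conventions, not the authors' method.
import pandas as pd

def selection_rates(df, protected_col, decision_col):
    """Share of favorable decisions within each protected group."""
    return df.groupby(protected_col)[decision_col].mean()

def disparate_impact(df, protected_col, decision_col):
    """Ratio of lowest to highest group selection rate; values below
    ~0.8 are commonly flagged for further review."""
    rates = selection_rates(df, protected_col, decision_col)
    return rates.min() / rates.max()

# Hypothetical customer decisions (1 = offer approved).
df = pd.DataFrame({
    "gender":   ["F", "F", "M", "M", "F", "M", "F", "M"],
    "approved": [0,   1,   1,   1,   0,   1,   1,   1],
})

print(selection_rates(df, "gender", "approved"))
print(f"disparate impact: {disparate_impact(df, 'gender', 'approved'):.2f}")
```

A flagged disparity like this is only the entry point; the remaining steps the authors list (reviewing extant findings, variable selection, and so on) determine whether and how to intervene.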


2021
Author(s): Hossein Estiri, Zachary Strasser, Sina Rashidian, Jeffrey Klann, Kavishwar Wagholikar, ...

The growing recognition of algorithmic bias has spurred discussions about fairness in artificial intelligence (AI) / machine learning (ML) algorithms. The increasing translation of predictive models into clinical practice brings an increased risk of direct harm from algorithmic bias; however, bias remains incompletely measured in many medical AI applications. Using data from more than 56,000 Mass General Brigham (MGB) patients with confirmed severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), we evaluate unrecognized bias in four AI models, developed during the early months of the pandemic in Boston, Massachusetts, that predict the risks of hospital admission, ICU admission, mechanical ventilation, and death after SARS-CoV-2 infection based solely on pre-infection longitudinal medical records. We show that while a model can be biased against certain protected groups (i.e., perform worse for them) on certain tasks, it can at the same time be biased towards other protected groups (i.e., perform better for them). As such, current bias evaluation studies may lack a full depiction of a model's variable effects across its subpopulations. If the goal is positive change, the underlying roots of bias in medical AI need to be fully explored. Only a holistic evaluation, including a diligent search for unrecognized bias, can provide enough information for an unbiased judgment of AI bias, one that can motivate follow-up investigations into the underlying roots of bias and ultimately drive change.
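A minimal synthetic sketch of the kind of subgroup evaluation the authors call for: scoring a single model separately on each protected group, so that both under- and over-performance relative to the overall population become visible. The data, group labels, and choice of AUC as the metric are assumptions for illustration, not the MGB cohort or the paper's models.

```python
# Synthetic sketch: evaluate one model's discrimination (AUC) per
# protected group. Groups, data, and noise levels are illustrative.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 5_000
groups = rng.choice(["A", "B", "C"], size=n)
y_true = rng.integers(0, 2, n)

# Simulate a model whose scores separate the classes better for some
# groups than others (per-group noise level).
noise = {"A": 0.3, "B": 0.6, "C": 1.0}
scores = y_true + np.array([rng.normal(0.0, noise[g]) for g in groups])

overall = roc_auc_score(y_true, scores)
print(f"overall AUC: {overall:.3f}")
for g in ("A", "B", "C"):
    m = groups == g
    delta = roc_auc_score(y_true[m], scores[m]) - overall
    # Positive delta: the model performs better for this group;
    # negative delta: it performs worse.
    print(f"group {g}: AUC delta vs overall = {delta:+.3f}")
```

In this toy setup the overall AUC hides the spread: one group sits above it and another below, which is exactly the pattern the abstract argues single-number evaluations can miss.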

