Bias Detection
Recently Published Documents

Total documents: 114 (five years: 37)
H-index: 18 (five years: 1)

2021, pp. 100020
Author(s): Elhanan Mishraky, Aviv Ben Arie, Yair Horesh, Shir Meir Lador

2021
Author(s): Havvanur Dervisoglu, Mehmet Fatih Amasyali

Author(s): Michaela Hardt, Xiaoguang Chen, Xiaoyi Cheng, Michele Donini, Jason Gelman, et al.

2021
Author(s): Marco Pacini, Federico Nesti, Alessandro Biondi, Giorgio Buttazzo

Author(s): Ashish Garg, Dr. Rajesh SL

Data scientists nowadays make extensive use of black-box AI models (such as neural networks and various ensemble techniques) to solve business problems. Though these models often provide higher accuracy, they are also less explainable and hence more prone to bias. Further, AI systems rely on the available training data and therefore remain prone to data bias as well. Sensitive attributes such as race, religion, gender, and ethnicity can form the basis of unethical bias in the data or the algorithm. As the world becomes increasingly dependent on AI algorithms for a wide range of decisions, such as determining access to services like credit, insurance, and employment, the fairness and ethical aspects of these models are becoming increasingly important. Many bias detection and mitigation algorithms have evolved, and several of them handle indirect attributes without requiring those attributes to be explicitly identified. However, these algorithms have gaps and do not quantify the indirect bias. This paper discusses various bias detection methodologies and the tools and libraries available to detect and mitigate bias. It then presents a new methodical approach to detect and quantify indirect bias in AI/ML models.
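The abstract above refers to bias detection across sensitive attributes such as race or gender. One widely used group-fairness metric (not the paper's own method, which is not given here) is the disparate impact ratio: the favorable-outcome rate of an unprivileged group divided by that of a privileged group. A minimal sketch, using illustrative data and the common "four-fifths rule" threshold:

```python
# Sketch of a basic bias-detection check: the disparate impact ratio.
# All data below is illustrative; 1 = favorable outcome (e.g. loan approved).

def favorable_rate(outcomes):
    """Fraction of favorable (1) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(unprivileged, privileged):
    """Ratio of favorable-outcome rates: unprivileged / privileged.
    Values well below 1.0 indicate the unprivileged group is favored
    less often; ~0.8 is a common flagging threshold (four-fifths rule)."""
    return favorable_rate(unprivileged) / favorable_rate(privileged)

# Hypothetical model decisions for two groups defined by a sensitive attribute.
group_a = [1, 0, 1, 0, 0, 1, 0, 0, 0, 0]  # unprivileged: 30% favorable
group_b = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1]  # privileged:   70% favorable

ratio = disparate_impact(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.30 / 0.70 -> 0.43
if ratio < 0.8:
    print("Potential bias flagged under the four-fifths rule")
```

Libraries such as those the abstract alludes to typically offer this metric alongside many others (statistical parity difference, equalized odds, etc.); quantifying *indirect* bias, carried through proxy attributes, is the gap the paper targets.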


Author(s): Daphna Keidar, Mian Zhong, Ce Zhang, Yash Raj Shrestha, Bibek Paudel
