model bias
Recently Published Documents


TOTAL DOCUMENTS: 238 (five years: 79)
H-INDEX: 25 (five years: 5)

2022 ◽  
Vol 12 ◽  
Author(s):  
Samuel P. Border ◽  
Pinaki Sarder

While it is impossible to deny the performance gains achieved by incorporating deep learning (DL) and other artificial intelligence (AI)-based techniques in pathology, minimal work has been done to answer the crucial question of why these algorithms predict what they predict. Tracing classification decisions back to specific input features allows for the quick identification of model bias and provides additional information toward understanding underlying biological mechanisms. In digital pathology, increasing the explainability of AI models would have the largest and most immediate impact on the image classification task. In this review, we detail some considerations that should be made in order to develop models with a focus on explainability.
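The abstract's core idea, tracing a classification decision back to specific input features, can be illustrated with the simplest attribution scheme: gradient-times-input. For a linear score the gradient with respect to the input is just the weight vector, so the per-feature attribution is the elementwise product of weights and features. The weights and feature names below are hypothetical illustrations, not taken from the review:

```python
import numpy as np

def gradient_x_input(weights, x):
    """For a linear score s = w . x, the gradient w.r.t. x is w,
    so the per-feature attribution is w * x (elementwise)."""
    return weights * x

# Hypothetical pathology features and learned weights
feature_names = ["nuclear_area", "stain_intensity", "cell_density"]
w = np.array([0.8, -0.2, 0.5])   # assumed learned weights
x = np.array([1.2, 0.4, 0.9])    # one input sample

attributions = gradient_x_input(w, x)
# A spurious feature (e.g. a scanner artifact) receiving a large
# attribution is exactly the kind of model bias such tracing exposes.
for name, a in zip(feature_names, attributions):
    print(f"{name}: {a:+.2f}")
```

For deep models the same idea generalises to saliency maps, where the gradient is taken through the network rather than read off a weight vector.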


2021 ◽  
pp. 1-62
Author(s):  
Qi Tang ◽  
Noel D. Keen ◽  
Jean-Christophe Golaz ◽  
Luke P. van Roekel

Abstract We evaluate the simulated teleconnection of the El Niño Southern Oscillation (ENSO) to winter-season precipitation extremes over the United States in a long (98-year) 1950-control simulation of the high-resolution version (HR, 25 km nominal atmosphere-model horizontal resolution) of the US Department of Energy's (DOE) Energy Exascale Earth System Model version 1 (E3SMv1). The model bias and spatial pattern of ENSO teleconnections to mean and extreme precipitation in HR are overall similar to those of the low-resolution model's (LR, 110 km) historical simulation (4-member ensemble, 1925-1959). However, over the Southeast US (SE-US), HR produces stronger El Niño-associated extremes, reducing LR's model bias. Both LR and HR produce a weaker-than-observed increase in storm-track activity there during El Niño events, but HR improves the ENSO-associated variability of moisture transport over the SE-US. During El Niño, stronger vertical velocities in HR produce stronger large-scale precipitation, causing larger latent heating of the troposphere that pulls more moisture from the Gulf of Mexico into the SE-US. This positive feedback also contributes to the stronger mean and extreme precipitation response in HR. Over the Pacific Northwest, LR's bias of stronger-than-observed La Niña-associated extremes is amplified in HR. Both models simulate stronger-than-observed moisture transport from the Pacific Ocean into the region during La Niña years. The amplified HR bias there is due to stronger orographically driven vertical updrafts that create stronger large-scale precipitation, despite weaker La Niña-induced storm-track activity.
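The kind of teleconnection evaluation described here typically rests on a composite analysis: average a precipitation-extreme index over El Niño winters and compare against neutral winters. A minimal sketch with synthetic data (the yearly values and the ±0.5 °C Niño3.4 thresholds are illustrative assumptions, not E3SM output):

```python
import numpy as np

rng = np.random.default_rng(0)
years = 98
nino34 = rng.normal(0.0, 1.0, years)  # winter-mean Nino3.4 anomaly (degC)
# Synthetic extreme-precip index made to co-vary with ENSO,
# mimicking the positive relationship reported over the SE-US
precip_extreme = 10.0 + 2.0 * nino34 + rng.normal(0.0, 1.0, years)

el_nino = nino34 > 0.5    # common composite threshold (assumed)
la_nina = nino34 < -0.5
neutral = ~el_nino & ~la_nina

# Composite difference: El Nino winters minus neutral winters
composite = precip_extreme[el_nino].mean() - precip_extreme[neutral].mean()
print(f"El Nino minus neutral extreme-precip composite: {composite:.2f}")
```

Comparing such composites between the model and observations (and between HR and LR) is what quantifies the teleconnection biases the abstract discusses.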


2021 ◽  
Vol 74 ◽  
pp. 102225
Author(s):  
Beatriz Garcia Santa Cruz ◽  
Matías Nicolás Bossa ◽  
Jan Sölter ◽  
Andreas Dominik Husch

2021 ◽  
Vol 12 ◽  
Author(s):  
Heewon Chung ◽  
Chul Park ◽  
Wu Seong Kang ◽  
Jinseok Lee

Artificial intelligence (AI) technologies have been applied in various medical domains to predict patient outcomes with high accuracy. As AI becomes more widely adopted, the problem of model bias is increasingly apparent. In this study, we investigate the model bias that can occur when training a model using datasets from only one particular gender, and we aim to present new insights into the bias issue. For the investigation, we considered an AI model that predicts severity at an early stage based on the medical records of coronavirus disease (COVID-19) patients. For 5,601 confirmed COVID-19 patients, we used 37 medical-record variables, namely basic patient information, physical indices, initial examination findings, clinical findings, comorbid diseases, and general blood-test results at an early stage. To investigate gender-based AI model bias, we trained and evaluated two separate models: one trained using only the male group, and the other using only the female group. When the model trained on the male-group data was applied to the female testing data, the overall performance decreased: sensitivity from 0.93 to 0.86, specificity from 0.92 to 0.86, accuracy from 0.92 to 0.86, balanced accuracy from 0.93 to 0.86, and area under the curve (AUC) from 0.97 to 0.94. Similarly, when the model trained on the female-group data was applied to the male testing data, the overall performance again decreased: sensitivity from 0.97 to 0.90, specificity from 0.96 to 0.91, accuracy from 0.96 to 0.91, balanced accuracy from 0.96 to 0.90, and AUC from 0.97 to 0.95. Furthermore, when we evaluated each gender-dependent model on test data from the same gender used for training, the resulting accuracy was still lower than that of the unbiased model.
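The evaluation protocol in this study (train on one group, then compare same-group vs. cross-group test accuracy) can be sketched in a few lines. The synthetic data, the single shifted feature per group, and the nearest-centroid classifier below are illustrative assumptions, not the paper's actual model or dataset:

```python
import numpy as np

rng = np.random.default_rng(42)

def make_group(n, severity_shift):
    """Synthetic 'medical records': severe cases (y=1) are shifted along a
    feature that differs by group, mimicking gender-dependent distributions."""
    y = rng.integers(0, 2, n)
    X = rng.normal(0.0, 1.0, (n, 5)) + severity_shift * y[:, None]
    return X, y

def fit_centroids(X, y):
    """A deliberately simple classifier: one centroid per class."""
    return {c: X[y == c].mean(axis=0) for c in (0, 1)}

def accuracy(centroids, X, y):
    d0 = np.linalg.norm(X - centroids[0], axis=1)
    d1 = np.linalg.norm(X - centroids[1], axis=1)
    return ((d1 < d0).astype(int) == y).mean()

# The severity signal lives in a different feature for each group
X_m, y_m = make_group(500, severity_shift=np.array([2, 0, 0, 0, 0]))
X_f, y_f = make_group(500, severity_shift=np.array([0, 2, 0, 0, 0]))

male_model = fit_centroids(X_m[:400], y_m[:400])
same = accuracy(male_model, X_m[400:], y_m[400:])   # male-trained, male test
cross = accuracy(male_model, X_f[400:], y_f[400:])  # male-trained, female test
print(f"same-group accuracy:  {same:.2f}")
print(f"cross-group accuracy: {cross:.2f}")
```

The cross-group drop this toy setup produces is the qualitative effect the study measures with real COVID-19 records and richer metrics (sensitivity, specificity, AUC).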


Author(s):  
Jörg F. Unger ◽  
Isabela Coelho Lima ◽  
Abbas Jafari ◽  
Thomas Titscher ◽  
Annika Robens-Radermacher

2021 ◽  
Author(s):  
Diederick Vermetten ◽  
Bas van Stein ◽  
Fabio Caraffini ◽  
Leandro Minku ◽  
Anna V. Kononova

Benchmarking heuristic algorithms is vital to understanding under which conditions and on what kinds of problems certain algorithms perform well. Most benchmarks are performance-based and test algorithm performance under a wide set of conditions; there are also resource- and behaviour-based benchmarks that test the resource consumption and the behaviour of algorithms. In this article, we propose a novel behaviour-based benchmark toolbox: BIAS (Bias in Algorithms, Structural). The toolbox can detect structural bias per dimension and across dimensions based on 39 statistical tests. Moreover, it predicts the type of structural bias using a Random Forest model. BIAS can be used to better understand and improve existing algorithms (by removing bias) as well as to test novel algorithms for structural bias at an early phase of development. Experiments with a large set of generated structural-bias scenarios show that BIAS successfully identifies bias. In addition, we provide the results of BIAS on 432 existing state-of-the-art optimisation algorithms, showing that different kinds of structural bias are present in these algorithms, mostly towards the centre of the objective space or exhibiting discretisation behaviour. The toolbox is made available open-source, and recommendations are provided for the sample size and hyper-parameters to use when applying it to other algorithms.
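The core idea behind structural-bias testing is that on a purely random objective, an unbiased optimiser's final best points should be uniformly distributed over the search space, so any per-dimension deviation from uniformity signals bias. A minimal sketch using a one-sample Kolmogorov-Smirnov statistic (the "centre-biased" toy distribution of final points is an illustrative assumption, not BIAS's actual test battery):

```python
import numpy as np

rng = np.random.default_rng(1)

def ks_uniform(samples):
    """KS distance between the empirical CDF of samples and Uniform(0,1)."""
    x = np.sort(samples)
    n = len(x)
    ecdf_hi = np.arange(1, n + 1) / n
    ecdf_lo = np.arange(0, n) / n
    return max(np.max(np.abs(ecdf_hi - x)), np.max(np.abs(x - ecdf_lo)))

n_runs = 200
# Final best positions (one dimension) from many independent runs:
unbiased = rng.uniform(0, 1, n_runs)                  # ideal: uniform finals
centre_biased = np.clip(0.5 + 0.1 * rng.normal(size=n_runs), 0, 1)

crit = 1.36 / np.sqrt(n_runs)  # approx. 5% critical value for the KS test
print(f"unbiased KS={ks_uniform(unbiased):.3f} (critical {crit:.3f})")
print(f"centre   KS={ks_uniform(centre_biased):.3f} (critical {crit:.3f})")
```

The actual toolbox combines 39 such statistical tests per dimension and across dimensions, and classifies the bias type (e.g. centre bias, discretisation) with a Random Forest.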


2021 ◽  
Vol 104 ◽  
pp. 104366
Author(s):  
Phillip Swazinna ◽  
Steffen Udluft ◽  
Thomas Runkler

Author(s):  
Remi Madelon ◽  
Nemesio J. Rodriguez-Fernandez ◽  
Robin Van der Schalie ◽  
Y. Kerr ◽  
A. Albitar ◽  
...  
Keyword(s):  
