How Do Existing Fairness Metrics and Unfairness Mitigation Algorithms Contribute to Ethical Learning Analytics?
With the widespread use of learning analytics (LA), ethical concerns about fairness have been raised. Research shows that LA models may be biased against students of certain demographic groups. Although fairness has gained significant attention in the broader machine learning (ML) community in the last decade, it is only recently that attention has been paid to fairness in LA. Furthermore, how to choose an unfairness mitigation algorithm or fairness metric for a particular context remains largely an open question. On this premise, we performed a comparative evaluation of selected unfairness mitigation algorithms regarded in the fair ML community as having shown promising results. Using three years of program dropout data from an Australian university, we evaluated how these unfairness mitigation algorithms contribute to ethical LA by testing several hypotheses across fairness and performance metrics. Interestingly, our results show that data bias does not necessarily result in predictive bias. Perhaps not surprisingly, our test of the fairness-utility tradeoff shows that ensuring fairness does not always lead to a drop in utility; indeed, our results show that ensuring fairness can even enhance utility under specific circumstances. Our findings may, to some extent, guide the selection of fairness algorithms and metrics for a given context.