Standardized Interpretable Fairness Measures for Continuous Risk Scores
2023-08-22 · International Conference on Machine Learning 2024 · Code Available
Ann-Kristin Becker, Oana Dumitrasc, Klaus Broelemann
- github.com/schufa-innovationlab/fair-scoring (official, referenced in paper) — ★ 3
Abstract
We propose a standardized version of fairness measures for continuous scores with a reasonable interpretation based on the Wasserstein distance. Our measures are easily computable and well suited for quantifying and interpreting the strength of group disparities as well as for comparing biases across different models, datasets, or time points. We derive a link between the different families of existing fairness measures for scores and show that the proposed standardized fairness measures outperform ROC-based fairness measures because they are more explicit and can quantify significant biases that ROC-based fairness measures miss.
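As a rough illustration of the idea in the abstract, the following sketch computes a Wasserstein distance between the score distributions of two groups and scales it by the length of the score range so the result lies in [0, 1]. This is a hypothetical minimal implementation, not the paper's exact standardization: the function name `standardized_wasserstein_bias`, the quantile-based 1-Wasserstein computation, and the range-based normalization are all assumptions made here for illustration.

```python
import numpy as np

def standardized_wasserstein_bias(scores_a, scores_b,
                                  score_min=0.0, score_max=1.0):
    """Illustrative standardized fairness measure: the 1-Wasserstein
    distance between two empirical score distributions, computed via
    their quantile functions and scaled by the score range so that
    the result lies in [0, 1]. (Sketch only; the paper's exact
    standardization may differ.)"""
    qs = np.linspace(0.0, 1.0, 1001)
    qa = np.quantile(np.asarray(scores_a, dtype=float), qs)
    qb = np.quantile(np.asarray(scores_b, dtype=float), qs)
    # 1-Wasserstein distance in 1D = integral of |F_a^{-1}(q) - F_b^{-1}(q)| dq
    w1 = np.trapz(np.abs(qa - qb), qs)
    return w1 / (score_max - score_min)

# Synthetic example: two groups with slightly shifted score distributions.
rng = np.random.default_rng(0)
group_a = rng.beta(2.0, 5.0, size=10_000)
group_b = rng.beta(2.5, 5.0, size=10_000)
bias = standardized_wasserstein_bias(group_a, group_b)
print(f"standardized bias: {bias:.3f}")
```

Because the measure is standardized to [0, 1], values are directly comparable across models, datasets, or time points, which is the interpretability property the abstract emphasizes.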