
Wasserstein-based fairness interpretability framework for machine learning models

2020-11-06

Alexey Miroshnikov, Konstandinos Kotsiopoulos, Ryan Franks, Arjun Ravi Kannan

Abstract

The objective of this article is to introduce a fairness interpretability framework for measuring and explaining the bias in classification and regression models at the level of a distribution. In our work, we measure the model bias across sub-population distributions in the model output using the Wasserstein metric. To properly quantify the contributions of predictors, we take into account the favorability of both the model and predictors with respect to the non-protected class. The quantification is accomplished using transport theory, which gives rise to the decomposition of the model bias and bias explanations into positive and negative contributions. To gain more insight into the role of favorability and allow for additivity of bias explanations, we adapt techniques from cooperative game theory.
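As a rough illustration (not the authors' implementation), the sketch below estimates the Wasserstein-1 distance between two sub-population score distributions and splits it into positive and negative transport contributions via empirical quantile functions, mirroring the decomposition described in the abstract. The function name, the synthetic data, and the assumption that higher scores are the favorable direction are all illustrative choices, not taken from the paper.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def w1_bias_decomposition(scores_protected, scores_other, n_grid=10_000):
    """Estimate the W1 distance between two 1-D score distributions and
    split it into positive/negative parts via empirical quantile functions.

    For 1-D distributions, W1(F, G) = integral over t in (0, 1) of
    |F^{-1}(t) - G^{-1}(t)| dt. The positive part collects the mass where
    the non-protected quantile exceeds the protected one (assuming higher
    scores are favorable); the negative part collects the reverse. Their
    sum equals the total W1 distance.
    """
    t = (np.arange(n_grid) + 0.5) / n_grid      # quantile levels in (0, 1)
    q_p = np.quantile(scores_protected, t)      # protected-class quantiles
    q_o = np.quantile(scores_other, t)          # non-protected-class quantiles
    diff = q_o - q_p
    positive = np.mean(np.clip(diff, 0, None))  # integral of (diff)_+ dt
    negative = np.mean(np.clip(-diff, 0, None)) # integral of (diff)_- dt
    return positive + negative, positive, negative

# Synthetic example: model scores for two sub-populations
rng = np.random.default_rng(0)
s_protected = rng.beta(2, 5, size=5000)
s_other = rng.beta(3, 4, size=5000)

w1, pos, neg = w1_bias_decomposition(s_protected, s_other)
print(f"W1 bias ~ {w1:.4f} (positive {pos:.4f}, negative {neg:.4f})")
print(f"scipy check: {wasserstein_distance(s_protected, s_other):.4f}")
```

The quantile-based formula is a standard identity for the 1-D Wasserstein-1 distance; the paper's contribution of attributing these positive and negative parts to individual predictors (via cooperative game theory) is not reproduced here.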
