SOTAVerified

L_q regularization for Fairness AI robust to sampling bias

2021-09-29

Yongdai Kim, Sara Kim, Seonghyeon Kim, Kunwoong Kim


Abstract

It is well recognized that training data often contain historical biases against certain sensitive groups (e.g., non-white people, women) that are socially unacceptable, and that trained AI models inherit these unfair biases. Various learning algorithms have been proposed to remove or alleviate unfair biases in trained AI models. In this paper, we consider another type of bias in training data, so-called sampling bias, from the viewpoint of fairness AI. Here, sampling bias means that the training data do not represent the population of interest well. Sampling bias occurs when special sampling designs (e.g., stratified sampling) are used to collect training data, or when the population from which training data are collected differs from the population of interest. When sampling bias exists, AI models that are fair on training data may not be fair on test data. To ensure fairness on test data, we develop computationally efficient learning algorithms that are robust to sampling bias. In particular, we propose a robust fairness constraint based on the L_q norm, a generic formulation that can be applied to various fairness AI problems with little overhead. By analyzing multiple benchmark data sets, we show that our proposed robust fairness AI algorithm substantially improves on existing fair AI algorithms in terms of robustness to sampling bias, and has significant computational advantages compared to other robust fair AI algorithms.
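To illustrate the general idea of an L_q-based fairness penalty, here is a minimal sketch, not the paper's actual algorithm. It assumes a demographic-parity-style formulation in which per-group deviations of the mean predicted score from the overall mean are aggregated with an L_q norm; as q grows, the penalty approaches the worst-group gap, which is one intuition for robustness to groups that are under-sampled in training data. The function name and this exact formulation are illustrative assumptions.

```python
import numpy as np

def lq_fairness_penalty(scores, groups, q=4.0):
    """Hypothetical L_q fairness penalty (illustrative, not the paper's method).

    Computes the L_q norm of each group's deviation of mean predicted
    score from the overall mean. Larger q weights the worst-off group
    more heavily (q -> infinity recovers the maximum group gap).
    """
    scores = np.asarray(scores, dtype=float)
    groups = np.asarray(groups)
    overall = scores.mean()
    # Absolute gap between each group's mean score and the overall mean.
    gaps = np.array([abs(scores[groups == g].mean() - overall)
                     for g in np.unique(groups)])
    return float((gaps ** q).sum() ** (1.0 / q))

# Toy example: two sensitive groups with different mean scores.
scores = np.array([0.9, 0.8, 0.2, 0.3])
groups = np.array([0, 0, 1, 1])
penalty = lq_fairness_penalty(scores, groups, q=4.0)
```

In a training loop, such a penalty would typically be added to the prediction loss with a tuning weight, trading accuracy against the fairness constraint.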
