
A Fair Empirical Risk Minimization with Generalized Entropy

2022-02-24 · Code Available

Youngmi Jin, Jio Gim, Tae-Jin Lee, Young-Joo Suh


Abstract

This paper studies a parametric family of algorithmic fairness metrics, called generalized entropy, which was originally used in public welfare economics and has recently been introduced to the machine learning community. For generalized entropy to serve as a meaningful metric of algorithmic fairness, it must be able to specify the fairness requirements of a classification problem, and an algorithm should realize those requirements with small deviation. We investigate the role of generalized entropy as a design parameter for fair classification algorithms through a fair empirical risk minimization problem with a constraint specified in terms of generalized entropy. We study the learnability of this problem both theoretically and experimentally.
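The abstract does not spell out the metric, but the generalized entropy index from the welfare literature (as adapted to fairness by prior work) has a standard closed form: for a vector of per-individual "benefits" b with mean μ and a parameter α ∉ {0, 1}, GE(α) = 1/(n·α(α−1)) · Σᵢ[(bᵢ/μ)^α − 1]. A minimal sketch, assuming that formulation and the common benefit definition bᵢ = ŷᵢ − yᵢ + 1 (both are assumptions about this paper's setup, not details it states):

```python
import numpy as np

def generalized_entropy_index(b, alpha=2.0):
    """Generalized entropy index of a benefit vector b, for alpha not in {0, 1}.

    GE(alpha) = mean((b_i / mu)**alpha - 1) / (alpha * (alpha - 1)),
    where mu is the mean benefit. GE = 0 iff all benefits are equal.
    """
    b = np.asarray(b, dtype=float)
    mu = b.mean()
    return np.mean((b / mu) ** alpha - 1.0) / (alpha * (alpha - 1.0))

# Hypothetical example: benefit b_i = y_pred_i - y_true_i + 1,
# so a false positive gets benefit 2, a false negative 0, correct predictions 1.
y_true = np.array([1, 0, 1, 1, 0, 0])
y_pred = np.array([1, 0, 0, 1, 1, 0])
b = y_pred - y_true + 1
print(generalized_entropy_index(b, alpha=2.0))  # larger value = more unequal benefits
```

In a fair-ERM formulation of the kind the abstract describes, a quantity like this would enter as a constraint (e.g. GE(α) ≤ ε) alongside the usual empirical risk objective.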
