
Mitigating Gender Bias in Face Recognition Using the von Mises-Fisher Mixture Model

2022-10-24 · Code Available

Jean-Rémy Conti, Nathan Noiry, Vincent Despiegel, Stéphane Gentric, Stéphan Clémençon


Abstract

In spite of the high performance and reliability of deep learning algorithms across a wide range of everyday applications, many investigations show that numerous models exhibit biases, discriminating against specific subgroups of the population (e.g. gender, ethnicity). This urges practitioners to develop fair systems whose performance is uniform, or at least comparable, across sensitive groups. In this work, we investigate the gender bias of deep Face Recognition networks. To measure this bias, we introduce two new metrics, BFAR and BFRR, that better reflect the inherent deployment needs of Face Recognition systems. Motivated by geometric considerations, we mitigate gender bias through a new post-processing methodology that transforms the deep embeddings of a pre-trained model to give more representation power to discriminated subgroups. It consists of training a shallow neural network by minimizing a Fair von Mises-Fisher loss whose hyperparameters account for the intra-class variance of each gender. Interestingly, we empirically observe that these hyperparameters are correlated with our fairness metrics. Indeed, extensive numerical experiments on a variety of datasets show that a careful selection of these hyperparameters significantly reduces gender bias. The code used for the experiments can be found at https://github.com/JRConti/EthicalModule_vMF.
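As a rough illustration of the loss the abstract describes, here is a minimal PyTorch-style sketch. It assumes gender labels are encoded 0/1, that the two concentration hyperparameters kappa are fixed per gender (not learned), and that the vMF normalizing constant C_d(kappa) is evaluated with SciPy's Bessel functions. Class and argument names are hypothetical; the authors' actual implementation is in the linked repository.

```python
import math

import torch
import torch.nn as nn
import torch.nn.functional as F
from scipy.special import ive  # exponentially scaled modified Bessel function


def log_vmf_const(kappa: float, dim: int) -> float:
    """log C_d(kappa) for the vMF density on the unit sphere in R^dim.

    C_d(kappa) = kappa^(d/2 - 1) / ((2*pi)^(d/2) * I_{d/2-1}(kappa)).
    Uses log I_v(k) = log ive(v, k) + k for numerical stability.
    """
    v = dim / 2.0 - 1.0
    log_bessel = math.log(ive(v, kappa)) + kappa
    return v * math.log(kappa) - (dim / 2.0) * math.log(2 * math.pi) - log_bessel


class FairVMFLoss(nn.Module):
    """Hypothetical sketch of a fair vMF mixture loss.

    Each identity j gets a concentration kappa_j chosen by its gender
    (0 -> kappa_f, 1 -> kappa_m); embeddings come from a shallow network
    applied on top of frozen pre-trained features, as in the abstract.
    """

    def __init__(self, num_classes, dim, class_gender, kappa_f, kappa_m):
        super().__init__()
        # one learnable centroid per identity, normalized at use time
        self.centroids = nn.Parameter(torch.randn(num_classes, dim))
        kappa = torch.where(
            class_gender == 0,
            torch.full((num_classes,), float(kappa_f)),
            torch.full((num_classes,), float(kappa_m)),
        )
        self.register_buffer("kappa", kappa)
        log_c = torch.tensor([log_vmf_const(k.item(), dim) for k in kappa])
        self.register_buffer("log_c", log_c.float())

    def forward(self, z, labels):
        z = F.normalize(z, dim=1)                # embeddings on the unit sphere
        mu = F.normalize(self.centroids, dim=1)  # class centroids on the sphere
        # mixture-component log-likelihoods: log C_d(kappa_j) + kappa_j * <mu_j, z>
        logits = self.log_c + self.kappa * (z @ mu.t())
        # cross-entropy gives the negative log of the normalized vMF mixture term
        return F.cross_entropy(logits, labels)
```

Keeping the two kappas as fixed hyperparameters rather than learned parameters matches the abstract's framing: since they empirically correlate with the fairness metrics, they can be tuned directly to trade off BFAR against BFRR.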

Tasks

Face Recognition

Benchmark Results

Dataset   Model                 Metric   Claimed   Verified   Status
LFW       ArcFaceR50 + EM-FRR   BFAR     33.65     -          Unverified
LFW       ArcFaceR50 + EM-C     BFAR     2.44      -          Unverified
LFW       ArcFaceR50 + EM-FAR   BFAR     2.11      -          Unverified
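
The table reports BFAR, one of the two fairness metrics the abstract introduces. As context, here is a hedged sketch of how such a number could be computed from verification scores, assuming BFAR is the max-to-min ratio of per-gender false acceptance rates at the threshold where the global FAR hits a target level; the function name and the exact definition are assumptions, and the paper should be consulted for the precise formula.

```python
import numpy as np


def bfar(scores, is_genuine, gender, target_far=1e-3):
    """Hypothetical BFAR-style disparity measure.

    scores:     similarity score for each verification pair
    is_genuine: True for same-identity pairs, False for impostor pairs
    gender:     per-pair gender label (pairs assumed gender-homogeneous here)
    """
    impostor = ~is_genuine
    # single global threshold at which the overall FAR equals target_far
    thr = np.quantile(scores[impostor], 1.0 - target_far)
    fars = []
    for g in np.unique(gender):
        mask = impostor & (gender == g)
        fars.append(np.mean(scores[mask] >= thr))  # per-gender FAR at thr
    # ratio of worst to best subgroup FAR; 1.0 means perfectly balanced
    return max(fars) / max(min(fars), 1e-12)
```

Under this reading, the EM-FAR row's claimed BFAR of 2.11 would mean the worse-off gender is falsely accepted roughly twice as often as the better-off one at the chosen operating point.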

Reproductions

None yet; be the first to reproduce this paper.