
Impact of Data Distribution on Fairness Guarantees in Equitable Deep Learning

2024-12-29

Yan Luo, Congcong Wen, Min Shi, Hao Huang, Yi Fang, Mengyu Wang

Abstract

We present a comprehensive theoretical framework analyzing the relationship between data distributions and fairness guarantees in equitable deep learning. Our work establishes novel theoretical bounds that explicitly account for data distribution heterogeneity across demographic groups, while introducing a formal analysis framework that minimizes expected loss differences across these groups. We derive comprehensive theoretical bounds for fairness errors and convergence rates, and characterize how distributional differences between groups affect the fundamental trade-off between fairness and accuracy. Through extensive experiments on diverse datasets, including FairVision (ophthalmology), CheXpert (chest X-rays), HAM10000 (dermatology), and FairFace (facial recognition), we validate our theoretical findings and demonstrate that differences in feature distributions across demographic groups significantly impact model fairness, with performance disparities particularly pronounced in racial categories. The theoretical bounds we derive corroborate these empirical observations, providing insights into the fundamental limits of achieving fairness in deep learning models when faced with heterogeneous data distributions. This work advances our understanding of fairness in AI-based diagnosis systems and provides a theoretical foundation for developing more equitable algorithms. The code for the analysis is publicly available via https://github.com/Harvard-Ophthalmology-AI-Lab/fairness_guarantees.
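The fairness error the abstract refers to, the expected loss difference across demographic groups, can be estimated empirically as the gap between group-wise mean losses. The sketch below is illustrative only: the function name, the choice of the max-min gap as the disparity measure, and the input format are assumptions, not the paper's exact formulation.

```python
import numpy as np

def group_loss_gap(losses, groups):
    """Empirical fairness-error proxy: the largest difference in mean
    per-sample loss between any two demographic groups.

    losses: per-sample loss values (1-D array-like of floats)
    groups: per-sample demographic group labels (same length as losses)

    This is a simple illustrative estimate of the 'expected loss
    difference across groups'; the paper's formal definition may differ.
    """
    losses = np.asarray(losses, dtype=float)
    groups = np.asarray(groups)
    # Mean loss within each demographic group
    group_means = [losses[groups == g].mean() for g in np.unique(groups)]
    # Worst-case pairwise disparity between group means
    return max(group_means) - min(group_means)

# Example: group 'a' has mean loss 0.3, group 'b' has mean loss 0.7,
# so the disparity is 0.4.
gap = group_loss_gap([0.2, 0.4, 0.8, 0.6], ["a", "a", "b", "b"])
```

A model that is perfectly equitable under this proxy would drive the gap to zero; the paper's bounds characterize how far distributional heterogeneity between groups keeps this quantity from zero regardless of training.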
