
Addressing Discretization-Induced Bias in Demographic Prediction

2024-05-27

Evan Dong, Aaron Schein, Yixin Wang, Nikhil Garg


Abstract

Racial and other demographic imputation is necessary for many applications, especially in auditing disparities and in outreach targeting for political campaigns. The canonical approach is to construct continuous predictions -- e.g., based on name and geography -- and then to discretize the predictions by selecting the most likely class (argmax). We study how this practice produces discretization bias. In particular, we show that argmax labeling, as used by a prominent commercial voter file vendor to impute race/ethnicity, results in a substantial under-count of African-American voters, e.g., by 28.2 percentage points in North Carolina. This bias can have substantial implications for downstream tasks that use such labels. We then introduce a joint optimization approach -- and a tractable data-driven thresholding heuristic -- that can eliminate this bias with negligible individual-level accuracy loss. Finally, we theoretically analyze discretization bias, showing that calibrated continuous models are insufficient to eliminate it and that an approach such as ours is necessary. Broadly, we warn researchers and practitioners against discretizing continuous demographic predictions without considering downstream consequences.
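The core phenomenon can be illustrated with a small simulation. This is a hedged sketch, not the paper's method: the distribution of probabilities, the class setup, and the quantile-based threshold are all assumptions chosen for illustration. It shows how argmax labeling under-counts a class whose calibrated probabilities mostly sit below 0.5, and how a simple data-driven threshold that matches the expected class count (one plausible analogue of the thresholding heuristic the abstract mentions) removes the aggregate bias.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical calibrated probabilities that an individual belongs to
# class B. Most individuals have p_b below 0.5, so argmax rarely picks B
# even though the population-level share of B is substantial.
p_b = np.clip(rng.normal(0.35, 0.15, n), 0.01, 0.99)

# Simulate "true" labels consistent with calibration: each individual
# is in class B with probability p_b.
true_b = rng.random(n) < p_b
true_count = int(true_b.sum())

# Argmax labeling: label B only when p_b > 0.5. Under-counts B here.
argmax_count = int((p_b > 0.5).sum())

# Count-matching threshold (illustrative heuristic, not the paper's exact
# procedure): sum(p_b) is an unbiased estimate of the true B count, so
# choose the threshold t such that exactly that many individuals are
# labeled B.
expected = p_b.sum()
t = np.quantile(p_b, 1 - expected / n)
thresh_count = int((p_b >= t).sum())
```

Because the model is calibrated, `sum(p_b)` estimates the true count of class B without bias, so the thresholded labels match the aggregate even though argmax does not; this mirrors the abstract's point that calibration alone does not fix discretization bias introduced by argmax.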
