
seeBias: A Comprehensive Tool for Assessing and Visualizing AI Fairness

2025-04-11

Yilin Ning, Yian Ma, Mingxuan Liu, Xin Li, Nan Liu


Abstract

Fairness in artificial intelligence (AI) prediction models is increasingly emphasized to support responsible adoption in high-stakes domains such as healthcare and criminal justice. Guidelines and implementation frameworks highlight the importance of both predictive accuracy and equitable outcomes. However, current fairness toolkits often evaluate classification performance disparities in isolation, with limited attention to other critical aspects such as calibration. To address these gaps, we present seeBias, an R package for comprehensive evaluation of model fairness and predictive performance. seeBias offers an integrated evaluation across classification, calibration, and other performance domains, providing a more complete view of model behavior. It includes customizable visualizations to support transparent reporting and responsible AI implementation. Using public datasets from criminal justice and healthcare, we demonstrate how seeBias supports fairness evaluations and uncovers disparities that conventional fairness metrics may overlook. The R package is available on GitHub, and a Python version is under development.
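
This listing does not show the package's interface, so the following is a minimal base-R sketch of the kind of integrated evaluation the abstract describes, not the seeBias API itself: group-wise classification metrics (sensitivity and specificity) assessed alongside calibration (calibration-in-the-large and binned calibration curves) across a sensitive attribute. All data, group labels, and names below are hypothetical illustrations; consult the package's GitHub repository for its actual functions.

```r
# Hypothetical illustration only -- this is NOT the seeBias API.
# It sketches an integrated fairness evaluation: classification metrics
# and calibration assessed per subgroup of a sensitive attribute.
set.seed(42)

# Simulated cohort: a two-level sensitive attribute, binary outcomes, and a
# model that is deliberately overconfident for group B.
n <- 2000
group <- sample(c("A", "B"), n, replace = TRUE)
true_prob <- plogis(rnorm(n) + ifelse(group == "A", 0.3, -0.3))
y <- rbinom(n, 1, true_prob)
pred <- plogis(qlogis(true_prob) + ifelse(group == "B", 0.5, 0))

# Classify at a fixed threshold.
pred_class <- as.integer(pred >= 0.5)

# Per-group sensitivity, specificity, and calibration-in-the-large
# (mean predicted risk minus observed event rate; 0 means well calibrated).
metrics_by_group <- function(g) {
  idx <- group == g
  c(sensitivity    = mean(pred_class[idx][y[idx] == 1]),
    specificity    = mean(1 - pred_class[idx][y[idx] == 0]),
    calib_in_large = mean(pred[idx]) - mean(y[idx]))
}
print(round(sapply(c("A", "B"), metrics_by_group), 3))

# Group-wise calibration curves from decile-binned predictions: a
# well-calibrated group's curve lies on the diagonal.
plot(0:1, 0:1, type = "l", lty = 2,
     xlab = "Mean predicted risk", ylab = "Observed event rate",
     main = "Calibration by group")
for (g in c("A", "B")) {
  idx  <- group == g
  bins <- cut(pred[idx],
              breaks = quantile(pred[idx], probs = seq(0, 1, 0.1)),
              include.lowest = TRUE)
  lines(tapply(pred[idx], bins, mean), tapply(y[idx], bins, mean),
        type = "b", pch = 19, col = ifelse(g == "A", "steelblue", "firebrick"))
}
legend("topleft", legend = c("Group A", "Group B"),
       col = c("steelblue", "firebrick"), pch = 19, lty = 1)
```

In this simulation, group B's inflated predictions surface both as a calibration curve drifting off the diagonal and as shifted sensitivity and specificity, illustrating why evaluating calibration alongside classification, as the abstract proposes, gives a fuller picture than threshold-based disparity metrics alone.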
