SOTAVerified

Towards Fair In-Context Learning with Tabular Foundation Models

2025-05-14 · Code Available

Patrik Kenfack, Samira Ebrahimi Kahou, Ulrich Aïvodji


Abstract

Tabular foundation models have exhibited strong in-context learning (ICL) capabilities on structured data, allowing them to make accurate predictions on test sets without parameter updates, using training examples as context. This emerging approach positions itself as a competitive alternative to traditional gradient-boosted tree methods. However, while biases in conventional machine learning models are well documented, it remains unclear how these biases manifest in tabular ICL. This paper investigates the fairness implications of tabular ICL and explores three preprocessing strategies to address bias: correlation removal, group-balanced demonstration selection, and uncertainty-based demonstration selection. Comprehensive experiments indicate that uncertainty-based demonstration selection consistently enhances the group fairness of in-context predictions. The source code for reproducing the results of this work is available at https://github.com/patrikken/Fair-TabICL.
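One of the strategies named in the abstract, group-balanced demonstration selection, amounts to sampling an equal number of context examples from each sensitive group before handing them to the model. The sketch below is a minimal, hypothetical illustration of that idea (the function name, data layout, and `group_key` field are assumptions for illustration, not the authors' implementation):

```python
import random


def group_balanced_demonstrations(examples, group_key, k, seed=0):
    """Select ~k in-context demonstrations with equal counts per sensitive group.

    examples: list of dicts, each carrying a sensitive attribute under `group_key`.
    Returns k // (number of groups) examples sampled from each group.
    """
    # Bucket the candidate demonstrations by their sensitive-attribute value.
    by_group = {}
    for ex in examples:
        by_group.setdefault(ex[group_key], []).append(ex)

    groups = sorted(by_group)
    per_group = k // len(groups)
    rng = random.Random(seed)

    # Draw the same number of demonstrations from every group.
    selected = []
    for g in groups:
        pool = by_group[g]
        selected.extend(rng.sample(pool, min(per_group, len(pool))))
    return selected


# Toy pool: 20 examples in group "A" and 10 in group "B".
demos = [{"x": i, "group": "A" if i % 3 else "B"} for i in range(30)]
picked = group_balanced_demonstrations(demos, "group", 8)
```

With `k=8` and two groups, the call returns four demonstrations from each group, so the context set no longer mirrors the (imbalanced) group proportions of the candidate pool.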
