Diagnosing Model Performance Under Distribution Shift

2023-03-03

Tiffany Tianhui Cai, Hongseok Namkoong, Steve Yadlowsky

Abstract

Prediction models can perform poorly when deployed to target distributions different from the training distribution. To understand these operational failure modes, we develop a method, called DIstribution Shift DEcomposition (DISDE), to attribute a drop in performance to different types of distribution shifts. Our approach decomposes the performance drop into terms for 1) an increase in harder but frequently seen examples from training, 2) changes in the relationship between features and outcomes, and 3) poor performance on examples infrequent or unseen during training. These terms are defined by fixing a distribution on X while varying the conditional distribution of Y | X between training and target, or by fixing the conditional distribution of Y | X while varying the distribution on X. In order to do this, we define a hypothetical distribution on X consisting of values common in both training and target, over which it is easy to compare Y | X and thus predictive performance. We estimate performance on this hypothetical distribution via reweighting methods. Empirically, we show how our method can 1) inform potential modeling improvements across distribution shifts for employment prediction on tabular census data, and 2) help to explain why certain domain adaptation methods fail to improve model performance for satellite image classification.
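
The decomposition telescopes through the shared distribution on X: the total performance drop splits into three differences of reweighted average losses. Below is a minimal Python sketch of that bookkeeping, assuming per-example losses for one fixed model and importance weights toward the shared X distribution have already been estimated elsewhere (e.g., via a density-ratio or domain-classifier method). The function name and inputs are illustrative, not the authors' reference implementation.

```python
import numpy as np

def disde_sketch(loss_train, loss_target, w_train, w_target):
    """Sketch of a DISDE-style three-term decomposition.

    loss_train / loss_target: per-example losses of one fixed model,
        evaluated on training and target samples respectively.
    w_train / w_target: importance weights moving each sample's X
        toward a shared distribution of X common to both populations
        (assumed estimated elsewhere; hypothetical inputs).
    """
    perf_train = np.mean(loss_train)    # performance on the training distribution
    perf_target = np.mean(loss_target)  # performance on the target distribution

    # Performance on the shared X distribution, paired with the
    # training vs. target conditional distribution of Y | X.
    perf_shared_train_yx = np.average(loss_train, weights=w_train)
    perf_shared_target_yx = np.average(loss_target, weights=w_target)

    # 1) harder but frequently seen X's: train X's -> shared X's
    term_x_harder = perf_shared_train_yx - perf_train
    # 2) change in Y | X, evaluated on the shared X's
    term_yx_shift = perf_shared_target_yx - perf_shared_train_yx
    # 3) X's infrequent or unseen during training: shared X's -> target X's
    term_x_unseen = perf_target - perf_shared_target_yx

    # The three terms telescope to the total performance drop.
    total_drop = perf_target - perf_train
    assert np.isclose(term_x_harder + term_yx_shift + term_x_unseen, total_drop)
    return term_x_harder, term_yx_shift, term_x_unseen
```

Read this way, a large second term points to a change in Y | X on commonly seen examples, while a large third term points to covariate shift into regions rare or unseen in training, which reweighting-based domain adaptation cannot address.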
