Diagnostic Tool for Out-of-Sample Model Evaluation

2022-06-22

Ludvig Hult, Dave Zachariah, Petre Stoica


Abstract

Assessment of model fitness is a key part of machine learning. The standard paradigm is to learn models by minimizing a chosen loss function averaged over training data, with the aim of achieving small losses on future data. In this paper, we consider the use of a finite calibration data set to characterize the future, out-of-sample losses of a model. We propose a simple model diagnostic tool that provides finite-sample guarantees under weak assumptions. The tool is simple to compute and to interpret. Several numerical experiments are presented to show how the proposed method quantifies the impact of distribution shifts, aids the analysis of regression, and enables model selection as well as hyper-parameter tuning.
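The abstract's key claim is a distribution-free, finite-sample guarantee on future losses computed from a calibration set. As a minimal sketch of the general idea (an order-statistic, conformal-style quantile bound on held-out losses, not necessarily the authors' exact construction), one can bound the out-of-sample loss as follows; the exponential loss distribution below is purely illustrative:

```python
import numpy as np

def loss_quantile_bound(cal_losses, alpha=0.1):
    """Distribution-free upper bound on a fresh out-of-sample loss.

    Given n i.i.d. (exchangeable) calibration losses, the order
    statistic at rank k = ceil((n + 1) * (1 - alpha)) satisfies
    P(L_new <= bound) >= 1 - alpha, with no assumptions on the
    loss distribution beyond exchangeability.
    """
    losses = np.sort(np.asarray(cal_losses))
    n = losses.size
    k = int(np.ceil((n + 1) * (1 - alpha)))
    if k > n:
        return np.inf  # too few calibration points for this alpha
    return losses[k - 1]

# Illustration with synthetic losses (hypothetical data, not from the paper).
rng = np.random.default_rng(0)
cal = rng.exponential(size=500)            # calibration losses
bound = loss_quantile_bound(cal, alpha=0.1)
fresh = rng.exponential(size=100_000)      # future losses
coverage = np.mean(fresh <= bound)         # empirical coverage, near 1 - alpha
```

The rank `k = ceil((n + 1)(1 - alpha))` rather than `n (1 - alpha)` is what turns an empirical quantile into a valid finite-sample bound: it accounts for the fresh loss being exchangeable with the calibration losses.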
