SOTAVerified

Near-Optimal Algorithms for Omniprediction

2025-01-28

Princewill Okoroafor, Robert Kleinberg, Michael P. Kim


Abstract

Omnipredictors are simple prediction functions that encode loss-minimizing predictions with respect to a hypothesis class H, simultaneously for every loss function within a class of losses L. In this work, we give near-optimal learning algorithms for omniprediction, in both the online and offline settings. To begin, we give an oracle-efficient online learning algorithm that achieves (L,H)-omniprediction with Õ(√(T log |H|)) regret for any class of Lipschitz loss functions L ⊆ L_Lip. Quite surprisingly, this regret bound matches the optimal regret for minimization of a single loss function (up to a √(log T) factor). Given this online algorithm, we develop an online-to-offline conversion that achieves near-optimal complexity across a number of measures. In particular, for all bounded loss functions within the class of Bounded Variation losses L_BV (which include all convex, all Lipschitz, and all proper losses) and any (possibly infinite) H, we obtain an offline learning algorithm that, leveraging an (offline) ERM oracle and m samples from D, returns an efficient (L_BV, H, ε(m))-omnipredictor for ε(m) scaling near-linearly in the Rademacher complexity of Th ∘ H.
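To make the omniprediction guarantee concrete, here is a toy sketch (not the paper's algorithm; the function names are illustrative). The key property is that a single predictor p(x) ≈ E[y | x] over binary outcomes can be post-processed into a near-optimal action for each loss ℓ in L separately, by best-responding to the predicted probability:

```python
import numpy as np

def post_process(p, loss, actions):
    """Best response to a predicted probability p under a given loss:
    k_loss(p) = argmin_a  p * loss(a, 1) + (1 - p) * loss(a, 0)."""
    return min(actions, key=lambda a: p * loss(a, 1) + (1 - p) * loss(a, 0))

# Two different losses, both served by the SAME predicted probability p.
squared = lambda a, y: (a - y) ** 2   # proper loss: optimal action is a = p
absolute = lambda a, y: abs(a - y)    # optimal action is the majority outcome

actions = np.linspace(0, 1, 101)      # discretized action space [0, 1]
p = 0.7
a_sq = post_process(p, squared, actions)    # ≈ 0.7
a_abs = post_process(p, absolute, actions)  # 1.0
```

An omnipredictor is a predictor p for which this loss-specific post-processing is simultaneously competitive with the best hypothesis in H for every loss in L; the paper's contribution is learning such a p with near-optimal regret and sample complexity.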
