SOTAVerified

Parameter-free Regret in High Probability with Heavy Tails

2022-10-25

Jiujia Zhang, Ashok Cutkosky



Abstract

We present new algorithms for online convex optimization over unbounded domains that obtain parameter-free regret with high probability, given access only to potentially heavy-tailed subgradient estimates. Previous work in unbounded domains considers only in-expectation results for sub-exponential subgradients. Unlike in the bounded-domain case, we cannot rely on straightforward martingale concentration due to exponentially large iterates produced by the algorithm. We develop new regularization techniques to overcome these problems. Overall, with probability at most δ, for all comparators u our algorithm achieves regret Õ(‖u‖ T^{1/p} log(1/δ)) for subgradients with bounded p-th moments for some p ∈ (1, 2].
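The paper's parameter-free algorithm is not reproduced here. As a loose illustration of the heavy-tail difficulty the abstract mentions, the sketch below shows a standard remedy from the bounded-domain setting: online gradient descent with per-round gradient clipping, which truncates heavy-tailed subgradient estimates before the update. All names, parameters, and the learning-rate/clipping choices are our own illustrative assumptions, not the paper's method, which avoids such tuned parameters entirely.

```python
import numpy as np

def clipped_ogd(grad_fn, T, dim, lr=0.05, clip=1.0):
    """Online gradient descent with gradient clipping (illustrative sketch).

    grad_fn(w, t) returns a (possibly heavy-tailed) subgradient estimate at
    round t. Clipping its norm to `clip` bounds each update, a common way to
    tame heavy tails; note this introduces the tuned parameters `lr` and
    `clip` that a parameter-free method would not need.
    """
    w = np.zeros(dim)
    iterates = []
    for t in range(T):
        iterates.append(w.copy())
        g = np.asarray(grad_fn(w, t), dtype=float)
        norm = np.linalg.norm(g)
        if norm > clip:
            g = g * (clip / norm)  # truncate to the clipping radius
        w = w - lr * g  # standard OGD step on the clipped estimate
    return np.array(iterates)
```

For example, on the 1-d quadratic loss (w − 1)² with subgradient 2(w − 1), the clipped iterates move toward the minimizer at 1 despite the cap on each step.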
