Online Linear Regression in Dynamic Environments via Discounting
2024-05-29
Andrew Jacobsen, Ashok Cutkosky
Abstract
We develop algorithms for online linear regression which achieve optimal static and dynamic regret guarantees even in the complete absence of prior knowledge. We present a novel analysis showing that a discounted variant of the Vovk-Azoury-Warmuth forecaster achieves dynamic regret of the form R_T(u) ≤ O(d log(T) ∨ √(d P_T^γ(u) T)), where P_T^γ(u) is a measure of variability of the comparator sequence, and show that the discount factor achieving this result can be learned on-the-fly. We show that this result is optimal by providing a matching lower bound. We also extend our results to strongly-adaptive guarantees which hold over every sub-interval [a,b] ⊆ [1,T] simultaneously.
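To make the discounted forecaster concrete, here is a minimal sketch of a discounted Vovk-Azoury-Warmuth update in NumPy. This is an illustration under assumptions, not the paper's exact algorithm: the regularization parameter `lam`, the choice to discount the regularizer along with the statistics, and the fixed discount `gamma` (the paper learns it on-the-fly) are all assumptions made for the sketch.

```python
import numpy as np

def discounted_vaw(X, y, gamma=0.99, lam=1.0):
    """Sketch of a discounted Vovk-Azoury-Warmuth forecaster.

    gamma: discount factor in (0, 1]; gamma = 1 recovers standard VAW.
    lam: ridge-style regularization strength (assumed, for invertibility).
    Returns the sequence of one-step predictions y_hat_1, ..., y_hat_T.
    """
    d = X.shape[1]
    A = lam * np.eye(d)   # discounted second-moment matrix of past features
    b = np.zeros(d)       # discounted correlation of past features with labels
    preds = []
    for x_t, y_t in zip(X, y):
        # VAW-style prediction: the current x_t enters A before predicting,
        # while past statistics are shrunk by the discount gamma.
        A_t = gamma * A + np.outer(x_t, x_t)
        w_t = np.linalg.solve(A_t, gamma * b)
        preds.append(float(w_t @ x_t))
        # after the label y_t is revealed, fold it into the state
        A = A_t
        b = gamma * b + y_t * x_t
    return preds
```

With `gamma = 1` the recursion accumulates all past data and reduces to the familiar (regularized) VAW forecaster; smaller `gamma` forgets stale data geometrically, which is what lets the forecaster track a drifting comparator sequence.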