Nearly Optimal Best-of-Both-Worlds Algorithms for Online Learning with Feedback Graphs
Shinji Ito, Taira Tsuchiya, Junya Honda
Abstract
This study considers online learning with general directed feedback graphs. For this problem, we present best-of-both-worlds algorithms that achieve nearly tight regret bounds for adversarial environments as well as poly-logarithmic regret bounds for stochastic environments. As Alon et al. [2015] have shown, tight regret bounds depend on the structure of the feedback graph: strongly observable graphs yield a minimax regret of $\tilde{\Theta}(\alpha^{1/2} T^{1/2})$, while weakly observable graphs induce a minimax regret of $\tilde{\Theta}(\delta^{1/3} T^{2/3})$, where $\alpha$ and $\delta$, respectively, represent the independence number of the graph and the domination number of a certain portion of the graph. Our proposed algorithm for strongly observable graphs has a regret bound of $\tilde{O}(\alpha^{1/2} T^{1/2})$ for adversarial environments, as well as of $O(\alpha (\ln T)^3 / \Delta_{\min})$ for stochastic environments, where $\Delta_{\min}$ expresses the minimum suboptimality gap. This result resolves an open question raised by Erez and Koren [2021]. We also provide an algorithm for weakly observable graphs that achieves a regret bound of $\tilde{O}(\delta^{1/3} T^{2/3})$ for adversarial environments and poly-logarithmic regret for stochastic environments. The proposed algorithms are based on the follow-the-regularized-leader approach combined with newly designed update rules for learning rates.
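To make the algorithmic template concrete, the sketch below runs follow-the-regularized-leader with a negative-entropy regularizer (i.e., exponential weights) on a directed feedback graph: playing arm $i$ reveals the losses of all out-neighbors of $i$, and unobserved arms are handled by importance-weighted loss estimates. This is only an illustrative simplification under our own assumptions; in particular, the fixed learning rate `eta` and the function names are ours, whereas the paper's algorithms use newly designed adaptive learning-rate update rules.

```python
import numpy as np

def ftrl_feedback_graph(loss_fn, out_neighbors, K, T, eta=0.1, rng=None):
    """Illustrative FTRL (negative-entropy regularizer = exponential weights)
    on a directed feedback graph with K arms over T rounds.

    Playing arm i reveals the loss of every arm j in out_neighbors[i].
    This sketch uses a fixed learning rate eta, NOT the paper's adaptive
    learning-rate update rules. Returns the cumulative incurred loss.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    L = np.zeros(K)                       # cumulative importance-weighted loss estimates
    total_loss = 0.0
    for t in range(T):
        # FTRL with negative entropy: p_i proportional to exp(-eta * L_i)
        w = np.exp(-eta * (L - L.min()))  # shift by min for numerical stability
        p = w / w.sum()
        i = rng.choice(K, p=p)
        losses = loss_fn(t)               # environment's full loss vector this round
        total_loss += losses[i]
        for j in out_neighbors[i]:        # arms observed when playing i
            # q_j = probability that arm j would be observed this round
            q_j = sum(p[k] for k in range(K) if j in out_neighbors[k])
            L[j] += losses[j] / q_j       # unbiased importance-weighted estimate
    return total_loss
```

With self-loops only (`out_neighbors = {i: {i}}`), the graph reduces to the standard multi-armed bandit; denser graphs give more observations per round, which is what drives the improvement from $T$-dependence on the number of arms to dependence on the independence number $\alpha$.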