First- and Second-Order Bounds for Adversarial Linear Contextual Bandits
Abstract
We consider the adversarial linear contextual bandit setting, which allows the loss functions associated with each of K arms to change over time without restriction. Assuming the d-dimensional contexts are drawn from a fixed known distribution, the worst-case expected regret over the course of T rounds is known to scale as Õ(√(KdT)). Under the additional assumption that the density of the contexts is log-concave, we obtain a second-order bound of order Õ(K√(dV_T)) in terms of the cumulative second moment of the learner's losses V_T, and a closely related first-order bound of order Õ(K√(dL_T^*)) in terms of the cumulative loss of the best policy L_T^*. Since V_T or L_T^* may be significantly smaller than T, these improve over the worst-case regret whenever the environment is relatively benign. Our results are obtained using a truncated version of the continuous exponential weights algorithm over the probability simplex, which we analyse by exploiting a novel connection to the linear bandit setting without contexts.
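To give a concrete feel for the exponential weights idea underlying the algorithm, the following is a minimal sketch of discrete exponential weights with importance-weighted loss estimates for a K-armed adversarial bandit. This is an illustrative simplification only: the paper's method is a truncated, *continuous* exponential weights distribution over the probability simplex with contexts, and the environment, learning rate, and loss function below are hypothetical choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def exp_weights_bandit(loss_fn, K, T, eta=0.1):
    """Discrete exponential weights with unbiased importance-weighted
    loss estimates (Exp3-style). Sketch only; the paper analyses a
    truncated continuous variant over the simplex."""
    log_w = np.zeros(K)  # log-weights, kept in log space for stability
    total_loss = 0.0
    for t in range(T):
        p = np.exp(log_w - log_w.max())
        p /= p.sum()                     # sampling distribution on arms
        arm = rng.choice(K, p=p)
        loss = loss_fn(t, arm)           # observed loss in [0, 1]
        est = np.zeros(K)
        est[arm] = loss / p[arm]         # importance-weighted estimate
        log_w -= eta * est               # multiplicative-weights update
        total_loss += loss
    return total_loss

# Toy adversary-free environment: arm 0 always incurs the smallest loss,
# so the learner's average loss should approach 0.1.
total = exp_weights_bandit(lambda t, a: 0.1 if a == 0 else 0.9, K=3, T=2000)
```

In a benign environment like this toy one, the cumulative loss of the best policy L_T^* (here 0.1·T) is far below T, which is exactly the regime where the paper's first-order bound improves on the worst-case √T rate.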