
Combinatorial Logistic Bandits

2024-10-22

Xutong Liu, Xiangxiang Dai, Xuchuang Wang, Mohammad Hajiesmaili, John C. S. Lui


Abstract

We introduce a novel framework called combinatorial logistic bandits (CLogB), where in each round, a subset of base arms (called the super arm) is selected, with the outcome of each base arm being binary and its expectation following a logistic parametric model. The feedback is governed by a general arm triggering process. Our study covers CLogB with reward functions satisfying two smoothness conditions, capturing application scenarios such as online content delivery, online learning to rank, and dynamic channel allocation. We first propose a simple yet efficient algorithm, CLogUCB, utilizing a variance-agnostic exploration bonus. Under the 1-norm triggering probability modulated (TPM) smoothness condition, CLogUCB achieves a regret bound of Õ(d√(κKT)), where Õ ignores logarithmic factors, d is the dimension of the feature vector, κ represents the nonlinearity of the logistic model, and K is the maximum number of base arms a super arm can trigger. This result improves on prior work by a factor of Õ(√κ). We then enhance CLogUCB with a variance-adaptive version, VA-CLogUCB, which attains a regret bound of Õ(d√(KT)) under the same 1-norm TPM condition, improving by another Õ(√κ) factor. VA-CLogUCB shows even greater promise under the stronger triggering probability and variance modulated (TPVM) condition, achieving a leading Õ(d√T) regret, thus removing the additional dependency on the action-size K. Furthermore, we improve the computational efficiency of VA-CLogUCB by eliminating the nonconvex optimization process when the context feature map is time-invariant while maintaining the tight Õ(d√T) regret. Finally, experiments on synthetic and real-world datasets demonstrate the superior performance of our algorithms compared to benchmark algorithms.
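The core mechanism the abstract describes — a UCB-style index that adds a variance-agnostic exploration bonus to a logistic mean estimate, then selects a super arm of base arms — can be sketched as follows. This is an illustrative sketch only, not the paper's CLogUCB: the helper names (`logistic_mle`, `ucb_scores`), the plain gradient-descent MLE, the bonus scale `beta`, and the greedy top-K super-arm selection are all simplifying assumptions made here for illustration.

```python
import numpy as np

def sigmoid(z):
    # Logistic link: expected binary outcome of a base arm with score z.
    return 1.0 / (1.0 + np.exp(-z))

def logistic_mle(X, y, lam=1.0, iters=200, lr=0.5):
    # Ridge-regularized logistic MLE via gradient descent (illustrative;
    # the paper's estimator and optimization procedure may differ).
    theta = np.zeros(X.shape[1])
    for _ in range(iters):
        grad = X.T @ (sigmoid(X @ theta) - y) + lam * theta
        theta -= lr * grad / len(y)
    return theta

def ucb_scores(features, theta, V_inv, beta):
    # Variance-agnostic index: logistic mean estimate plus an elliptical
    # bonus beta * ||x||_{V^{-1}}, clipped to the valid probability range.
    means = sigmoid(features @ theta)
    bonuses = beta * np.sqrt(np.einsum("ij,jk,ik->i", features, V_inv, features))
    return means, np.clip(means + bonuses, 0.0, 1.0)

# Toy usage: fit on past base-arm outcomes, then pick a super arm of the
# K base arms with the highest optimistic index.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))                       # base-arm feature vectors
y = (sigmoid(X @ np.ones(5)) > rng.random(20)).astype(float)  # binary outcomes
theta_hat = logistic_mle(X, y)
V_inv = np.linalg.inv(X.T @ X + np.eye(5))         # regularized design inverse
means, scores = ucb_scores(X, theta_hat, V_inv, beta=1.0)
super_arm = np.argsort(-scores)[:3]                # greedy top-K selection
```

A variance-adaptive variant in the spirit of VA-CLogUCB would instead weight each observation by the estimated Bernoulli variance when building the design matrix, which is what removes the √κ factor from the bound.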
