Linear Contextual Bandits with Hybrid Payoff: Revisited
Nirjhar Das, Gaurav Sinha
Code: github.com/nirjhar-das/hypay_bandits
Abstract
We study the Linear Contextual Bandit problem in the hybrid reward setting. In this setting, every arm's reward model contains arm-specific parameters in addition to parameters shared across the reward models of all arms. This setting reduces to two closely related settings: (a) Shared, with no arm-specific parameters, and (b) Disjoint, with only arm-specific parameters, enabling the application of two popular state-of-the-art algorithms, LinUCB and DisLinUCB (Algorithm 1 in Li et al. 2010). When the arm features are stochastic and satisfy a popular diversity condition, we provide new regret analyses for both algorithms that significantly improve on their known regret guarantees. Our novel analysis critically exploits the hybrid reward structure and the diversity condition. Moreover, we introduce a new algorithm, HyLinUCB, that crucially modifies LinUCB (using a new exploration coefficient) to account for sparsity in the hybrid setting. Under the same diversity assumptions, we prove that HyLinUCB also incurs only O(√T) regret over T rounds. We perform extensive experiments on synthetic and real-world datasets demonstrating the strong empirical performance of HyLinUCB. When the number of arm-specific parameters is much larger than the number of shared parameters, we observe that DisLinUCB incurs the lowest regret; in this case, the regret of HyLinUCB is second best and highly competitive with DisLinUCB. In all other situations, including on our real-world dataset, HyLinUCB has significantly lower regret than LinUCB, DisLinUCB, and the other SOTA baselines we considered. We also empirically observe that the regret of HyLinUCB grows much more slowly with the number of arms than that of the baselines, making it suitable even for very large action spaces.
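To make the hybrid setting concrete, here is a minimal, hypothetical sketch (not the paper's implementation): each arm's expected reward is z·θ + x_a·β_a, with θ shared and β_a arm-specific. Stacking the shared features with a one-hot block of arm-specific features reduces the hybrid model to a single linear ("Shared"-setting) model, on which a standard LinUCB loop can run. All dimensions, noise levels, and the exploration coefficient `alpha` below are illustrative choices; HyLinUCB's modified exploration coefficient is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
K, d_shared, d_arm, T = 5, 3, 2, 2000     # illustrative sizes
theta = rng.normal(size=d_shared)          # shared parameters
beta = rng.normal(size=(K, d_arm))         # arm-specific parameters

def reduced_features(z, x, arm):
    """Embed (shared, arm-specific) features into one long vector so the
    hybrid model becomes an ordinary linear model."""
    phi = np.zeros(d_shared + K * d_arm)
    phi[:d_shared] = z
    start = d_shared + arm * d_arm
    phi[start:start + d_arm] = x
    return phi

d = d_shared + K * d_arm
theta_full = np.concatenate([theta, beta.ravel()])  # ground truth in reduced space

# Standard LinUCB (ridge regression + UCB bonus) on the reduced features.
A = np.eye(d)        # regularized Gram matrix
b = np.zeros(d)
alpha = 1.0          # exploration coefficient (HyLinUCB would modify this)
regret = 0.0
for t in range(T):
    z = rng.normal(size=d_shared)
    x = rng.normal(size=(K, d_arm))        # stochastic arm features
    phis = [reduced_features(z, x[a], a) for a in range(K)]
    A_inv = np.linalg.inv(A)
    theta_hat = A_inv @ b
    ucbs = [p @ theta_hat + alpha * np.sqrt(p @ A_inv @ p) for p in phis]
    a = int(np.argmax(ucbs))
    means = [p @ theta_full for p in phis]
    r = means[a] + 0.1 * rng.normal()      # noisy observed reward
    regret += max(means) - means[a]
    A += np.outer(phis[a], phis[a])
    b += r * phis[a]
```

The reduction is exact: for any arm a, `reduced_features(z, x, a) @ theta_full` equals `z @ theta + x @ beta[a]`, which is why analyses for the Shared setting transfer to the hybrid one.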