Multi-Agent Multi-Armed Bandits with Limited Communication
Mridul Agarwal, Vaneet Aggarwal, Kamyar Azizzadenesheli
Abstract
We consider the problem where N agents collaboratively interact with an instance of a stochastic K-armed bandit problem, with K > N. The agents aim to simultaneously minimize the cumulative regret over all agents for a total of T time steps, the number of communication rounds, and the number of bits in each communication round. We present Limited Communication Collaboration - Upper Confidence Bound (LCC-UCB), a doubling-epoch-based algorithm where each agent communicates only at the end of an epoch and shares the index of the best arm it knows. With LCC-UCB, each agent enjoys a regret of Õ(√((K/N + N)T)), communicates for O(log T) steps, and broadcasts O(log K) bits in each communication step. We extend the work to sparse graphs with maximum degree K_G and diameter D, and propose LCC-UCB-GRAPH, which enjoys a regret bound of Õ(D√((K/N + K_G)DT)). Finally, we empirically show that the LCC-UCB and LCC-UCB-GRAPH algorithms perform well and outperform strategies that communicate through a central node.
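To make the epoch-based protocol concrete, the sketch below simulates an LCC-UCB-style scheme in plain Python. It is an illustrative approximation under stated assumptions, not the paper's exact algorithm: the arm partition, Bernoulli rewards, per-epoch statistic resets, and the `lcc_ucb_sketch` function name are all choices made here for illustration. It shows the two properties the abstract highlights: agents communicate only O(log T) times (one round per doubling epoch), and each message is just a single arm index, i.e. O(log K) bits.

```python
import math
import random

def lcc_ucb_sketch(K=16, N=4, T=2000, seed=0):
    """Schematic simulation of an LCC-UCB-style protocol (a sketch, not the
    paper's exact algorithm). The K arms are split across N agents; each agent
    runs UCB1 on its own subset plus the best-arm indices shared in the previous
    communication round, and at the end of each doubling epoch broadcasts only
    the index of its best arm (an O(log K)-bit message)."""
    rng = random.Random(seed)
    means = [rng.random() for _ in range(K)]            # hypothetical Bernoulli arm means
    subsets = [list(range(i, K, N)) for i in range(N)]  # partition arms across agents
    shared = []                     # best-arm indices broadcast after the previous epoch
    comm_rounds, total_reward, t = 0, 0.0, 0
    epoch_len = N                   # epoch lengths double: N, 2N, 4N, ...
    while t < T:
        steps = min(epoch_len, T - t)
        best_per_agent = []
        for a in range(N):          # agents act in parallel; simulated serially here
            active = sorted(set(subsets[a]) | set(shared))
            counts = {i: 0 for i in active}
            sums = {i: 0.0 for i in active}
            for s in range(steps):
                # Sample each active arm once, then follow the UCB1 index.
                arm = next((i for i in active if counts[i] == 0), None)
                if arm is None:
                    arm = max(active, key=lambda i: sums[i] / counts[i]
                              + math.sqrt(2 * math.log(s + 1) / counts[i]))
                reward = 1.0 if rng.random() < means[arm] else 0.0
                counts[arm] += 1
                sums[arm] += reward
                total_reward += reward
            best_per_agent.append(max(active, key=lambda i: sums[i] / max(counts[i], 1)))
        shared = best_per_agent     # one broadcast of a single arm index per agent
        comm_rounds += 1
        t += steps
        epoch_len *= 2
    return comm_rounds, total_reward
```

Because epoch lengths double, the number of communication rounds grows only logarithmically in T; running `lcc_ucb_sketch()` with the defaults uses far fewer than T broadcast rounds while the good arms still propagate to every agent through the shared indices.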