On Distributed Cooperative Decision-Making in Multiarmed Bandits

2015-12-21

Peter Landgren, Vaibhav Srivastava, Naomi Ehrich Leonard

Abstract

We study the explore-exploit tradeoff in distributed cooperative decision-making in the context of the multiarmed bandit (MAB) problem. For the distributed cooperative MAB problem, we design the cooperative UCB algorithm, which comprises two interleaved distributed processes: (i) running consensus algorithms for estimation of rewards, and (ii) upper-confidence-bound-based heuristics for selection of arms. We rigorously analyze the performance of the cooperative UCB algorithm and characterize the influence of the communication graph structure on the decision-making performance of the group.
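To make the two interleaved processes concrete, the following is a minimal simulation sketch of the general idea, not the paper's exact algorithm or constants: each agent runs a standard UCB index on its local statistics, and the agents then mix their cumulative reward and count statistics with neighbors through a doubly stochastic consensus matrix `P`. The function name, the noise model, and the confidence-bound constant are all illustrative assumptions.

```python
import numpy as np

def cooperative_ucb(P, arm_means, horizon, rng):
    """Illustrative cooperative UCB sketch (not the paper's exact scheme).

    P          : doubly stochastic consensus matrix over the agent graph
    arm_means  : true mean reward of each arm (Gaussian noise is assumed)
    horizon    : number of decision rounds
    Returns the per-agent, per-round average reward collected.
    """
    num_agents = P.shape[0]
    num_arms = len(arm_means)
    # Consensus estimates held by each agent:
    # s[k, i] ~ total reward observed for arm i, n[k, i] ~ number of pulls.
    s = np.zeros((num_agents, num_arms))
    n = np.zeros((num_agents, num_arms))
    total_reward = 0.0
    for t in range(1, horizon + 1):
        xi = np.zeros((num_agents, num_arms))   # this round's rewards
        eta = np.zeros((num_agents, num_arms))  # this round's pull counts
        for k in range(num_agents):
            if t <= num_arms:
                # Staggered initialization: every arm gets sampled early.
                arm = (t - 1 + k) % num_arms
            else:
                # UCB index on the agent's consensus statistics.
                counts = np.maximum(n[k], 1e-12)
                mu_hat = s[k] / counts
                ucb = mu_hat + np.sqrt(2.0 * np.log(t) / counts)
                arm = int(np.argmax(ucb))
            reward = rng.normal(arm_means[arm], 0.1)
            xi[k, arm] = reward
            eta[k, arm] = 1.0
            total_reward += reward
        # Running-consensus step: fold in new observations, then mix
        # statistics with neighbors via the consensus matrix P.
        s = P @ (s + xi)
        n = P @ (n + eta)
    return total_reward / (num_agents * horizon)
```

For example, on a 4-agent ring graph with a doubly stochastic mixing matrix (0.5 self-weight, 0.25 per neighbor), the average collected reward approaches the best arm's mean as the horizon grows, illustrating how neighbors' observations accelerate each agent's learning.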
