
Differentially Private Multi-Armed Bandits in the Shuffle Model

2021-06-05 · NeurIPS 2021

Jay Tenenbaum, Haim Kaplan, Yishay Mansour, Uri Stemmer

Abstract

We give an $(\varepsilon,\delta)$-differentially private algorithm for the multi-armed bandit (MAB) problem in the shuffle model with a distribution-dependent regret of $O\!\left(\left(\sum_{a\in[k]:\Delta_a>0}\frac{\log T}{\Delta_a}\right)+\frac{k\sqrt{\log\frac{1}{\delta}}\log T}{\varepsilon}\right)$, and a distribution-independent regret of $O\!\left(\sqrt{kT\log T}+\frac{k\sqrt{\log\frac{1}{\delta}}\log T}{\varepsilon}\right)$, where $T$ is the number of rounds, $\Delta_a$ is the suboptimality gap of arm $a$, and $k$ is the total number of arms. Our upper bound almost matches the regret of the best known algorithms for the centralized model, and significantly outperforms the best known algorithm in the local model.
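To get a feel for the two bounds, they can be evaluated numerically up to the constants hidden in the $O(\cdot)$ notation. The sketch below is purely illustrative: the function names and the choice to drop constants are my own, and this is not the paper's algorithm, only its stated regret expressions.

```python
import math

def distribution_dependent_bound(gaps, T, eps, delta):
    """Evaluate the distribution-dependent regret bound, ignoring constants.

    gaps: list of suboptimality gaps Delta_a, one per arm (zeros are skipped,
          matching the condition Delta_a > 0 in the sum).
    """
    k = len(gaps)
    gap_term = sum(math.log(T) / d for d in gaps if d > 0)
    privacy_term = k * math.sqrt(math.log(1 / delta)) * math.log(T) / eps
    return gap_term + privacy_term

def distribution_independent_bound(k, T, eps, delta):
    """Evaluate the distribution-independent regret bound, ignoring constants."""
    return (math.sqrt(k * T * math.log(T))
            + k * math.sqrt(math.log(1 / delta)) * math.log(T) / eps)

# Example: 3 arms (one optimal, so its gap is 0), T = 10^5 rounds,
# privacy parameters eps = 1, delta = 10^-6.
dep = distribution_dependent_bound([0.0, 0.1, 0.2], T=10**5, eps=1.0, delta=1e-6)
indep = distribution_independent_bound(k=3, T=10**5, eps=1.0, delta=1e-6)
print(dep, indep)
```

Note that both bounds share the same additive privacy term $k\sqrt{\log\frac{1}{\delta}}\log T/\varepsilon$; only the non-private part of the regret differs between the two regimes.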
