Locally Differentially Private (Contextual) Bandits Learning

2020-06-01 · NeurIPS 2020 · Code Available

Kai Zheng, Tianle Cai, Weiran Huang, Zhenguo Li, Liwei Wang

Abstract

We study locally differentially private (LDP) bandits learning in this paper. First, we propose simple black-box reduction frameworks that can solve a large family of context-free bandits learning problems with an LDP guarantee. Based on our frameworks, we improve the previous best results for private bandits learning with one-point feedback, such as private Bandits Convex Optimization (BCO), and obtain the first result for BCO with multi-point feedback under LDP. The LDP guarantee and black-box nature make our frameworks more attractive in real applications than previous specifically designed and relatively weaker differentially private (DP) context-free bandits algorithms. Further, we extend our (ε, δ)-LDP algorithm to Generalized Linear Bandits, which enjoys a sub-linear regret Õ(T^{3/4}/ε) and is conjectured to be nearly optimal. Note that given the existing Ω(T) lower bound for DP contextual linear bandits (Shariff & Sheffet, 2018), our result shows a fundamental difference between LDP and DP contextual bandits learning.
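The black-box reduction described above, privatizing each one-point feedback on the user side before handing it to any off-the-shelf context-free bandit learner, can be illustrated with a short sketch. This is a minimal illustration under assumptions, not the paper's algorithm: the `EpsilonGreedy` learner, the Gaussian-mechanism calibration, and the synthetic loss model below are all hypothetical stand-ins for the demo.

```python
import numpy as np

class EpsilonGreedy:
    """Minimal context-free bandit used as the black box.

    A stand-in for any no-regret learner; the reduction never opens this box,
    it only feeds it (noisy) feedback through select()/update().
    """
    def __init__(self, n_arms, explore=0.1, rng=None):
        self.rng = rng or np.random.default_rng()
        self.explore = explore
        self.counts = np.zeros(n_arms)
        self.means = np.zeros(n_arms)  # running means of observed (noisy) losses

    def select(self):
        if self.rng.random() < self.explore:
            return int(self.rng.integers(len(self.means)))
        return int(np.argmin(self.means))  # exploit lowest estimated loss

    def update(self, arm, loss):
        self.counts[arm] += 1
        self.means[arm] += (loss - self.means[arm]) / self.counts[arm]

def gaussian_ldp(value, eps, delta, sensitivity=1.0, rng=None):
    """User-side Gaussian mechanism on a single bounded report.

    sigma = sensitivity * sqrt(2 ln(1.25/delta)) / eps gives (eps, delta)-DP
    for one report; applied locally by each user, it yields (eps, delta)-LDP,
    so the server never observes raw feedback.
    """
    rng = rng or np.random.default_rng()
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    return value + rng.normal(0.0, sigma)

rng = np.random.default_rng(0)
true_means = np.array([0.3, 0.5, 0.7])  # per-arm expected losses in [0, 1]
algo = EpsilonGreedy(n_arms=3, rng=rng)

for t in range(10_000):
    arm = algo.select()                                             # server picks an action
    raw_loss = np.clip(true_means[arm] + 0.1 * rng.normal(), 0, 1)  # user-side observation
    algo.update(arm, gaussian_ldp(raw_loss, eps=1.0, delta=1e-5, rng=rng))

print("estimated loss means:", algo.means)  # arm 0 should emerge as best
```

Note that with ε = 1 the per-report noise (σ ≈ 4.8) dwarfs the [0, 1] loss range, so the learner can only recover the arm ordering by averaging over many rounds; this is the intuition for why LDP regret bounds degrade with 1/ε.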
