SOTAVerified

Locally Differentially Private Reinforcement Learning for Linear Mixture Markov Decision Processes

2021-10-19

Chonghua Liao, Jiafan He, Quanquan Gu


Abstract

Reinforcement learning (RL) algorithms can be used to provide personalized services, which rely on users' private and sensitive data. To protect users' privacy, privacy-preserving RL algorithms are in demand. In this paper, we study RL with linear function approximation under local differential privacy (LDP) guarantees. We propose a novel (ε, δ)-LDP algorithm for learning a class of Markov decision processes (MDPs) dubbed linear mixture MDPs, which obtains an Õ(d^(5/4) H^(7/4) T^(3/4) (log(1/δ))^(1/4) √(1/ε)) regret, where d is the dimension of the feature mapping, H is the length of the planning horizon, and T is the number of interactions with the environment. We also prove a lower bound of Ω(dH√T / (e^ε (e^ε − 1))) for learning linear mixture MDPs under the ε-LDP constraint. Experiments on synthetic datasets verify the effectiveness of our algorithm. To the best of our knowledge, this is the first provable privacy-preserving RL algorithm with linear function approximation.
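Under LDP, each user perturbs their own data before it ever reaches the learner, so the server only sees noisy statistics. The sketch below illustrates this local-privatization idea with the standard Gaussian mechanism; the function name `privatize` and the choice of which statistics are perturbed are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def privatize(local_stats, eps, delta, bound):
    """Perturb a user's local statistics with the Gaussian mechanism.

    `bound` is an a-priori l2-bound on how much `local_stats` can change
    when one user's data changes (its sensitivity). The noise scale below
    is the standard calibration giving an (eps, delta)-DP guarantee.
    """
    sigma = bound * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    return local_stats + np.random.normal(0.0, sigma, size=local_stats.shape)

# Illustrative use: a user ships only the noisy version of a local
# d-dimensional statistic (e.g. a feature vector) to the server, which
# aggregates many such noisy reports for its value estimates.
noisy = privatize(np.ones(4), eps=1.0, delta=1e-3, bound=1.0)
```

Because the noise is injected before aggregation, the server's estimation error grows with the number of interactions, which is one intuition for why the private regret scales as T^(3/4) rather than the non-private √T.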
