
Near-Minimax-Optimal Risk-Sensitive Reinforcement Learning with CVaR

2023-02-07

Kaiwen Wang, Nathan Kallus, Wen Sun

Abstract

In this paper, we study risk-sensitive Reinforcement Learning (RL), focusing on the objective of Conditional Value at Risk (CVaR) with risk tolerance $\tau$. Starting with multi-arm bandits (MABs), we show the minimax CVaR regret rate is $\Omega(\sqrt{\tau^{-1}AK})$, where $A$ is the number of actions and $K$ is the number of episodes, and that it is achieved by an Upper Confidence Bound algorithm with a novel Bernstein bonus. For online RL in tabular Markov Decision Processes (MDPs), we show a minimax regret lower bound of $\Omega(\sqrt{\tau^{-1}SAK})$ (with normalized cumulative rewards), where $S$ is the number of states, and we propose a novel bonus-driven Value Iteration procedure. We show that our algorithm achieves the optimal regret of $\widetilde{O}(\sqrt{\tau^{-1}SAK})$ under a continuity assumption and in general attains a near-optimal regret of $\widetilde{O}(\tau^{-1}\sqrt{SAK})$, which is minimax-optimal for constant $\tau$. This improves on the best available bounds. By discretizing rewards appropriately, our algorithms are computationally efficient.
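To make the objective concrete, the sketch below computes the empirical CVaR at level $\tau$ (the mean of the worst $\tau$-fraction of observed rewards) and runs a toy CVaR-UCB loop on a multi-arm bandit. This is a hedged illustration, not the paper's algorithm: the exploration bonus here is a simple Hoeffding-style term inflated by $1/\tau$, standing in for the paper's Bernstein bonus, and all function names (`empirical_cvar`, `cvar_ucb`) are ours.

```python
import numpy as np

def empirical_cvar(samples, tau):
    """Empirical CVaR_tau: mean of the worst (lowest) tau-fraction of samples."""
    x = np.sort(np.asarray(samples, dtype=float))
    k = max(1, int(np.ceil(tau * len(x))))  # number of lower-tail samples
    return x[:k].mean()

def cvar_ucb(reward_fns, tau, K, bonus_scale=1.0, seed=0):
    """Toy CVaR-UCB for a multi-arm bandit (illustrative only).

    reward_fns: list of callables rng -> reward in [0, 1], one per arm.
    Pulls each arm once, then repeatedly pulls the arm maximizing
    empirical CVaR plus a Hoeffding-style bonus scaled by 1/tau
    (a simplified stand-in for the paper's Bernstein bonus).
    Returns the pull count of each arm after K total pulls.
    """
    rng = np.random.default_rng(seed)
    A = len(reward_fns)
    history = [[] for _ in range(A)]
    for a in range(A):                      # initialization: one pull per arm
        history[a].append(reward_fns[a](rng))
    for _ in range(A, K):
        ucb = [
            empirical_cvar(h, tau)
            + bonus_scale * np.sqrt(np.log(K) / (tau * len(h)))
            for h in history
        ]
        a = int(np.argmax(ucb))             # optimistic arm selection
        history[a].append(reward_fns[a](rng))
    return [len(h) for h in history]
```

For example, with $\tau = 0.2$, an arm drawing rewards from $U(0.4, 0.6)$ has a much higher lower-tail CVaR than one drawing from $U(0, 1)$, even though both have mean $0.5$, so a CVaR-sensitive learner should concentrate its pulls on the former.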
