Balancing Value Underestimation and Overestimation with Realistic Actor-Critic

2021-10-19 · Code Available

Sicen Li, Qinyun Tang, Yiming Pang, Xinmeng Ma, Gang Wang

Abstract

Model-free deep reinforcement learning (RL) has been successfully applied to challenging continuous control domains. However, poor sample efficiency prevents these methods from being widely adopted in real-world domains. This paper introduces a novel model-free algorithm, Realistic Actor-Critic (RAC), which can be combined with any off-policy RL algorithm to improve sample efficiency. RAC employs Universal Value Function Approximators (UVFA) to simultaneously learn, within a single neural network, a family of policies, each with a different trade-off between underestimation and overestimation. To learn such policies, we introduce uncertainty-punished Q-learning, which uses the uncertainty from an ensemble of critics to build a range of confidence bounds of the Q-function. We evaluate RAC on the MuJoCo benchmark, achieving a 10x gain in sample efficiency and a 25% performance improvement over SAC on the most challenging Humanoid environment.
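The confidence-bound construction the abstract describes can be sketched as follows: each critic in the ensemble produces a Q-estimate, and the target is the ensemble mean minus a weight times the ensemble standard deviation, so sweeping the weight moves smoothly from overestimation toward underestimation. This is a minimal illustrative sketch, not the paper's implementation; the function name and the parameter `beta` are assumptions made here for clarity.

```python
import numpy as np

def confidence_bound_target(q_values: np.ndarray, beta: float) -> np.ndarray:
    """Build a confidence bound of the Q-function from an ensemble of critics.

    q_values: shape (n_critics, batch_size), one row per critic's estimate.
    beta: hypothetical trade-off weight; beta ~ 0 keeps the optimistic
          ensemble mean, larger beta punishes uncertainty and yields a
          more pessimistic (underestimating) target.
    """
    mean = q_values.mean(axis=0)  # ensemble mean Q-estimate
    std = q_values.std(axis=0)    # ensemble disagreement as uncertainty
    return mean - beta * std

# Two critics disagree on the first state-action pair, agree on the second.
ensemble = np.array([[1.0, 2.0],
                     [3.0, 2.0]])
for beta in (0.0, 0.5, 1.0):
    print(beta, confidence_bound_target(ensemble, beta))
```

In a UVFA-style setup, the policy and critics would additionally be conditioned on `beta`, so one network represents the whole family of trade-offs; uncertain pairs (high ensemble std) are discounted more as `beta` grows, while pairs the critics agree on are unaffected.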
