SOTAVerified

Mean Actor-Critic

2017-09-01 · Code Available

Cameron Allen, Kavosh Asadi, Melrose Roderick, Abdel-rahman Mohamed, George Konidaris, Michael Littman


Abstract

We propose a new algorithm, Mean Actor-Critic (MAC), for discrete-action continuous-state reinforcement learning. MAC is a policy gradient algorithm that uses the agent's explicit representation of all action values to estimate the gradient of the policy, rather than using only the actions that were actually executed. We prove that this approach reduces variance in the policy gradient estimate relative to traditional actor-critic methods. We show empirical results on two control domains and on six Atari games, where MAC is competitive with state-of-the-art policy search algorithms.
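To make the core idea of the abstract concrete, here is a minimal sketch (not the authors' code) of the MAC gradient estimate for a linear-softmax policy at a single state: instead of using only the sampled action's score function, the estimator sums the gradient of every action's probability weighted by that action's estimated Q-value. The function names and the linear featurization are illustrative assumptions.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over action logits.
    z = logits - logits.max()
    p = np.exp(z)
    return p / p.sum()

def mac_policy_gradient(theta, state_features, q_values):
    """Sketch of the MAC-style estimator at one state:
        sum over all actions a of  grad pi(a|s; theta) * Q(s, a)
    for a linear-softmax policy with one weight row per action.
    (Illustrative only; the paper's implementation details differ.)
    """
    logits = theta @ state_features           # shape: (num_actions,)
    pi = softmax(logits)
    num_actions = len(pi)
    grad = np.zeros_like(theta)
    for a in range(num_actions):
        # Gradient of pi(a|s) w.r.t. the logits: pi(a) * (e_a - pi)
        dpi_dlogits = pi[a] * (np.eye(num_actions)[a] - pi)
        # Chain rule through the linear logits, weighted by Q(s, a).
        grad += np.outer(dpi_dlogits, state_features) * q_values[a]
    return grad
```

A sampled-action actor-critic would instead use only the executed action's term; summing over all actions removes the sampling of the action from the estimator, which is the source of the variance reduction the abstract claims.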

Tasks

Benchmark Results

Dataset                    Model  Metric  Claimed  Verified  Status
Atari 2600 Beam Rider      MAC    Score   6,072       —      Unverified
Atari 2600 Breakout        MAC    Score   372.7       —      Unverified
Atari 2600 Pong            MAC    Score   10.6        —      Unverified
Atari 2600 Q*Bert          MAC    Score   243.4       —      Unverified
Atari 2600 Seaquest        MAC    Score   1,703.4     —      Unverified
Atari 2600 Space Invaders  MAC    Score   1,173.1     —      Unverified

Reproductions