
Decision Transformer: Reinforcement Learning via Sequence Modeling

2021-06-02 · NeurIPS 2021 · Code Available

Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, Igor Mordatch

Abstract

We introduce a framework that abstracts Reinforcement Learning (RL) as a sequence modeling problem. This allows us to draw upon the simplicity and scalability of the Transformer architecture, and associated advances in language modeling such as GPT-x and BERT. In particular, we present Decision Transformer, an architecture that casts the problem of RL as conditional sequence modeling. Unlike prior approaches to RL that fit value functions or compute policy gradients, Decision Transformer simply outputs the optimal actions by leveraging a causally masked Transformer. By conditioning an autoregressive model on the desired return (reward), past states, and actions, our Decision Transformer model can generate future actions that achieve the desired return. Despite its simplicity, Decision Transformer matches or exceeds the performance of state-of-the-art model-free offline RL baselines on Atari, OpenAI Gym, and Key-to-Door tasks.
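
The abstract's core recipe, trajectories re-encoded as (return-to-go, state, action) tokens fed through a causally masked Transformer trained to predict actions, fits in a few lines. Below is a minimal PyTorch sketch of that idea; it is not the authors' reference implementation, and the class, dimension, and layer choices are illustrative assumptions.

```python
# Minimal Decision Transformer sketch in PyTorch. Illustrative only:
# names, sizes, and layer choices are assumptions, not the authors' code.
import torch
import torch.nn as nn

class DecisionTransformer(nn.Module):
    def __init__(self, state_dim, act_dim, d_model=128, n_layers=3,
                 n_heads=1, max_timestep=4096):
        super().__init__()
        # One embedding per modality: return-to-go, state, action.
        self.embed_rtg = nn.Linear(1, d_model)
        self.embed_state = nn.Linear(state_dim, d_model)
        self.embed_action = nn.Linear(act_dim, d_model)
        self.embed_timestep = nn.Embedding(max_timestep, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, n_layers)
        self.predict_action = nn.Linear(d_model, act_dim)

    def forward(self, rtg, states, actions, timesteps):
        # rtg: (B, T, 1), states: (B, T, state_dim),
        # actions: (B, T, act_dim), timesteps: (B, T) long
        B, T = states.shape[:2]
        t_emb = self.embed_timestep(timesteps)
        # Interleave tokens per step as (R_t, s_t, a_t), the DT token order.
        tokens = torch.stack([
            self.embed_rtg(rtg) + t_emb,
            self.embed_state(states) + t_emb,
            self.embed_action(actions) + t_emb,
        ], dim=2).reshape(B, 3 * T, -1)
        # Causal mask: each token attends only to earlier tokens.
        causal = nn.Transformer.generate_square_subsequent_mask(3 * T)
        h = self.transformer(tokens, mask=causal)
        # Read the action prediction off each state token (indices 1, 4, 7, ...).
        return self.predict_action(h[:, 1::3])
```

Training is plain supervised learning on offline trajectories: mean-squared error between predicted and logged actions for continuous control, or cross-entropy over action logits for discrete domains such as Atari.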

Tasks

Offline Reinforcement Learning · Atari 2600 · OpenAI Gym · Key-to-Door

Benchmark Results

Dataset              Model  Metric  Claimed  Verified  Status
Atari 2600 Breakout  DT     Score   267.5    —         Unverified
Atari 2600 Pong      DT     Score   17.1     —         Unverified
Atari 2600 Q*Bert    DT     Score   25.1     —         Unverified
Atari 2600 Seaquest  DT     Score   2.4      —         Unverified
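
The claimed scores come from the evaluation protocol the abstract describes: roll out the model while conditioning on a desired target return, decrementing the return-to-go by each observed reward. The sketch below shows one way such a rollout could look; it assumes a Gymnasium-style environment, a continuous action space, and the hypothetical DecisionTransformer class sketched above, so every name in it is an illustrative assumption.

```python
# Hedged sketch of a DT evaluation rollout. Assumes a Gymnasium-style env,
# a continuous action space, and the DecisionTransformer sketched above;
# every name here is illustrative.
import torch

def evaluate(model, env, target_return, act_dim, context_len=20, max_steps=1000):
    state, _ = env.reset()
    states = [torch.as_tensor(state, dtype=torch.float32)]
    actions = [torch.zeros(act_dim)]  # placeholder for the action to predict
    rtgs = [torch.tensor([float(target_return)])]
    total = 0.0
    for _ in range(max_steps):
        T = len(states)
        # Keep only the most recent context_len steps, as DT does.
        s = torch.stack(states)[-context_len:].unsqueeze(0)
        a = torch.stack(actions)[-context_len:].unsqueeze(0)
        r = torch.stack(rtgs)[-context_len:].unsqueeze(0)
        ts = torch.arange(max(0, T - context_len), T).unsqueeze(0)
        with torch.no_grad():
            action = model(r, s, a, ts)[0, -1]  # action for the latest state
        actions[-1] = action  # overwrite the placeholder with the real action
        state, reward, terminated, truncated, _ = env.step(action.numpy())
        total += reward
        if terminated or truncated:
            break
        # Key DT trick: decrement the return-to-go by the reward just received.
        rtgs.append(rtgs[-1] - reward)
        states.append(torch.as_tensor(state, dtype=torch.float32))
        actions.append(torch.zeros(act_dim))
    return total
```

Verifying the table above would amount to running such rollouts with the paper's target returns and comparing the averaged episode scores against the claimed values.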

Reproductions

No reproductions have been submitted yet.