Adaptive Transformers in RL

2020-04-08

Shakti Kumar, Jerrod Parker, Panteha Naderian

Abstract

Recent developments in Transformers have opened interesting new areas of research in partially observable reinforcement learning tasks. Results from late 2019 showed that Transformers can outperform LSTMs on both memory-intensive and reactive tasks. In this work we first partially replicate the results of Stabilizing Transformers for Reinforcement Learning on both reactive and memory-based environments. We then show improved performance with reduced computation when adding adaptive attention span to this Stable Transformer on the challenging DMLab30 environment. The code for all our experiments and models is available at https://github.com/jerrodparker20/adaptive-transformers-in-rl.
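The computational savings come from the adaptive attention span mechanism (Sukhbaatar et al., 2019), in which each attention head learns how far back it needs to look. A minimal NumPy sketch of the soft span mask is below; the function name and the `ramp` parameter value are illustrative assumptions, not taken from the paper's code:

```python
import numpy as np

def adaptive_span_mask(distances, z, ramp=32.0):
    """Soft masking function m_z(x) = clamp((ramp + z - x) / ramp, 0, 1).

    distances: array of distances x to each attended (past) position.
    z: the learnable span parameter of this attention head.
    ramp: width of the soft transition from 1 (attend) to 0 (ignore).

    Attention weights are multiplied by this mask before normalization;
    positions farther than z + ramp receive exactly zero weight, so their
    key/value computations can be skipped entirely at inference time.
    """
    return np.clip((ramp + z - distances) / ramp, 0.0, 1.0)

# Example: a head that learned a span of z = 64 over a 128-step memory.
d = np.arange(128, dtype=np.float64)
mask = adaptive_span_mask(d, z=64.0)
```

Because `z` is learned per head, reactive heads can shrink their span to a few steps while memory-oriented heads keep a long one, which is how the model trades attention range for reduced computation.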