SOTAVerified

Smaller World Models for Reinforcement Learning

2020-10-12

Jan Robine, Tobias Uelwer, Stefan Harmeling


Abstract

Sample efficiency remains a fundamental issue of reinforcement learning. Model-based algorithms try to make better use of data by simulating the environment with a model. We propose a new neural network architecture for world models based on a vector quantized-variational autoencoder (VQ-VAE) to encode observations and a convolutional LSTM to predict the next embedding indices. A model-free PPO agent is trained purely on simulated experience from the world model. We adopt the setup introduced by Kaiser et al. (2020), which only allows 100K interactions with the real environment. We apply our method to 36 Atari environments and show that we reach performance comparable to their SimPLe algorithm, while our model is significantly smaller.
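
The sketch below illustrates the two components named in the abstract: a VQ-VAE that quantizes each observation into a grid of discrete codebook indices, and a convolutional LSTM cell that predicts the index grid of the next frame. It is a minimal PyTorch illustration, not the authors' implementation; the codebook size, channel counts, and 84x84 grayscale input are assumptions made for the example, not hyperparameters from the paper.

```python
# Minimal sketch (not the authors' code) of a VQ-VAE encoder plus a
# convolutional LSTM world model. All sizes below are illustrative assumptions.
import torch
import torch.nn as nn


class VectorQuantizer(nn.Module):
    def __init__(self, num_codes=512, code_dim=64):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, code_dim)

    def forward(self, z):                                    # z: (B, D, H, W) continuous encodings
        b, d, h, w = z.shape
        flat = z.permute(0, 2, 3, 1).reshape(-1, d)          # (B*H*W, D)
        dist = torch.cdist(flat, self.codebook.weight)       # distance to every codebook vector
        indices = dist.argmin(dim=1)                         # nearest code per spatial cell
        quantized = self.codebook(indices).view(b, h, w, d).permute(0, 3, 1, 2)
        quantized = z + (quantized - z).detach()             # straight-through estimator
        return quantized, indices.view(b, h, w)


class ConvLSTMCell(nn.Module):
    """LSTM cell whose gates are convolutions, so the hidden state
    keeps the spatial layout of the quantized index grid."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = self.gates(torch.cat([x, h], dim=1)).chunk(4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, (h, c)


class WorldModelSketch(nn.Module):
    def __init__(self, num_codes=512, code_dim=64, hid_ch=128):
        super().__init__()
        self.encoder = nn.Sequential(                        # 84x84 grayscale -> 21x21 feature map
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, code_dim, 4, stride=2, padding=1),
        )
        self.vq = VectorQuantizer(num_codes, code_dim)
        self.lstm = ConvLSTMCell(code_dim, hid_ch)
        self.head = nn.Conv2d(hid_ch, num_codes, 1)          # per-cell logits over codebook indices

    def forward(self, obs, state):
        z, indices = self.vq(self.encoder(obs))
        h, state = self.lstm(z, state)
        return self.head(h), indices, state                  # predict next frame's indices


if __name__ == "__main__":
    model = WorldModelSketch()
    obs = torch.rand(2, 1, 84, 84)
    h0 = torch.zeros(2, 128, 21, 21)
    logits, indices, state = model(obs, (h0, h0.clone()))
    print(logits.shape, indices.shape)                       # (2, 512, 21, 21), (2, 21, 21)
```

A PPO agent would then be trained on rollouts generated by sampling next-frame indices from these logits instead of stepping the real environment, which is where the 100K-interaction budget matters.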

Benchmark Results

Dataset | Model | Metric | Claimed | Verified | Status
Atari 2600 Bank Heist | Discrete Latent Space World Model (VQ-VAE) | Score | 121.6 | | Unverified
Atari 2600 Breakout | Discrete Latent Space World Model (VQ-VAE) | Score | 11.6 | | Unverified
Atari 2600 Crazy Climber | Discrete Latent Space World Model (VQ-VAE) | Score | 59,609.4 | | Unverified
Atari 2600 Freeway | Discrete Latent Space World Model (VQ-VAE) | Score | 29 | | Unverified
Atari 2600 Pong | Discrete Latent Space World Model (VQ-VAE) | Score | 20.2 | | Unverified
Atari 2600 Seaquest | Discrete Latent Space World Model (VQ-VAE) | Score | 635 | | Unverified