Improving Computational Efficiency in Visual Reinforcement Learning via Stored Embeddings

2021-03-04 · NeurIPS 2021 · Code Available

Lili Chen, Kimin Lee, Aravind Srinivas, Pieter Abbeel

Abstract

Recent advances in off-policy deep reinforcement learning (RL) have led to impressive success in complex tasks from visual observations. Experience replay improves sample-efficiency by reusing experiences from the past, and convolutional neural networks (CNNs) process high-dimensional inputs effectively. However, such techniques demand high memory and computational bandwidth. In this paper, we present Stored Embeddings for Efficient Reinforcement Learning (SEER), a simple modification of existing off-policy RL methods, to address these computational and memory requirements. To reduce the computational overhead of gradient updates in CNNs, we freeze the lower layers of CNN encoders early in training due to early convergence of their parameters. Additionally, we reduce memory requirements by storing the low-dimensional latent vectors for experience replay instead of high-dimensional images, enabling an adaptive increase in the replay buffer capacity, a useful technique in constrained-memory settings. In our experiments, we show that SEER does not degrade the performance of RL agents while significantly saving computation and memory across a diverse set of DeepMind Control environments and Atari games.
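The memory-saving idea in the abstract — store low-dimensional latent vectors in the replay buffer once the encoder is frozen, and grow the buffer's capacity to match — can be illustrated with a small sketch. This is a minimal NumPy illustration under assumed shapes (stacked 84x84 frames, a 50-dimensional latent), not the authors' implementation; the class and parameter names are hypothetical.

```python
import numpy as np

class LatentReplayBuffer:
    """Sketch of SEER-style replay: under a fixed memory budget, storing
    float32 latents instead of uint8 image stacks lets the buffer hold
    far more transitions (the 'adaptive increase in capacity')."""

    def __init__(self, memory_budget_bytes, obs_shape, latent_dim):
        self.obs_bytes = int(np.prod(obs_shape))   # one uint8 frame stack
        self.latent_bytes = latent_dim * 4         # one float32 latent
        # Capacity if we stored raw images vs. latents in the same budget.
        self.image_capacity = memory_budget_bytes // self.obs_bytes
        self.latent_capacity = memory_budget_bytes // self.latent_bytes
        self.storage = np.zeros((self.latent_capacity, latent_dim),
                                dtype=np.float32)
        self.idx, self.full = 0, False

    def add(self, latent):
        # Circular write: overwrite the oldest latent when full.
        self.storage[self.idx] = latent
        self.idx = (self.idx + 1) % self.latent_capacity
        self.full = self.full or self.idx == 0

    def sample(self, batch_size, rng=np.random):
        hi = self.latent_capacity if self.full else self.idx
        return self.storage[rng.randint(0, hi, size=batch_size)]
```

For a 9x84x84 frame stack and a 50-dimensional latent, latents are roughly 300x smaller per transition, so the same budget stores correspondingly more experience. In the paper's setup the frozen lower encoder layers map new observations to latents before insertion; here that encoder is assumed external to the buffer.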

Benchmark Results

Dataset | Model | Metric | Claimed | Verified | Status
--- | --- | --- | --- | --- | ---
Atari 2600 Alien | Rainbow+SEER | Score | 1,172.6 | – | Unverified
Atari 2600 Amidar | Rainbow+SEER | Score | 250.5 | – | Unverified
Atari 2600 Bank Heist | Rainbow+SEER | Score | 276.6 | – | Unverified
Atari 2600 Crazy Climber | Rainbow+SEER | Score | 28,066 | – | Unverified
Atari 2600 Krull | Rainbow+SEER | Score | 3,277.5 | – | Unverified
Atari 2600 Q*Bert | Rainbow+SEER | Score | 4,123.5 | – | Unverified
Atari 2600 Road Runner | Rainbow+SEER | Score | 11,794 | – | Unverified
Atari 2600 Seaquest | Rainbow+SEER | Score | 561.2 | – | Unverified
