Image Augmentation Is All You Need: Regularizing Deep Reinforcement Learning from Pixels
Ilya Kostrikov, Denis Yarats, Rob Fergus
Code
- github.com/denisyarats/drq (official, PyTorch) ★ 419
- github.com/xingyu-lin/softagent (PyTorch) ★ 44
- github.com/microsoft/Mask-based-Latent-Reconstruction (PyTorch) ★ 29
- github.com/YaoMarkMu/DRQTRANS (PyTorch) ★ 1
Abstract
We propose a simple data augmentation technique that can be applied to standard model-free reinforcement learning algorithms, enabling robust learning directly from pixels without the need for auxiliary losses or pre-training. The approach leverages input perturbations commonly used in computer vision tasks to regularize the value function. Existing model-free approaches, such as Soft Actor-Critic (SAC), are unable to train deep networks effectively from image pixels. However, the addition of our augmentation method dramatically improves SAC's performance, enabling it to reach state-of-the-art results on the DeepMind Control Suite, surpassing model-based methods (Dreamer, PlaNet, and SLAC) as well as the recently proposed contrastive learning method CURL. Our approach can be combined with any model-free reinforcement learning algorithm, requiring only minor modifications. An implementation can be found at https://sites.google.com/view/data-regularized-q.
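The core input perturbation used in the paper is a small random shift of the image observation: the frame is padded on each side and a random crop of the original size is taken, so each sampled transition sees a slightly translated view. Below is a minimal NumPy sketch of that idea, assuming an (H, W, C) image array; the function name and the `pad=4` default are illustrative (the paper applies ±4-pixel shifts to 84×84 DeepMind Control frames), and the official PyTorch implementation performs this on batched GPU tensors instead.

```python
import numpy as np

def random_shift(obs, pad=4, rng=None):
    """Randomly shift an (H, W, C) image by up to `pad` pixels per axis.

    The image is replicate-padded by `pad` pixels on each side, then a
    random H x W crop is taken, yielding a translated copy of the input.
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w, _ = obs.shape
    # Replicate-pad the spatial dimensions only.
    padded = np.pad(obs, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    # Pick the top-left corner of the crop uniformly at random.
    top = int(rng.integers(0, 2 * pad + 1))
    left = int(rng.integers(0, 2 * pad + 1))
    return padded[top:top + h, left:left + w]
```

In the regularized update, independently augmented copies of the same observation are fed to the Q-network (and to the target computation), which averages out the value estimate over small translations without any auxiliary loss.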
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| DeepMind Cheetah Run (Images) | DrQ | Return | 660 | — | Unverified |
| DeepMind Cup Catch (Images) | DrQ | Return | 963 | — | Unverified |
| DeepMind Walker Walk (Images) | DrQ | Return | 921 | — | Unverified |