D2RL: Deep Dense Architectures in Reinforcement Learning
Samarth Sinha, Homanga Bharadhwaj, Aravind Srinivas, Animesh Garg
Code:
- github.com/pairlab/d2rl (official, PyTorch)
- github.com/BY571/Soft-Actor-Critic-and-Extensions (PyTorch)
- github.com/mugoh/rl-base/tree/master/rlbase/d2rl (PyTorch)
Abstract
While improvements in deep learning architectures have played a crucial role in advancing the state of supervised and unsupervised learning in computer vision and natural language processing, neural network architecture choices for reinforcement learning remain relatively under-explored. We take inspiration from successful architectural choices in computer vision and generative modelling, and investigate the use of deeper networks and dense connections for reinforcement learning on a variety of simulated robotic learning benchmark environments. Our findings reveal that current methods benefit significantly from dense connections and deeper networks, across a suite of manipulation and locomotion tasks, for both proprioceptive and image-based observations. We hope that our results can serve as a strong baseline and further motivate future research into neural network architectures for reinforcement learning. The project website with code is at https://sites.google.com/view/d2rl/home.
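The core architectural idea — concatenating the raw observation to the input of every hidden layer so the input signal survives deeper stacks — can be sketched as a plain forward pass. This is a minimal NumPy illustration of the dense-connection pattern described in the abstract, not the authors' implementation; the layer widths, depth, and dimensions below are illustrative assumptions.

```python
import numpy as np

def d2rl_mlp_forward(state, weights, biases):
    """Forward pass through a D2RL-style densely connected MLP (sketch).

    Every hidden layer after the first receives the raw state concatenated
    to the previous layer's activations, unlike a plain MLP where the input
    is only seen by the first layer.
    """
    h = np.maximum(0.0, state @ weights[0] + biases[0])       # ReLU, first layer
    for W, b in zip(weights[1:-1], biases[1:-1]):
        h = np.concatenate([h, state])                        # dense connection
        h = np.maximum(0.0, h @ W + b)
    return h @ weights[-1] + biases[-1]                       # linear output head

# Illustrative shapes (assumed): state dim 17, 4 hidden layers of width 256,
# output dim 6 -- e.g. a policy head for a locomotion task.
rng = np.random.default_rng(0)
state_dim, hidden, out_dim, depth = 17, 256, 6, 4
weights = [rng.standard_normal((state_dim, hidden)) * 0.01]
biases = [np.zeros(hidden)]
for _ in range(depth - 1):
    # Middle layers take the concatenated [activations, state] as input.
    weights.append(rng.standard_normal((hidden + state_dim, hidden)) * 0.01)
    biases.append(np.zeros(hidden))
weights.append(rng.standard_normal((hidden, out_dim)) * 0.01)
biases.append(np.zeros(out_dim))

action = d2rl_mlp_forward(rng.standard_normal(state_dim), weights, biases)
```

The design choice the sketch highlights: because each hidden layer sees the state directly, depth can be increased without the optimization difficulties a plain deep MLP exhibits in RL.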