Optimizing the Neural Architecture of Reinforcement Learning Agents
2020-11-30
N. Mazyavkina, S. Moustafa, I. Trofimov, E. Burnaev
Code
- github.com/NinaMaz/NAS_RL_torch (official, referenced in paper, PyTorch, ★ 2)
Abstract
Reinforcement learning (RL) has enjoyed significant progress in recent years. One of the most important steps forward was the widespread adoption of neural networks. However, the architectures of these neural networks are typically constructed manually. In this work, we study recently proposed neural architecture search (NAS) methods for optimizing the architecture of RL agents. We carry out experiments on the Atari benchmark and conclude that modern NAS methods find architectures of RL agents that outperform a manually selected one.
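To illustrate the single-path sampling idea behind one of the evaluated methods (SPOS), here is a minimal sketch. The toy search space, operation names, and function names below are illustrative assumptions, not taken from the paper or its code:

```python
import random

# Toy search space: each "choice block" offers several candidate ops
# (e.g. alternative layers for an agent's convolutional encoder).
# The specific ops listed here are hypothetical, not the paper's.
SEARCH_SPACE = [
    ["conv3x3", "conv5x5", "skip"],      # block 1
    ["conv3x3", "conv5x5", "maxpool"],   # block 2
    ["conv3x3", "conv5x5", "skip"],      # block 3
]

def sample_architecture(space, rng=random):
    """SPOS-style uniform sampling: pick one op per choice block,
    yielding a single path through the one-shot supernet."""
    return [rng.choice(ops) for ops in space]

arch = sample_architecture(SEARCH_SPACE)
print(arch)  # one uniformly sampled path, e.g. a list of three op names
```

During supernet training, a fresh path would be sampled at every step so that all candidate ops receive gradient updates; the search phase then evaluates sampled paths with the shared weights.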
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| Atari 2600 Breakout | SPOS | Score | 180.6 | — | Unverified |
| Atari 2600 Breakout | ENAS Search space 1 | Score | 161.1 | — | Unverified |
| Atari 2600 Breakout | SPOS Search space 1 | Score | 144.4 | — | Unverified |
| Atari 2600 Breakout | ENAS | Score | 91.4 | — | Unverified |
| Atari 2600 Freeway | ENAS | Score | 22 | — | Unverified |
| Atari 2600 Freeway | SPOS | Score | 22 | — | Unverified |