Some Considerations on Learning to Explore via Meta-Reinforcement Learning
2018-03-03 · ICLR 2018
Bradly C. Stadie, Ge Yang, Rein Houthooft, Xi Chen, Yan Duan, Yuhuai Wu, Pieter Abbeel, Ilya Sutskever
Code
- github.com/episodeyang/e-maml — official, in paper, TensorFlow ★ 0
- github.com/Zhiwei-Z/PrompLimitTest — TensorFlow ★ 0
- github.com/clrrrr/promp_plus — TensorFlow ★ 0
- github.com/jonasrothfuss/promp — TensorFlow ★ 0
- github.com/mazpie/mime — PyTorch ★ 0
- github.com/Zhiwei-Z/SeqPromp — TensorFlow ★ 0
- github.com/Zhiwei-Z/prompzzw — TensorFlow ★ 0
Abstract
We consider the problem of exploration in meta-reinforcement learning. Two new meta-reinforcement learning algorithms are proposed: E-MAML and E-RL^2. Results are presented on a novel environment we call "Krazy World" and a set of maze environments. We show that E-MAML and E-RL^2 deliver better performance on tasks where exploration is important.
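For intuition, below is a minimal PyTorch sketch of the E-MAML idea as we read it from the abstract: MAML's meta-objective is augmented with a term that explicitly credits the pre-update ("exploration") policy for the returns obtained after adaptation. This is a paraphrase, not the authors' reference code; the toy policy, dimensions, single task, single inner step, and placeholder rollout data are all illustrative assumptions.

```python
import torch

torch.manual_seed(0)

policy = torch.nn.Linear(4, 2)                 # toy policy: state -> action logits
inner_lr = 0.1                                 # inner (adaptation) step size
opt = torch.optim.Adam(policy.parameters(), lr=0.01)  # outer (meta) optimizer

def log_prob(params, states, actions):
    """Log-probability of actions under a linear-softmax policy with given params."""
    logits = torch.nn.functional.linear(states, params["weight"], params["bias"])
    return torch.distributions.Categorical(logits=logits).log_prob(actions)

# Placeholder rollouts for one task (states, actions, returns); in the real
# algorithm these are sampled from the environment before and after adaptation.
s_pre, a_pre, R_pre = torch.randn(8, 4), torch.randint(0, 2, (8,)), torch.randn(8)
s_post, a_post, R_post = torch.randn(8, 4), torch.randint(0, 2, (8,)), torch.randn(8)

params = dict(policy.named_parameters())

# Inner adaptation: one policy-gradient step on the pre-update rollouts,
# keeping the graph so the outer gradient can flow through the update.
inner_loss = -(log_prob(params, s_pre, a_pre) * R_pre).mean()
grads = torch.autograd.grad(inner_loss, list(params.values()), create_graph=True)
adapted = {k: v - inner_lr * g for (k, v), g in zip(params.items(), grads)}

# Outer objective: the plain MAML term (post-update policy gradient) ...
maml_term = -(log_prob(adapted, s_post, a_post) * R_post).mean()
# ... plus the E-MAML-style exploration term: the pre-update sampling
# distribution is rewarded for post-update returns (treated as fixed weights).
explore_term = -(log_prob(params, s_pre, a_pre).mean() * R_post.mean())

opt.zero_grad()
(maml_term + explore_term).backward()
opt.step()
```

Without the second term, vanilla MAML gives the pre-update policy no gradient signal for gathering informative experience; adding it makes exploration an explicit part of the meta-objective. E-RL^2, by the abstract's framing, applies an analogous modification to the RL^2 recurrent meta-learner rather than to MAML.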