Accelerating Online Reinforcement Learning via Model-Based Meta-Learning

2021-03-15 · ICLR 2021 Workshop on Learning to Learn

John D. Co-Reyes, Sarah Feng, Glen Berseth, Jie Qui, Sergey Levine

Abstract

Current reinforcement learning algorithms struggle to adapt quickly to new situations without large amounts of experience and, usually, large amounts of optimization over that experience. In this work we combine meta-learning methods from MAML with model-based RL methods based on MuZero to design agents that can quickly adapt online. We propose a new model-based meta-RL algorithm that adapts online to new experience and can be meta-trained without explicit task labels. Compared to prior model-based meta-learning methods, our approach scales to more visually complex, image-based environments whose dynamics change significantly over time, and handles the continual RL setting, which has no episodic boundaries.
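The abstract describes meta-training a model-based agent so that a few gradient steps on recent experience adapt its dynamics model, without explicit task labels. The sketch below is not the authors' code: it illustrates the general idea with a Reptile-style first-order approximation of MAML's outer loop applied to a tiny linear dynamics model `s' ≈ W s`, where each "task" is simply a window of recent transitions (all function names and hyperparameters here are illustrative assumptions).

```python
import numpy as np

def predict(W, states):
    """Predict next states for a batch of states under s' = W s."""
    return states @ W.T

def grad(W, states, next_states):
    """Gradient of mean squared error ||W s - s'||^2 with respect to W."""
    err = states @ W.T - next_states            # (n, d) residuals
    return 2.0 * err.T @ states / len(states)

def inner_adapt(W, states, next_states, lr=0.1, steps=3):
    """Fast adaptation: a few gradient steps on the current window,
    mirroring MAML's inner loop (no task label needed)."""
    for _ in range(steps):
        W = W - lr * grad(W, states, next_states)
    return W

def meta_train(windows, d, meta_lr=0.05, iters=200):
    """Reptile-style outer loop: nudge the meta-parameters toward the
    parameters obtained by adapting to each window of transitions."""
    W = np.zeros((d, d))
    for _ in range(iters):
        for states, next_states in windows:
            W_fast = inner_adapt(W, states, next_states)
            W = W + meta_lr * (W_fast - W)
    return W
```

For example, meta-training on windows drawn from two different dynamics matrices yields an initialization from which a few inner-loop steps specialize the model to whichever dynamics are currently active; the full method in the paper replaces the linear model with a MuZero-style learned model over images.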
