
Toward Task Generalization via Memory Augmentation in Meta-Reinforcement Learning

2025-02-03

Kaixi Bao, Chenhao Li, Yarden As, Andreas Krause, Marco Hutter


Abstract

Agents trained via reinforcement learning (RL) often struggle to perform well on tasks that differ from those encountered during training. This limitation presents a challenge to the broader deployment of RL in diverse and dynamic task settings. In this work, we introduce memory augmentation, a memory-based RL approach to improve task generalization. Our approach leverages task-structured augmentations to simulate plausible out-of-distribution scenarios and incorporates memory mechanisms to enable context-aware policy adaptation. Trained on a predefined set of tasks, our policy demonstrates the ability to generalize to unseen tasks through memory augmentation without requiring additional interactions with the environment. Through extensive simulation experiments and real-world hardware evaluations on legged locomotion tasks, we demonstrate that our approach achieves zero-shot generalization to unseen tasks while maintaining robust in-distribution performance and high sample efficiency.
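The abstract describes the method only at a high level: a recurrent (memory-based) policy is trained on augmented task parameters so that, at test time, its memory provides the context needed to adapt to unseen tasks without further environment interaction. A minimal illustrative sketch of that idea follows; all names, dimensions, the GRU memory cell, and the Gaussian parameter perturbation are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def gru_cell(x, h, W, U, b):
    """Minimal GRU update: the hidden state h is the policy's memory,
    summarizing the interaction history (context) so far."""
    z = 1 / (1 + np.exp(-(W[0] @ x + U[0] @ h + b[0])))  # update gate
    r = 1 / (1 + np.exp(-(W[1] @ x + U[1] @ h + b[1])))  # reset gate
    h_cand = np.tanh(W[2] @ x + U[2] @ (r * h) + b[2])   # candidate state
    return (1 - z) * h + z * h_cand

def augment_task(task_params, scale=0.2):
    """Hypothetical task-structured augmentation: perturb task parameters
    (e.g., a commanded velocity) to simulate plausible
    out-of-distribution tasks during training."""
    return task_params + scale * rng.standard_normal(task_params.shape)

obs_dim, act_dim, hid_dim = 8, 2, 16
W = 0.1 * rng.standard_normal((3, hid_dim, obs_dim))
U = 0.1 * rng.standard_normal((3, hid_dim, hid_dim))
b = np.zeros((3, hid_dim))
W_pi = 0.1 * rng.standard_normal((act_dim, hid_dim))  # linear policy head

h = np.zeros(hid_dim)
task = augment_task(np.array([1.0, 0.0]))  # perturbed task parameters
for t in range(5):
    obs = rng.standard_normal(obs_dim)  # placeholder environment observation
    h = gru_cell(obs, h, W, U, b)       # context-aware memory update
    action = W_pi @ h                   # policy conditions on memory, not a task id

print(action.shape)  # (2,)
```

The key design point illustrated here is that the policy never receives the task identity directly: it must infer the (augmented) task from its memory of recent observations, which is what enables zero-shot adaptation to unseen tasks.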
