
Meta-Controller: Few-Shot Imitation of Unseen Embodiments and Tasks in Continuous Control

2024-12-10

Seongwoong Cho, Donggyun Kim, Jinwoo Lee, Seunghoon Hong


Abstract

Generalizing across robot embodiments and tasks is crucial for adaptive robotic systems. Modular policy learning approaches adapt to new embodiments but are limited to specific tasks, while few-shot imitation learning (IL) approaches often focus on a single embodiment. In this paper, we introduce a few-shot behavior cloning framework that simultaneously generalizes to unseen embodiments and tasks using a few (e.g., five) reward-free demonstrations. Our framework leverages a joint-level input-output representation to unify the state and action spaces of heterogeneous embodiments, and employs a novel structure-motion state encoder that is parameterized to capture both knowledge shared across all embodiments and embodiment-specific knowledge. A matching-based policy network then predicts actions from the few demonstrations, producing an adaptive policy that is robust to overfitting. Evaluated on the DeepMind Control suite, our framework, termed Meta-Controller, demonstrates superior few-shot generalization to unseen embodiments and tasks over modular policy learning and few-shot IL approaches. Code is available at https://github.com/SeongwoongCho/meta-controller.
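The matching-based policy described in the abstract can be illustrated as a similarity-weighted vote over demonstration actions: the current joint-level state is compared against the demonstration states, and their actions are aggregated accordingly. This is only a minimal sketch under assumed details, not the paper's implementation; the function name, the dot-product similarity, and the softmax form are all assumptions.

```python
import numpy as np

def matching_policy(query_state, demo_states, demo_actions, temperature=1.0):
    """Hypothetical matching-based action prediction.

    query_state:  (d,)   embedding of the current joint-level state
    demo_states:  (n, d) embeddings of n demonstration states
    demo_actions: (n, a) actions taken in those demonstration states
    Returns a (a,)-shaped action as a softmax-weighted average of demo actions.
    """
    # similarity between the query state and each demonstration state
    sims = demo_states @ query_state / temperature
    # numerically stable softmax over demonstrations
    weights = np.exp(sims - sims.max())
    weights /= weights.sum()
    # predicted action = similarity-weighted average of demonstration actions
    return weights @ demo_actions
```

Because the prediction is interpolated from the few given demonstrations rather than fit to them, this kind of non-parametric matching tends to be less prone to overfitting than fine-tuning a policy on five trajectories.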
