
Equivariant Diffusion Policy

2024-07-01 · Code Available

Dian Wang, Stephen Hart, David Surovik, Tarik Kelestemur, Haojie Huang, Haibo Zhao, Mark Yeatman, Jiuguang Wang, Robin Walters, Robert Platt


Abstract

Recent work has shown diffusion models are an effective approach to learning the multimodal distributions arising from demonstration data in behavior cloning. However, a drawback of this approach is the need to learn a denoising function, which is significantly more complex than learning an explicit policy. In this work, we propose Equivariant Diffusion Policy, a novel diffusion policy learning method that leverages domain symmetries to obtain better sample efficiency and generalization in the denoising function. We theoretically analyze the SO(2) symmetry of full 6-DoF control and characterize when a diffusion model is SO(2)-equivariant. We furthermore evaluate the method empirically on a set of 12 simulation tasks in MimicGen, and show that it obtains a success rate that is, on average, 21.9% higher than the baseline Diffusion Policy. We also evaluate the method on a real-world system to show that effective policies can be learned with relatively few training samples, whereas the baseline Diffusion Policy cannot.
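Concretely, SO(2)-equivariance of a denoising function ε means that rotating the observation and the noisy action by a group element g rotates the predicted noise by g: ε(g·obs, g·act, t) = g·ε(obs, act, t). The sketch below checks this property numerically on a toy denoiser built only from rotation-invariant scalars and equivariant vector combinations; all names are illustrative and not taken from the paper's code.

```python
import numpy as np

def rot(theta):
    """2x2 rotation matrix for an SO(2) element g."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def denoise(obs, act, t):
    """Toy equivariant denoiser: combines the two 2D vector inputs
    using only rotation-invariant scalar coefficients (norms, t)."""
    n_obs = np.linalg.norm(obs)  # invariant under rotation
    n_act = np.linalg.norm(act)  # invariant under rotation
    return (n_obs + t) * act + n_act * obs

g = rot(0.7)
obs = np.array([1.0, 2.0])
act = np.array([0.5, -1.0])
t = 0.3

lhs = denoise(g @ obs, g @ act, t)  # rotate inputs, then denoise
rhs = g @ denoise(obs, act, t)      # denoise, then rotate output
print(np.allclose(lhs, rhs))        # True: the toy map is SO(2)-equivariant
```

A denoiser that instead used a fixed direction or an unconstrained MLP would break the check; the paper's contribution is characterizing when a full diffusion model satisfies this constraint for 6-DoF control.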


Benchmark Results

| Dataset  | Model            | Metric                                    | Claimed | Verified | Status     |
|----------|------------------|-------------------------------------------|---------|----------|------------|
| MimicGen | EquiDiff (Voxel) | Succ. Rate (%) (12 tasks, 100 demos/task) | 63.9    | —        | Unverified |
| MimicGen | EquiDiff (Image) | Succ. Rate (%) (12 tasks, 100 demos/task) | 53.7    | —        | Unverified |

Reproductions