Enabling Visual Action Planning for Object Manipulation through Latent Space Roadmap

2021-03-03 · Code Available

Martina Lippi, Petra Poklukar, Michael C. Welle, Anastasia Varava, Hang Yin, Alessandro Marino, Danica Kragic


Abstract

We present a framework for visual action planning of complex manipulation tasks with high-dimensional state spaces, focusing on manipulation of deformable objects. We propose a Latent Space Roadmap (LSR) for task planning, a graph-based structure that globally captures the system dynamics in a low-dimensional latent space. Our framework consists of three parts: (1) a Mapping Module (MM) that maps observations, given in the form of images, into a structured latent space, extracting the respective states, and that generates observations from latent states, (2) the LSR, which builds and connects clusters containing similar states in order to find latent plans between start and goal states extracted by the MM, and (3) the Action Proposal Module, which complements the latent plan found by the LSR with the corresponding actions. We present a thorough investigation of our framework on simulated box stacking and rope/box manipulation tasks, and a folding task executed on a real robot.
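The roadmap idea in the abstract can be sketched in a few lines: embed observations into a latent space, cluster nearby latent states, connect clusters that plausibly admit a transition, and search the resulting graph for a plan. The sketch below is a minimal, hypothetical simplification (the paper's MM is a learned encoder and its edge criterion is learned from action data); here the "encoder" output is given directly, clustering is a greedy distance-threshold pass, and edges connect clusters whose centroids lie within a fixed radius. All names and thresholds are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from collections import deque


def build_lsr(latents, eps):
    """Build a toy Latent Space Roadmap: greedy eps-clustering of latent
    states, then edges between clusters with nearby centroids.
    (Hypothetical stand-in for the paper's learned clustering/edges.)"""
    centroids, members = [], []
    for i, z in enumerate(latents):
        for c, mu in enumerate(centroids):
            if np.linalg.norm(z - mu) < eps:
                members[c].append(i)
                # Update the cluster centroid with the new member.
                centroids[c] = np.mean([latents[j] for j in members[c]], axis=0)
                break
        else:
            centroids.append(z.copy())
            members.append([i])
    # Connect clusters whose centroids are within 2*eps of each other,
    # treating proximity as a proxy for a feasible single-action transition.
    n = len(centroids)
    edges = {c: [] for c in range(n)}
    for a in range(n):
        for b in range(a + 1, n):
            if np.linalg.norm(centroids[a] - centroids[b]) < 2 * eps:
                edges[a].append(b)
                edges[b].append(a)
    return np.array(centroids), edges


def latent_plan(centroids, edges, z_start, z_goal):
    """BFS over the roadmap from the cluster nearest z_start to the one
    nearest z_goal; returns the sequence of cluster indices, or None."""
    start = int(np.argmin(np.linalg.norm(centroids - z_start, axis=1)))
    goal = int(np.argmin(np.linalg.norm(centroids - z_goal, axis=1)))
    prev = {start: None}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nxt in edges[node]:
            if nxt not in prev:
                prev[nxt] = node
                queue.append(nxt)
    return None
```

For example, four latent states at `[0,0]`, `[0.1,0]`, `[1,0]`, `[1.9,0]` with `eps=0.5` collapse into three clusters chained by edges, and `latent_plan` returns the cluster path from the first to the last. In the full framework, each edge of the returned path would then be handed to the Action Proposal Module to obtain the action executing that transition.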
