
Learning Neural Implicit Functions as Object Representations for Robotic Manipulation

2021-09-29

Jung-Su Ha, Danny Driess, Marc Toussaint


Abstract

Robotic manipulation planning is the problem of finding a sequence of robot configurations that involves interactions with objects in the scene, e.g., grasping, placement, or tool use. To achieve such interactions, traditional approaches require hand-designed features and object representations, and it remains an open question how to describe such interactions with arbitrary objects in a flexible and efficient way. Inspired by neural implicit representations in 3D modeling, e.g., NeRF, we propose a method to represent objects as neural implicit functions upon which we can define and jointly train interaction features. The proposed pixel-aligned representation is directly inferred from camera images with known camera geometry, naturally acting as a perception component in the whole manipulation pipeline, while at the same time enabling sequential robot manipulation planning.
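To make the "pixel-aligned representation" idea concrete, here is a minimal sketch of how a pixel-aligned implicit query typically works in this family of methods (in the style of PIFu-like models): a 3D query point is projected into the image with the known camera intrinsics, an image feature is bilinearly sampled at the projected pixel, and a small network maps the feature plus the point's depth to an implicit value. All names (`project`, `bilinear_sample`, `implicit_query`) and the toy intrinsics/feature map are hypothetical illustrations, not the paper's actual implementation.

```python
import numpy as np

def project(point, K):
    """Project a 3D point (camera coordinates) to pixel coordinates
    with pinhole intrinsics K (simplified: no distortion)."""
    uvw = K @ point
    return uvw[0] / uvw[2], uvw[1] / uvw[2]

def bilinear_sample(feat, u, v):
    """Bilinearly sample an (H, W, C) feature map at continuous pixel (u, v)."""
    h, w, _ = feat.shape
    u, v = np.clip(u, 0, w - 1), np.clip(v, 0, h - 1)
    u0, v0 = int(np.floor(u)), int(np.floor(v))
    u1, v1 = min(u0 + 1, w - 1), min(v0 + 1, h - 1)
    du, dv = u - u0, v - v0
    return ((1 - du) * (1 - dv) * feat[v0, u0] + du * (1 - dv) * feat[v0, u1] +
            (1 - du) * dv * feat[v1, u0] + du * dv * feat[v1, u1])

def implicit_query(point, feat, K, mlp):
    """Pixel-aligned implicit function: the value at a 3D point is
    mlp([pixel-aligned image feature, point depth])."""
    u, v = project(point, K)
    f = bilinear_sample(feat, u, v)
    z = np.array([point[2]])  # depth of the query point along the camera axis
    return mlp(np.concatenate([f, z]))

# Toy usage: an 8x8 feature map, simple intrinsics, and a linear stand-in "MLP".
rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 8, 4))
K = np.array([[2.0, 0.0, 4.0],
              [0.0, 2.0, 4.0],
              [0.0, 0.0, 1.0]])
w = rng.standard_normal(5)
occ = implicit_query(np.array([0.5, -0.5, 2.0]), feat, K, lambda x: float(w @ x))
```

Because the feature is tied to an image pixel, the representation comes directly from the camera observation, which is what lets it serve simultaneously as a perception component and as a substrate on which interaction features for planning can be defined.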
