DenseFusion: 6D Object Pose Estimation by Iterative Dense Fusion
Chen Wang, Danfei Xu, Yuke Zhu, Roberto Martín-Martín, Cewu Lu, Li Fei-Fei, Silvio Savarese
Code
- github.com/cxt98/Densefusion-transparency (PyTorch, ★ 0)
- github.com/shayeree96/Adversial-Attacks-on-densely-fused-point-clouds-for-6D-Pose-Estimation (PyTorch, ★ 0)
- github.com/Yotonctu/densefusion_torch1.0 (PyTorch, ★ 0)
- github.com/j96w/DenseFusion (PyTorch, ★ 0)
- github.com/Theopetitjean/DenseFusion_R_Invariant (PyTorch, ★ 0)
- github.com/RiplleYang/DenseFusion (PyTorch, ★ 0)
- github.com/caoquan95/6D-pose-project (PyTorch, ★ 0)
- github.com/hz-ants/DenseFusion (PyTorch, ★ 0)
Abstract
A key technical challenge in performing 6D object pose estimation from RGB-D images is to fully leverage the two complementary data sources. Prior works either extract information from the RGB image and depth separately or use costly post-processing steps, limiting their performance in highly cluttered scenes and real-time applications. In this work, we present DenseFusion, a generic framework for estimating the 6D pose of a set of known objects from RGB-D images. DenseFusion is a heterogeneous architecture that processes the two data sources individually and uses a novel dense fusion network to extract pixel-wise dense feature embeddings, from which the pose is estimated. Furthermore, we integrate an end-to-end iterative pose refinement procedure that further improves the pose estimate while achieving near real-time inference. Our experiments show that our method outperforms state-of-the-art approaches on two datasets, YCB-Video and LineMOD. We also deploy the proposed method on a real robot to grasp and manipulate objects based on the estimated poses.
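As a rough illustration of the fusion idea described in the abstract, the sketch below concatenates per-pixel color and geometry embeddings, pools a global feature, and regresses a pose plus confidence per pixel. It is written in PyTorch (as are the linked reproductions); the module name `DenseFusionSketch`, the channel sizes, and the head layout are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class DenseFusionSketch(nn.Module):
    """Minimal sketch of pixel-wise dense fusion (assumed shapes/sizes).

    Per sampled pixel, a color embedding (e.g., from a CNN) and a geometric
    embedding (e.g., from a PointNet-style encoder over the depth point
    cloud) are concatenated, mixed with a pooled global feature, and fed to
    a per-pixel pose head.
    """

    def __init__(self, rgb_dim=64, geo_dim=64, global_dim=256):
        super().__init__()
        # 1x1 convolutions act as shared per-pixel MLPs.
        self.fuse = nn.Conv1d(rgb_dim + geo_dim, 128, 1)
        self.to_global = nn.Conv1d(128, global_dim, 1)
        # Per-pixel pose head: quaternion (4) + translation (3) + confidence (1).
        self.pose_head = nn.Conv1d(128 + global_dim, 8, 1)

    def forward(self, rgb_emb, geo_emb):
        # rgb_emb, geo_emb: (B, C, N) features for N sampled pixels/points.
        pixelwise = torch.relu(self.fuse(torch.cat([rgb_emb, geo_emb], dim=1)))
        # Global feature by average-pooling over all sampled pixels.
        global_feat = self.to_global(pixelwise).mean(dim=2, keepdim=True)
        fused = torch.cat(
            [pixelwise, global_feat.expand(-1, -1, pixelwise.size(2))], dim=1
        )
        out = self.pose_head(fused)  # (B, 8, N): per-pixel pose + confidence
        quat, trans, conf = out[:, :4], out[:, 4:7], out[:, 7:]
        return quat, trans, conf
```

At inference the paper selects the prediction from the most confident pixel, and the iterative refinement network then repeatedly corrects that estimate.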
Tasks
- 6D Object Pose Estimation
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| DTTD-Mobile | DenseFusion | ADD AUC (%) | 69.67 | — | Unverified |
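For context on the metric in the table: ADD is the average distance between the object's model points transformed by the predicted pose and by the ground-truth pose, and ADD AUC is the area under the accuracy-versus-threshold curve of that error. A minimal sketch follows; the function names and the 0.10 m maximum threshold (the YCB-Video convention) are assumptions here.

```python
import numpy as np

def add_error(model_pts, R_gt, t_gt, R_pred, t_pred):
    """ADD: mean distance between model points under GT and predicted poses.

    model_pts: (N, 3) object model points; R: (3, 3) rotation; t: (3,) translation.
    """
    pts_gt = model_pts @ R_gt.T + t_gt
    pts_pred = model_pts @ R_pred.T + t_pred
    return np.linalg.norm(pts_gt - pts_pred, axis=1).mean()

def add_auc(errors, max_threshold=0.10):
    """Normalized area under the accuracy-vs-threshold curve, in percent.

    accuracy(d) = fraction of test frames with ADD <= d, integrated over
    d in [0, max_threshold] (10 cm is the YCB-Video convention).
    """
    errors = np.sort(np.asarray(errors))
    thresholds = np.linspace(0.0, max_threshold, 1000)
    # For each threshold, count errors at or below it (errors are sorted).
    accuracy = np.searchsorted(errors, thresholds, side="right") / len(errors)
    # Mean over uniformly spaced thresholds approximates the normalized integral.
    return 100.0 * accuracy.mean()
```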