Matryoshka Networks: Predicting 3D Geometry via Nested Shape Layers
Stephan R. Richter, Stefan Roth
Code
- bitbucket.org/visinf/projects-2018-matryoshka — Official, PyTorch, ★ 0
- github.com/JeremyFisher/few_shot_3dr — PyTorch, ★ 14
- github.com/JeremyFisher/deep_level_sets — PyTorch, ★ 0
Abstract
In this paper, we develop novel, efficient 2D encodings for 3D geometry, which enable reconstructing full 3D shapes from a single image at high resolution. The key idea is to pose 3D shape reconstruction as a 2D prediction problem. To that end, we first develop a simple baseline network that predicts entire voxel tubes at each pixel of a reference view. By leveraging well-proven architectures for 2D pixel-prediction tasks, we attain state-of-the-art results, clearly outperforming purely voxel-based approaches. We scale this baseline to higher resolutions by proposing a memory-efficient shape encoding, which recursively decomposes a 3D shape into nested shape layers, similar to the pieces of a Matryoshka doll. This allows reconstructing highly detailed shapes with complex topology, as demonstrated in extensive experiments; we clearly outperform previous octree-based approaches despite having a much simpler architecture using standard network components. Our Matryoshka networks further enable reconstructing shapes from IDs or shape similarity, as well as shape sampling.
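To make the nested-layer idea concrete, here is a minimal sketch of the decomposition along a single voxel tube (one pixel's column of voxels). The paper operates on full 2D depth-map layers over all six views; this 1D per-tube version, with hypothetical helper names `decompose_tube` and `reconstruct_tube`, only illustrates the recursive fill-and-carve principle: the first layer spans the outermost occupied run, the second spans the holes inside it, the third the islands inside those holes, and so on, like nesting Matryoshka pieces.

```python
import numpy as np

def decompose_tube(tube, max_layers=4):
    """Recursively peel a binary voxel tube into nested intervals.

    Layer 1 spans the outermost occupied run; layer 2 spans the holes
    inside that span; layer 3 the occupied islands inside those holes; etc.
    """
    layers = []
    current = np.asarray(tube, dtype=bool)
    for _ in range(max_layers):
        idx = np.flatnonzero(current)
        if idx.size == 0:          # nothing left to explain: done
            break
        lo, hi = int(idx[0]), int(idx[-1])
        layers.append((lo, hi))
        span = np.zeros_like(current)
        span[lo:hi + 1] = True
        current = span & ~current  # residual to correct at the next depth

    return layers

def reconstruct_tube(layers, length):
    """Invert the decomposition: alternately fill and carve the intervals."""
    tube = np.zeros(length, dtype=bool)
    for i, (lo, hi) in enumerate(layers):
        tube[lo:hi + 1] = (i % 2 == 0)  # even layers fill, odd layers carve
    return tube

# A tube with two holes, one of which contains an occupied island:
tube = np.array([0, 1, 1, 0, 0, 1, 0, 1, 1, 0], dtype=bool)
layers = decompose_tube(tube)
assert (reconstruct_tube(layers, len(tube)) == tube).all()
```

Because each interval is strictly nested inside the previous one, a fixed small number of layers suffices for shapes of bounded topological complexity, which is what makes the 2D encoding memory-efficient compared to storing the full voxel grid.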
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 3D-R2N2 | Matryoshka Networks | 3D IoU | 0.64 | — | Unverified |