Joint 2D-3D-Semantic Data for Indoor Scene Understanding

2017-02-03

Iro Armeni, Sasha Sax, Amir R. Zamir, Silvio Savarese

Abstract

We present a dataset of large-scale indoor spaces that provides a variety of mutually registered modalities from the 2D, 2.5D, and 3D domains, with instance-level semantic and geometric annotations. The dataset covers over 6,000 m² and contains over 70,000 RGB images, along with the corresponding depth maps, surface normals, semantic annotations, and global XYZ images (all provided as both regular and 360° equirectangular images), as well as camera information. It also includes registered raw and semantically annotated 3D meshes and point clouds. The dataset enables the development of joint and cross-modal learning models, as well as potentially unsupervised approaches that exploit the regularities present in large-scale indoor spaces. The dataset is available here: http://3Dsemantics.stanford.edu/
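To make the multi-modal structure concrete, here is a minimal Python sketch of loading one registered sample (RGB, depth, normals, semantics, and camera pose) for a single camera frame. The directory layout, file names, frame identifier, and depth scale below are assumptions for illustration only, not the dataset's documented interface.

```python
# Hypothetical sketch of reading one registered 2D-3D-S sample.
# All paths, file-naming conventions, and the depth scale are assumptions.
import json
from pathlib import Path

import numpy as np
from PIL import Image

ROOT = Path("2D-3D-S/area_1/data")  # hypothetical local download path


def load_sample(frame_id: str) -> dict:
    """Return the mutually registered modalities for one camera frame."""
    rgb = np.asarray(Image.open(ROOT / "rgb" / f"{frame_id}_rgb.png"))
    # Depth assumed stored as a 16-bit PNG; the metric scale is an assumption.
    depth = np.asarray(
        Image.open(ROOT / "depth" / f"{frame_id}_depth.png"), dtype=np.float32
    ) / 512.0
    normals = np.asarray(Image.open(ROOT / "normal" / f"{frame_id}_normal.png"))
    semantic = np.asarray(Image.open(ROOT / "semantic" / f"{frame_id}_semantic.png"))
    with open(ROOT / "pose" / f"{frame_id}_pose.json") as f:
        pose = json.load(f)  # camera intrinsics/extrinsics (assumed JSON)
    return {
        "rgb": rgb,
        "depth": depth,
        "normals": normals,
        "semantic": semantic,
        "pose": pose,
    }


sample = load_sample("camera_0_frame_0")  # hypothetical frame identifier
print({k: getattr(v, "shape", type(v)) for k, v in sample.items()})
```

Because all modalities are registered per frame, a loader like this yields pixel-aligned arrays, which is what makes the joint and cross-modal training described in the abstract straightforward to set up.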
