SOTAVerified

Geometry-Free View Synthesis: Transformers and no 3D Priors

2021-04-15 · ICCV 2021 · Code Available

Robin Rombach, Patrick Esser, Björn Ommer


Abstract

Is a geometric model required to synthesize novel views from a single image? Being bound to local convolutions, CNNs need explicit 3D biases to model geometric transformations. In contrast, we demonstrate that a transformer-based model can synthesize entirely novel views without any hand-engineered 3D biases. This is achieved by (i) a global attention mechanism for implicitly learning long-range 3D correspondences between source and target views, and (ii) a probabilistic formulation necessary to capture the ambiguity inherent in predicting novel views from a single image, thereby overcoming the limitations of previous approaches that are restricted to relatively small viewpoint changes. We evaluate various ways to integrate 3D priors into a transformer architecture. However, our experiments show that no such geometric priors are required and that the transformer is capable of implicitly learning 3D relationships between images. Furthermore, this approach outperforms the state of the art in terms of visual quality while covering the full distribution of possible realizations. Code is available at https://git.io/JOnwn
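The core architectural idea in the abstract is that a target view's tokens attend globally to all of a source view's tokens, letting the model learn long-range correspondences without an explicit warp or depth map. Below is a minimal NumPy sketch of that cross-view attention step; the function name, token shapes, and single-head formulation are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_view_attention(target_tokens, source_tokens, d_k):
    """Global attention: every target-view token attends to every
    source-view token, so 3D correspondences between the views can be
    learned implicitly rather than imposed by a geometric prior.
    Shapes: target_tokens (T, d), source_tokens (S, d)."""
    scores = target_tokens @ source_tokens.T / np.sqrt(d_k)  # (T, S)
    weights = softmax(scores, axis=-1)                       # rows sum to 1
    return weights @ source_tokens, weights                  # (T, d), (T, S)

# Toy example: 4 target tokens attending over 6 source tokens, dim 8.
rng = np.random.default_rng(0)
tgt = rng.normal(size=(4, 8))
src = rng.normal(size=(6, 8))
out, attn = cross_view_attention(tgt, src, d_k=8)
```

In the paper's setting this attention operates inside a transformer over discrete image tokens (conditioned on the camera transformation), and the probabilistic formulation comes from likelihood training over those tokens; the sketch only isolates the attention mechanism itself.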

Benchmark Results

Dataset         | Model          | Metric | Claimed | Verified | Status
RealEstate10K   | hybrid         | FID    | 48.84   | —        | Unverified
RealEstate10K   | impl.-depth    | NLL    | 4.84    | —        | Unverified
ACID            | impl.-nodepth  | FID    | 42.88   | —        | Unverified
ACID            | hybrid         | NLL    | 5.34    | —        | Unverified
ACID            | impl.-catdepth | SSIM   | 0.42    | —        | Unverified
