SOTAVerified

Stochastic Video Generation with a Learned Prior

2018-02-21 · ICML 2018 · Code Available

Emily Denton, Rob Fergus


Abstract

Generating video frames that accurately predict future world states is challenging. Existing approaches either fail to capture the full distribution of outcomes, or yield blurry generations, or both. In this paper we introduce an unsupervised video generation model that learns a prior model of uncertainty in a given environment. Video frames are generated by drawing samples from this prior and combining them with a deterministic estimate of the future frame. The approach is simple and easily trained end-to-end on a variety of datasets. Sample generations are both varied and sharp, even many frames into the future, and compare favorably to those from existing approaches.
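The abstract describes a per-step generative process: draw a latent sample from a learned prior over the environment's uncertainty, then combine it with a deterministic estimate of the next frame. The toy sketch below illustrates that loop in plain numpy; the function names (`learned_prior`, `deterministic_predictor`), the linear state update, and all coefficients are hypothetical stand-ins, not the paper's actual networks.

```python
import numpy as np

rng = np.random.default_rng(0)

def learned_prior(h_prev):
    # Hypothetical prior network: maps the recurrent state to the
    # parameters (mean, log-variance) of a Gaussian over the latent z_t.
    mu = 0.1 * h_prev
    logvar = np.zeros_like(h_prev)
    return mu, logvar

def sample_z(mu, logvar, rng):
    # Reparameterized sample: z = mu + sigma * eps, eps ~ N(0, I).
    return mu + np.exp(0.5 * logvar) * rng.standard_normal(mu.shape)

def deterministic_predictor(frame, z):
    # Hypothetical decoder: combines the previous frame with the
    # latent sample to produce the next-frame estimate.
    return frame + 0.05 * z.mean()

# Roll out 5 future frames from a blank 8x8 starting frame.
frames = [np.zeros((8, 8))]
h = np.zeros(4)  # hypothetical recurrent state
for t in range(5):
    mu, logvar = learned_prior(h)
    z = sample_z(mu, logvar, rng)
    frames.append(deterministic_predictor(frames[-1], z))
    h = 0.9 * h + 0.1 * z  # hypothetical state update
```

Because each step samples a fresh `z`, repeated rollouts from the same starting frame diverge, which is the mechanism behind the varied (rather than averaged and blurry) generations the abstract claims.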

Benchmark Results

Dataset             Model                Metric     Claimed  Verified  Status
BAIR Robot Pushing  SVG (from SRVP)      Cond2      —        —         Unverified
BAIR Robot Pushing  SVG-FP (from FVD)    FVD score  315.5    —         Unverified
BAIR Robot Pushing  SVG-LP (from vRNN)   FVD score  256.62   —         Unverified

Reproductions