
View-LSTM: Novel-View Video Synthesis Through View Decomposition

ICCV 2019 · 2019-10-01

Mohamed Ilyes Lakhal, Oswald Lanz, Andrea Cavallaro

Abstract

We tackle the problem of synthesizing a video of multiple moving people as seen from a novel view, given only an input video and, as a prior, depth information or human poses of the novel view. This problem requires a model that learns to transform input features into target features while maintaining temporal consistency. To this end, we learn a view-invariant feature from the input video that is shared across all viewpoints of the same scene and a view-dependent feature obtained from the target-view priors. The proposed approach, View-LSTM, is a recurrent neural network structure that accounts for the temporal consistency and target feature approximation constraints. We validate View-LSTM by designing an end-to-end generator for novel-view video synthesis. Experiments on a large multi-view action recognition dataset validate the proposed model.
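The core idea, a recurrent cell that fuses a view-invariant feature with a view-dependent feature at each time step, can be illustrated with a minimal NumPy sketch. This is a hypothetical reading of the abstract, not the authors' exact equations: the dimensions, gate layout, and the names `f_inv` and `f_view` are assumptions for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class ViewLSTMCell:
    """Hypothetical sketch of one View-LSTM step (not the paper's exact model).

    Each step consumes a view-invariant feature f_inv (shared across
    viewpoints of the scene) and a view-dependent feature f_view (derived
    from the target-view prior, e.g. depth or pose), and updates a
    recurrent state that enforces temporal consistency across frames.
    """

    def __init__(self, inv_dim, view_dim, hidden_dim, seed=0):
        rng = np.random.default_rng(seed)
        in_dim = inv_dim + view_dim + hidden_dim
        # One weight matrix and bias per LSTM gate:
        # input (i), forget (f), output (o), candidate (g).
        self.W = {g: rng.standard_normal((hidden_dim, in_dim)) * 0.1
                  for g in ("i", "f", "o", "g")}
        self.b = {g: np.zeros(hidden_dim) for g in ("i", "f", "o", "g")}

    def step(self, f_inv, f_view, h, c):
        # Fuse the two feature streams with the previous hidden state.
        z = np.concatenate([f_inv, f_view, h])
        i = sigmoid(self.W["i"] @ z + self.b["i"])
        f = sigmoid(self.W["f"] @ z + self.b["f"])
        o = sigmoid(self.W["o"] @ z + self.b["o"])
        g = np.tanh(self.W["g"] @ z + self.b["g"])
        c = f * c + i * g   # cell state carries temporal consistency
        h = o * np.tanh(c)  # per-frame target-view feature for a decoder
        return h, c

# Roll the cell over a short sequence of per-frame features.
cell = ViewLSTMCell(inv_dim=8, view_dim=4, hidden_dim=16)
h, c = np.zeros(16), np.zeros(16)
for t in range(5):
    f_inv = np.full(8, 0.1)   # placeholder view-invariant feature
    f_view = np.full(4, 0.2)  # placeholder view-dependent feature
    h, c = cell.step(f_inv, f_view, h, c)
print(h.shape)  # (16,)
```

In the full generator described by the abstract, the hidden state `h` would feed a frame decoder that renders the novel-view frame; here it is simply returned to keep the sketch self-contained.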
