Towards An End-to-End Framework for Flow-Guided Video Inpainting

2022-04-06 · CVPR 2022 · Code Available

Zhen Li, Cheng-Ze Lu, Jianhua Qin, Chun-Le Guo, Ming-Ming Cheng

Abstract

Optical flow, which captures motion information across frames, is exploited in recent video inpainting methods by propagating pixels along its trajectories. However, the hand-crafted flow-based processes in these methods are applied separately to form the whole inpainting pipeline. Thus, these methods are less efficient and rely heavily on the intermediate results of earlier stages. In this paper, we propose an End-to-End framework for Flow-Guided Video Inpainting (E^2FGVI) built on three elaborately designed trainable modules: flow completion, feature propagation, and content hallucination. The three modules correspond to the three stages of previous flow-based methods but can be jointly optimized, leading to a more efficient and effective inpainting process. Experimental results demonstrate that the proposed method outperforms state-of-the-art methods both qualitatively and quantitatively and shows promising efficiency. The code is available at https://github.com/MCG-NKU/E2FGVI.
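The key idea in the abstract is that the three stages, which prior flow-based methods implement as separate hand-crafted steps, become one jointly optimizable composition. A toy sketch of that composition is below; the function bodies are hypothetical stand-ins (the real modules are deep networks, e.g. temporal focal transformers, in the linked repository), and all names here are illustrative assumptions rather than the paper's API.

```python
import numpy as np

# Toy stand-ins for E^2FGVI's three trainable modules. The real modules are
# learned networks; these placeholders only illustrate how the stages compose
# into a single end-to-end pipeline.

def complete_flow(masked_flow, mask):
    # Toy flow completion: fill masked flow vectors with the mean of the
    # known ones (a learned flow-completion network in the real model).
    filled = masked_flow.copy()
    filled[mask] = masked_flow[~mask].mean(axis=0)
    return filled

def propagate_features(frames, flow):
    # Toy feature propagation: shift each frame by the rounded mean flow.
    # The real module warps features along the completed flow trajectories.
    shift = int(round(float(flow.mean())))
    return [np.roll(f, shift=shift, axis=0) for f in frames]

def hallucinate_content(features):
    # Toy content hallucination: average the propagated features. The paper
    # uses attention-based modules to synthesize missing content here.
    return np.mean(features, axis=0)

def e2fgvi_pipeline(frames, masked_flow, mask):
    # End-to-end composition: because each stage is an ordinary module, the
    # whole chain can be optimized jointly instead of being tuned stage by
    # stage as in earlier flow-based pipelines.
    flow = complete_flow(masked_flow, mask)
    feats = propagate_features(frames, flow)
    return hallucinate_content(feats)
```

The point of the sketch is structural: replacing three independently tuned processes with one differentiable composition is what enables joint optimization.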

Benchmark Results

Dataset      | Model  | Metric       | Claimed | Verified | Status
KITTI360-EX  | E2FGVI | Average PSNR | 19.45   |          | Unverified