RIFE: Real-Time Intermediate Flow Estimation for Video Frame Interpolation
Zhewei Huang, Tianyuan Zhang, Wen Heng, Boxin Shi, Shuchang Zhou
Code
- github.com/hzwer/Arxiv2020-RIFE (official, in paper; PyTorch, ★ 5,357)
- github.com/hzwer/eccv2022-rife (PyTorch, ★ 5,357)
- github.com/megvii-research/eccv2022-rife (PyTorch, ★ 5,357)
- github.com/YiWeiHuang-stack/Squirrel-RIFE (PyTorch, ★ 3,487)
- github.com/kritiksoman/GIMP-ML (PyTorch, ★ 1,542)
- github.com/nihui/rife-ncnn-vulkan (PyTorch, ★ 1,012)
- github.com/hzwer/Practical-RIFE (PyTorch, ★ 910)
- github.com/media2x/rife-ncnn-vulkan-python (★ 38)
- github.com/ArchieMeng/rife-ncnn-vulkan-python (★ 38)
Abstract
We propose RIFE, a Real-time Intermediate Flow Estimation algorithm for Video Frame Interpolation (VFI). Many recent flow-based VFI methods first estimate bi-directional optical flows, then scale and reverse them to approximate the intermediate flows, which leads to artifacts on motion boundaries and complex pipelines. RIFE instead uses a neural network named IFNet that directly estimates the intermediate flows in a coarse-to-fine manner at much higher speed. We design a privileged distillation scheme for training IFNet, which yields a large performance improvement. RIFE does not rely on pre-trained optical flow models and supports arbitrary-timestep frame interpolation via a temporal encoding input. Experiments demonstrate that RIFE achieves state-of-the-art performance on several public benchmarks. Compared with the popular SuperSlomo and DAIN methods, RIFE is 4--27 times faster and produces better results. The code is available at https://github.com/hzwer/arXiv2020-RIFE.
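To make the contrast in the abstract concrete, the sketch below implements the classic flow-based interpolation baseline that RIFE improves on: under a linear-motion assumption, the intermediate flows are approximated by scaling a bi-directional flow (F_t→0 ≈ -t·F_0→1, F_t→1 ≈ (1-t)·F_0→1), both input frames are backward-warped to time t, and the warps are blended by t. All function names here are illustrative; this is the approximation-based pipeline, not RIFE's learned IFNet, which predicts the intermediate flows directly.

```python
import numpy as np

def backward_warp(img, flow):
    """Backward warping: output(x) = img(x + flow(x)).

    img:  (H, W) array; flow: (H, W, 2) array, flow[..., 0] = x displacement.
    Nearest-neighbor sampling with border clamping keeps the sketch simple.
    """
    h, w = img.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    sx = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    sy = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    return img[sy, sx]

def interpolate_linear_motion(i0, i1, flow01, t):
    """Approximate the frame at time t in (0, 1) from frames i0, i1
    and the forward flow flow01 (from i0 to i1), assuming linear motion."""
    flow_t0 = -t * flow01          # intermediate flow t -> 0 (approximation)
    flow_t1 = (1.0 - t) * flow01   # intermediate flow t -> 1 (approximation)
    return (1.0 - t) * backward_warp(i0, flow_t0) + t * backward_warp(i1, flow_t1)

# Toy example: a single bright pixel moving 2 px to the right.
i0 = np.zeros((8, 8)); i0[2, 2] = 1.0
i1 = np.zeros((8, 8)); i1[2, 4] = 1.0
flow01 = np.zeros((8, 8, 2)); flow01[..., 0] = 2.0  # constant +2 px in x

mid = interpolate_linear_motion(i0, i1, flow01, t=0.5)
# The pixel lands halfway, at column 3.
```

The scaled-flow approximation holds only where motion is locally linear and the flow is valid at the *target* pixel; near motion boundaries the sampled flow belongs to the wrong object, producing the ghosting artifacts the abstract mentions, which is the motivation for estimating F_t→0 and F_t→1 directly.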
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| MSU Video Frame Interpolation | RIFE | Subjective score | 1.99 | — | Unverified |