URVOS: Unified Referring Video Object Segmentation Network with a Large-Scale Benchmark
Seonguk Seo, Joon-Young Lee, Bohyung Han
Abstract
We propose a unified referring video object segmentation network (URVOS). URVOS takes a video and a referring expression as inputs and estimates the object masks referred to by the expression across all frames of the video. Our algorithm addresses this challenging problem by jointly performing language-based object segmentation and mask propagation in a single deep neural network, using an appropriate combination of two attention models. In addition, we construct the first large-scale referring video object segmentation dataset, Refer-Youtube-VOS. We evaluate our model on two benchmark datasets, including ours, and demonstrate the effectiveness of the proposed approach. The dataset is released at https://github.com/skynbe/Refer-Youtube-VOS.
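The two attention models mentioned above can be illustrated with a minimal NumPy sketch: a cross-modal attention in which spatial features attend to language tokens, and a memory attention in which they attend to features from previously segmented frames. This is an assumption-laden toy (the shapes, the `attend` helper, and the additive fusion are illustrative only), not the paper's actual architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend(queries, keys, values):
    """Scaled dot-product attention: each query row receives a
    convex combination of value rows, weighted by key similarity."""
    scores = queries @ keys.T / np.sqrt(queries.shape[-1])
    return softmax(scores, axis=-1) @ values

# Toy shapes: 16 spatial positions, 8-dim features,
# 5 language tokens, 16 memory (previous-frame) positions.
rng = np.random.default_rng(0)
visual = rng.standard_normal((16, 8))   # current-frame pixel features
lang = rng.standard_normal((5, 8))      # word embeddings of the expression
memory = rng.standard_normal((16, 8))   # features of previously predicted masks

# Cross-modal attention: each pixel gathers language context.
lang_ctx = attend(visual, lang, lang)
# Memory attention: each pixel gathers evidence from earlier frames.
mem_ctx = attend(visual, memory, memory)

# Simple additive fusion of the two attended contexts
# (assumption: the actual fusion in URVOS is learned).
fused = visual + lang_ctx + mem_ctx
```

In this sketch, the language-attended context grounds the referred object per pixel, while the memory-attended context propagates the mask estimate temporally; the paper combines both in a single network rather than as separate post-hoc steps.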