MobileVOS: Real-Time Video Object Segmentation: Contrastive Learning meets Knowledge Distillation
Roy Miles, Mehmet Kerim Yucel, Bruno Manganelli, Albert Saa-Garriga
Status: Unverified (no reproduction has been submitted yet).

Abstract
This paper tackles the problem of semi-supervised video object segmentation on resource-constrained devices, such as mobile phones. We formulate this problem as a distillation task, whereby we demonstrate that small space-time-memory networks with finite memory can achieve results competitive with the state of the art, but at a fraction of the computational cost (32 milliseconds per frame on a Samsung Galaxy S22). Specifically, we provide a theoretically grounded framework that unifies knowledge distillation with supervised contrastive representation learning. These models are able to jointly benefit from both pixel-wise contrastive learning and distillation from a pre-trained teacher. We validate this loss by achieving J&F scores competitive with the state of the art on both the standard DAVIS and YouTube benchmarks, despite running up to 5x faster and with 32x fewer parameters.
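The abstract describes a loss that unifies pixel-wise contrastive learning with distillation from a pre-trained teacher. As a rough illustration only (not the authors' implementation), one common way to realise such a unification is to treat each pixel embedding's softmax-normalised similarity to the other embeddings as a distribution, and have the student match the teacher's distribution. The function name, temperature value, and KL formulation below are illustrative assumptions:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def contrastive_distillation_loss(student, teacher, tau=0.1):
    """Hypothetical sketch: KL divergence between teacher and student
    pairwise-similarity distributions over N pixel embeddings of dim D.

    Matching these distributions transfers the teacher's relational
    structure; with hard one-hot targets the same form reduces to a
    standard contrastive (InfoNCE-style) objective.
    """
    s = student / np.linalg.norm(student, axis=1, keepdims=True)
    t = teacher / np.linalg.norm(teacher, axis=1, keepdims=True)
    p = softmax(t @ t.T / tau)  # teacher target distribution per pixel
    q = softmax(s @ s.T / tau)  # student distribution per pixel
    kl = np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=1)
    return float(np.mean(kl))
```

The loss is zero when the student reproduces the teacher's similarity structure exactly and positive otherwise, so it can be added to the usual segmentation loss as an auxiliary term.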
Tasks
Semi-Supervised Video Object Segmentation
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| DAVIS 2016 | MobileVOS (BL30K) | J&F | 91.4 | — | Unverified |
| DAVIS 2016 | MobileVOS | J&F | 90.6 | — | Unverified |
| DAVIS 2017 (val) | MobileVOS (BL30K) | J&F | 82.3 | — | Unverified |
| DAVIS 2017 (val) | MobileVOS | J&F | 80.2 | — | Unverified |