
ZS-VCOS: Zero-Shot Outperforms Supervised Video Camouflaged Object Segmentation

2025-03-30 · Unpublished (2025) · Code Available

Wenqi Guo, Shan Du


Abstract

Camouflaged object segmentation poses unique challenges compared to traditional segmentation tasks, primarily due to the high similarity in patterns and colors between camouflaged objects and their backgrounds. Effective solutions have significant implications in critical areas such as pest control, defect detection, and lesion segmentation in medical imaging. Prior research has predominantly emphasized supervised or unsupervised pre-training, leaving zero-shot approaches significantly underdeveloped. Existing zero-shot techniques typically apply the Segment Anything Model (SAM) in automatic mode or rely on vision-language models to generate segmentation cues, but their performance remains unsatisfactory. Optical flow, commonly used to detect moving objects, has proven effective even for camouflaged entities. Our method integrates optical flow, a vision-language model, and SAM 2 into a sequential pipeline in which the output of each component provides cues for the next. On the MoCA-Mask dataset, our approach substantially outperforms existing zero-shot methods, raising the mean Intersection-over-Union (mIoU) from 0.273 to 0.561; remarkably, this simple yet effective approach also surpasses supervised methods, increasing mIoU from 0.422 to 0.561. On the MoCA-Filter dataset, it raises the success rate from 0.628 to 0.697 compared with FlowSAM, a supervised transfer method. A thorough ablation study validates the individual contribution of each component.
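The sequential cue-passing structure described in the abstract (optical flow → vision-language model → SAM 2) can be sketched as follows. This is not the authors' implementation: every function here is a placeholder stub (the real pipeline would call an optical-flow network, a VLM, and the SAM 2 video predictor), and the frame-differencing "flow" and box heuristic are assumptions made purely to keep the sketch self-contained and runnable.

```python
import numpy as np

def estimate_optical_flow(prev_frame, frame):
    """Stub for an off-the-shelf optical-flow model.

    Returns a per-pixel motion-magnitude map; here approximated by simple
    frame differencing so the sketch runs without a flow network.
    """
    return np.abs(frame.astype(float) - prev_frame.astype(float)).mean(axis=-1)

def motion_to_boxes(flow_mag, thresh=10.0):
    """Turn high-motion regions into coarse bounding-box cues (hypothetical heuristic)."""
    ys, xs = np.nonzero(flow_mag > thresh)
    if len(ys) == 0:
        return []
    return [(xs.min(), ys.min(), xs.max(), ys.max())]

def vlm_filter_boxes(frame, boxes):
    """Stub for a vision-language model that keeps boxes plausibly containing an object of interest."""
    return boxes  # in this sketch, every candidate box passes

def sam2_segment(frame, boxes):
    """Stub for SAM 2 prompted with box cues; returns a binary mask for the frame."""
    mask = np.zeros(frame.shape[:2], dtype=bool)
    for x0, y0, x1, y1 in boxes:
        mask[y0:y1 + 1, x0:x1 + 1] = True
    return mask

def segment_video(frames):
    """Run the flow -> VLM -> SAM 2 cascade over consecutive frame pairs,
    each stage's output serving as the cue for the next."""
    masks = []
    for prev, cur in zip(frames, frames[1:]):
        flow = estimate_optical_flow(prev, cur)
        boxes = vlm_filter_boxes(cur, motion_to_boxes(flow))
        masks.append(sam2_segment(cur, boxes))
    return masks
```

The point of the sketch is the data flow, not the components: motion evidence proposes regions, the VLM vets them semantically, and SAM 2 turns the surviving prompts into pixel-level masks.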
