SOTAVerified

Unsupervised Video Object Segmentation

The unsupervised scenario assumes that the user does not interact with the algorithm to obtain the segmentation masks. Methods should produce a set of object candidates, with no overlapping pixels, that spans the whole video sequence. This set should contain at least the objects that capture human attention when watching the whole video sequence, i.e., the objects most likely to be followed by human gaze.
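The output format above can be checked mechanically: in every frame, each pixel may belong to at most one object candidate, and the candidates must cover the full sequence. A minimal sketch of such a validity check, assuming masks are stored as a binary array of shape `(num_frames, num_objects, H, W)` (the array layout and function name are illustrative, not part of any benchmark API):

```python
import numpy as np

def check_mask_validity(masks: np.ndarray) -> bool:
    """Check the non-overlap constraint on per-frame object masks.

    masks: binary array of shape (num_frames, num_objects, H, W),
           where masks[t, k] is the mask of object k in frame t.
    Returns True iff, in every frame, each pixel is claimed by
    at most one object candidate.
    """
    # Count how many objects claim each pixel in each frame.
    per_pixel_counts = masks.sum(axis=1)  # shape (num_frames, H, W)
    return bool((per_pixel_counts <= 1).all())
```

With non-overlapping masks the check passes; if two candidates claim the same pixel in the same frame, it fails.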

Papers

Showing 21–30 of 89 papers

Title | Status | Hype
Adaptive Multi-source Predictor for Zero-shot Video Object Segmentation | Code | 1
Reciprocal Transformations for Unsupervised Video Object Segmentation | Code | 1
Guided Slot Attention for Unsupervised Video Object Segmentation | Code | 1
Hierarchical Feature Alignment Network for Unsupervised Video Object Segmentation | Code | 1
Autoencoder-based background reconstruction and foreground segmentation with background noise estimation | Code | 1
Learning Motion and Temporal Cues for Unsupervised Video Object Segmentation | Code | 1
Dual Prototype Attention for Unsupervised Video Object Segmentation | Code | 1
In-N-Out Generative Learning for Dense Unsupervised Video Segmentation | Code | 1
Bootstrapping Objectness from Videos by Relaxed Common Fate and Visual Grouping | Code | 1
UVOSAM: A Mask-free Paradigm for Unsupervised Video Object Segmentation via Segment Anything Model | Code | 1
Page 3 of 9

No leaderboard results yet.