SOTAVerified

Unsupervised Video Object Segmentation

The unsupervised scenario assumes that the user does not interact with the algorithm to obtain the segmentation masks. Methods must produce a set of object candidates with no overlapping pixels that spans the whole video sequence. This set should contain at least the objects that capture human attention when watching the sequence, i.e., the objects most likely to be followed by human gaze.
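The output constraint above (one candidate mask per object per frame, with no two objects claiming the same pixel) can be sketched as a small validity check. This is a hypothetical helper written with NumPy, not part of any benchmark toolkit; the array layout `(num_objects, num_frames, H, W)` is an assumption for illustration.

```python
import numpy as np


def is_valid_prediction(masks: np.ndarray) -> bool:
    """Check a predicted mask set against the unsupervised VOS protocol.

    masks: boolean array of shape (num_objects, num_frames, H, W).
    Hypothetical helper; the layout is assumed, not prescribed by the benchmark.
    """
    # Count how many object candidates claim each pixel in each frame.
    per_pixel_count = masks.astype(np.int32).sum(axis=0)  # (num_frames, H, W)
    # Candidates may not overlap: at most one object per pixel per frame.
    return bool((per_pixel_count <= 1).all())
```

A prediction where two candidate tracks ever share a pixel in the same frame fails this check, while disjoint tracks that each span every frame pass it.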

Papers

Showing 11–20 of 89 papers

Title | Status | Hype
Dense Unsupervised Learning for Video Segmentation | Code | 1
Autoencoder-based background reconstruction and foreground segmentation with background noise estimation | Code | 1
Dual Prototype Attention for Unsupervised Video Object Segmentation | Code | 1
Bootstrapping Objectness from Videos by Relaxed Common Fate and Visual Grouping | Code | 1
Learning Video Object Segmentation from Unlabeled Videos | Code | 1
Guided Slot Attention for Unsupervised Video Object Segmentation | Code | 1
D^2Conv3D: Dynamic Dilated Convolutions for Object Segmentation in Videos | Code | 1
Full-Duplex Strategy for Video Object Segmentation | Code | 1
MAST: A Memory-Augmented Self-supervised Tracker | Code | 1
Page 2 of 9

No leaderboard results yet.