SOTAVerified

Unsupervised Video Object Segmentation

The unsupervised scenario assumes that the user does not interact with the algorithm to obtain the segmentation masks. Methods should provide a set of object candidates, with no overlapping pixels, that spans the whole video sequence. This set should contain at least the objects that capture human attention when watching the whole video sequence, i.e., the objects most likely to be followed by human gaze.
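The non-overlap constraint above can be checked mechanically: in every frame, no pixel may be claimed by more than one object candidate. A minimal sketch of such a check, assuming candidates are stored as a hypothetical binary array of shape (num_objects, num_frames, H, W):

```python
import numpy as np

def validate_candidates(masks):
    """Check the non-overlap constraint on object candidates.

    `masks` is a hypothetical binary array of shape
    (num_objects, num_frames, H, W), where masks[i, t] is
    object i's mask in frame t. Returns True if no pixel is
    claimed by more than one object in any frame.
    """
    # Sum over the object axis; a pixel value > 1 means two
    # or more candidates overlap at that pixel.
    per_pixel_counts = masks.sum(axis=0)
    return bool((per_pixel_counts <= 1).all())

# Two objects over 3 frames of a 4x4 grid, disjoint masks -> valid.
masks = np.zeros((2, 3, 4, 4), dtype=np.uint8)
masks[0, :, :2, :] = 1   # object 0: top half in every frame
masks[1, :, 2:, :] = 1   # object 1: bottom half in every frame
print(validate_candidates(masks))  # True

# Introduce an overlap in frame 1 -> invalid.
masks[1, 1, 1, :] = 1
print(validate_candidates(masks))  # False
```

This only checks the overlap rule; whether the candidates also cover the attention-grabbing objects is an evaluation question, not a format check.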

Papers

Showing 11–20 of 89 papers

Title | Status | Hype
Treating Motion as Option to Reduce Motion Dependency in Unsupervised Video Object Segmentation | Code | 1
Hierarchical Feature Alignment Network for Unsupervised Video Object Segmentation | Code | 1
In-N-Out Generative Learning for Dense Unsupervised Video Segmentation | Code | 1
Autoencoder-based background reconstruction and foreground segmentation with background noise estimation | Code | 1
D^2Conv3D: Dynamic Dilated Convolutions for Object Segmentation in Videos | Code | 1
Dense Unsupervised Learning for Video Segmentation | Code | 1
Multi-Source Fusion and Automatic Predictor Selection for Zero-Shot Video Object Segmentation | Code | 1
Full-Duplex Strategy for Video Object Segmentation | Code | 1
Reciprocal Transformations for Unsupervised Video Object Segmentation | Code | 1
Page 2 of 9

No leaderboard results yet.