Smoothing Slot Attention Iterations and Recurrences
Rongzhen Zhao, Wenyan Yang, Juho Kannala, Joni Pajarinen
Abstract
Slot Attention (SA) and its variants lie at the heart of mainstream Object-Centric Learning (OCL). Objects in an image can be aggregated into respective slot vectors by iteratively refining cold-start query vectors, typically three times, via SA on image features. For video, such aggregation is recurrently shared across frames, with queries cold-started on the first frame and transitioned from the previous frame's slots on non-first frames. However, cold-start queries lack sample-specific cues and thus hinder precise aggregation on the image or the video's first frame; moreover, non-first frames' queries are already sample-specific and thus require transforms different from the first frame's aggregation. We address these issues for the first time with our SmoothSA: (1) to smooth SA iterations on the image or the video's first frame, we preheat the cold-start queries with rich information from the input features, via a tiny module self-distilled inside OCL; (2) to smooth SA recurrences across all video frames, we differentiate the homogeneous transforms on the first and non-first frames, by using full and single iterations respectively. Comprehensive experiments on object discovery, recognition and downstream benchmarks validate our method's effectiveness. Further analyses intuitively illuminate how our method smooths SA iterations and recurrences. Our source code, model checkpoints and training logs are available at https://github.com/Genera1Z/SmoothSA.
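To make the iterative-refinement step concrete, the following is a minimal NumPy sketch of the SA loop the abstract refers to: query (slot) vectors are repeatedly refined against image features, typically for three iterations. This is a simplified illustration, not the paper's implementation; the learned projections and the GRU/MLP slot update of the original Slot Attention, as well as SmoothSA's query preheating, are omitted here.

```python
import numpy as np

def softmax(x, axis):
    # numerically stable softmax along the given axis
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def slot_attention(features, slots, n_iters=3):
    """Simplified Slot Attention loop (illustrative sketch).

    features: (N, D) image feature tokens.
    slots:    (K, D) query vectors, e.g. cold-start queries on a first frame.
    Returns refined slots of shape (K, D).
    """
    for _ in range(n_iters):
        # similarity between every token and every slot
        logits = features @ slots.T / np.sqrt(features.shape[1])  # (N, K)
        # softmax over the slot axis, so tokens compete for slots
        attn = softmax(logits, axis=1)
        # per-slot weighted mean of the features
        weights = attn / (attn.sum(axis=0, keepdims=True) + 1e-8)  # (N, K)
        slots = weights.T @ features  # (K, D)
    return slots

rng = np.random.default_rng(0)
feats = rng.normal(size=(16, 8))   # 16 feature tokens, dim 8
queries = rng.normal(size=(4, 8))  # 4 cold-start slot queries
out = slot_attention(feats, queries)
print(out.shape)  # (4, 8)
```

In this framing, SmoothSA's first contribution replaces the random cold-start `queries` with preheated, input-conditioned ones, and its second runs the full `n_iters` loop only on a video's first frame while non-first frames use a single iteration.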