MSSSeg: Learning Multi-Scale Structural Complexity for Self-Supervised Segmentation

2026-03-14

Haotang Li, Zhenyu Qi, Hao Qin, Huanrui Yang, Kebin Peng, Qing Guo, Sen He

Abstract

Self-supervised semantic segmentation methods often suffer from structural errors, including merging distinct objects or fragmenting coherent regions, because they rely primarily on low-level appearance cues such as color and texture. These cues lack structural discriminability: they carry no information about the structural organization of a region, making it difficult to distinguish boundaries between similar-looking objects or maintain coherence within internally varying regions. Recent approaches attempt to address this by incorporating depth priors, yet remain limited by not explicitly modeling structural complexity that persists even when appearance cues are ambiguous. To bridge this gap, we present MSSSeg, a framework that explicitly learns multi-scale structural complexity from both semantic and depth domains, via three coupled components: (1) a Differentiable Box-Counting (DBC) module that captures and aligns multi-scale structural complexity features with semantic features; (2) a Learnable Structural Augmentation (StructAug) that corrupts pixel-intensity patterns, forcing the network to rely on structural complexity features from DBC; and (3) a Persistent Homology Loss (PHLoss) that directly supervises the structural complexity of predicted segmentations. Extensive experiments demonstrate that MSSSeg achieves new state-of-the-art performance on COCO-Stuff-27, Cityscapes, and Potsdam without excessive computational overhead, validating that explicit structural complexity learning is crucial for self-supervised segmentation.
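The abstract's central idea — measuring multi-scale structural complexity via box counting — can be illustrated with the classical (non-differentiable) algorithm. The sketch below is a hypothetical NumPy illustration, not the paper's DBC module: it hard-counts occupied boxes of a binary mask at several grid sizes and estimates the fractal dimension from the log-log slope, whereas the paper's module replaces the hard count with a differentiable relaxation so it can be trained end to end. The function names and box sizes are assumptions for illustration.

```python
import numpy as np

def box_count(mask: np.ndarray, box_sizes=(1, 2, 4, 8)):
    """Count occupied s x s boxes of a binary 2D mask at each scale.

    Hypothetical illustration of classical box-counting; a DBC-style
    module would soften the hard max/count into differentiable ops.
    """
    counts = []
    h, w = mask.shape
    for s in box_sizes:
        # trim so the grid tiles evenly, then view the mask as a grid of boxes
        hh, ww = h - h % s, w - w % s
        boxes = mask[:hh, :ww].reshape(hh // s, s, ww // s, s)
        occupied = boxes.max(axis=(1, 3))  # a box counts if any pixel is set
        counts.append(int(occupied.sum()))
    return counts

def fractal_dimension(mask: np.ndarray, box_sizes=(1, 2, 4, 8)):
    # structural complexity estimate: slope of log N(s) versus log(1/s)
    counts = box_count(mask, box_sizes)
    x = -np.log(np.asarray(box_sizes, dtype=float))
    y = np.log(np.asarray(counts, dtype=float))
    return float(np.polyfit(x, y, 1)[0])
```

For a fully occupied 16x16 mask the counts are 256, 64, 16, and 4, giving a slope of 2.0, the dimension of a filled plane; sparser, more fragmented masks yield lower values, which is the complexity signal the DBC module aligns with semantic features.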