Robust Self-Supervised Cross-Modal Super-Resolution against Real-World Misaligned Observations
Xiaoyu Dong, Jiahuan Li, Ziteng Cui, Naoto Yokoya
Abstract
Cross-modal super-resolution (SR) on real-world misaligned data is challenging, as only unlabeled low-resolution (LR) source and high-resolution (HR) guide images with complex spatial misalignment are available. Previous methods either rely on fully simulated training data or adopt suboptimal alignment strategies that overlook cross-modal dependencies, limiting their performance in practice. To address these issues, we propose RobSelf, a self-supervised model that jointly optimizes a misalignment-aware feature translator and a content-aware reference filter online. The translator resolves unsupervised cross-modal and cross-resolution alignment via weakly supervised, misalignment-aware translation, yielding an aligned guide feature. Guided by this feature, the filter performs reference-based discriminative self-enhancement on the source, enabling SR prediction with high resolution and high fidelity. Experiments on synthesized data and on real-world data we collected demonstrate that RobSelf achieves state-of-the-art performance, outperforming existing self-supervised and supervised methods. Moreover, it is more efficient, running up to 15.3× faster than prior self-supervised methods.
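To make the two-module pipeline concrete, below is a minimal, hypothetical sketch of the workflow the abstract describes: a translator that maps the misaligned HR guide into a feature space aligned with the LR source, and a filter that predicts the HR output under that feature's guidance, with both modules optimized jointly online on a single image pair. All module names, layer choices, shapes, and the downsampling-consistency loss are assumptions for illustration; the paper's actual architecture and objectives may differ.

```python
# Hypothetical sketch of the RobSelf-style pipeline; not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FeatureTranslator(nn.Module):
    """Assumed misalignment-aware translator: maps the HR guide into a
    feature space aligned with the LR source (cross-modal + cross-resolution)."""
    def __init__(self, ch=64):
        super().__init__()
        self.encode_guide = nn.Sequential(nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
                                          nn.Conv2d(ch, ch, 3, padding=1))
        self.encode_source = nn.Sequential(nn.Conv2d(1, ch, 3, padding=1), nn.ReLU())
        self.fuse = nn.Conv2d(2 * ch, ch, 3, padding=1)

    def forward(self, source_lr, guide_hr):
        src_up = F.interpolate(source_lr, size=guide_hr.shape[-2:],
                               mode="bilinear", align_corners=False)
        g = self.encode_guide(guide_hr)
        s = self.encode_source(src_up)
        # Conditioning the guide features on the upsampled source is one way
        # to let the translator account for spatial misalignment.
        return self.fuse(torch.cat([g, s], dim=1))


class ReferenceFilter(nn.Module):
    """Assumed content-aware reference filter: enhances the source under the
    guidance of the aligned guide feature to predict the HR output."""
    def __init__(self, ch=64):
        super().__init__()
        self.head = nn.Conv2d(1, ch, 3, padding=1)
        self.body = nn.Sequential(nn.Conv2d(2 * ch, ch, 3, padding=1), nn.ReLU(),
                                  nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
        self.tail = nn.Conv2d(ch, 1, 3, padding=1)

    def forward(self, source_lr, aligned_guide_feat):
        src_up = F.interpolate(source_lr, size=aligned_guide_feat.shape[-2:],
                               mode="bilinear", align_corners=False)
        x = torch.cat([self.head(src_up), aligned_guide_feat], dim=1)
        return self.tail(self.body(x))


# Joint online optimization on one (source, guide) pair, in the spirit of
# self-supervised per-image training; the loss term here is a placeholder.
translator, filt = FeatureTranslator(), ReferenceFilter()
opt = torch.optim.Adam(list(translator.parameters()) + list(filt.parameters()),
                       lr=1e-4)

source_lr = torch.rand(1, 1, 32, 32)    # e.g. a single-channel LR source
guide_hr = torch.rand(1, 3, 128, 128)   # an HR RGB guide, spatially misaligned

for _ in range(100):
    guide_feat = translator(source_lr, guide_hr)
    sr_pred = filt(source_lr, guide_feat)
    # Self-supervised consistency: the downsampled prediction should
    # reproduce the observed LR source.
    loss = F.l1_loss(F.interpolate(sr_pred, size=source_lr.shape[-2:],
                                   mode="bilinear", align_corners=False),
                     source_lr)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The single downsampling-consistency loss above is only a stand-in: the abstract's "weakly-supervised, misalignment-aware translation" implies additional alignment-specific objectives on the translator, whose exact form is not given here.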