Instance Data Condensation for Image Super-Resolution

2026-03-06

Tianhao Peng, Ho Man Kwan, Yuxuan Jiang, Ge Gao, Fan Zhang, Xiaozhong Xu, Shan Liu, David Bull


Abstract

Deep-learning-based Image Super-Resolution (ISR) relies on large training datasets to achieve good model generalization, which demands substantial computational and storage resources during training. While dataset condensation (DC) has shown potential for improving data efficiency in high-level computer vision tasks, adapting these methods to ISR is not straightforward due to the different requirements of ISR training, including the use of unlabeled datasets and high-resolution images with fine details. In this paper, we propose a novel Instance Data Condensation (IDC) framework specifically for ISR, which performs data condensation in a per-image manner and addresses the limitations of directly applying existing DC methods to the ISR task. The IDC framework builds on two novel methods, Random Local Fourier Feature Extraction and Multi-level Feature Distribution Matching, which are designed to generate high-quality synthesized content by aligning its feature distributions with those of the original high-resolution training samples at both global and local levels. We use this framework to condense the most commonly used training dataset for ISR, DIV2K, at a 10% condensation rate. The resulting synthetic dataset offers performance comparable to the original full dataset and excellent training stability when used to train various popular ISR models. To the best of our knowledge, this is the first time that a condensed/synthetic dataset (at 10% of the original data volume) has demonstrated such performance.
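The abstract names two components, Random Local Fourier Feature Extraction and feature distribution matching, without giving their exact formulation. As a rough illustration only, the sketch below pairs a standard random Fourier feature projection (in the Rahimi–Recht style) with a simple moment-matching loss over patch features; all function names, the feature dimension, and the loss form are assumptions, not the paper's actual method.

```python
import numpy as np

def random_fourier_features(patches, num_features=64, sigma=1.0, rng=None):
    """Project flattened image patches onto random Fourier features,
    approximating an RBF kernel feature map.

    patches: array of shape (n_patches, patch_dim)
    returns: array of shape (n_patches, num_features)
    """
    rng = np.random.default_rng(0) if rng is None else rng
    patch_dim = patches.shape[1]
    # Random projection directions and phase offsets.
    W = rng.normal(scale=1.0 / sigma, size=(patch_dim, num_features))
    b = rng.uniform(0.0, 2 * np.pi, size=num_features)
    return np.sqrt(2.0 / num_features) * np.cos(patches @ W + b)

def distribution_matching_loss(real_feats, synth_feats):
    """Compare the mean feature vectors of real vs. synthetic patches
    (a first-moment surrogate for distribution matching)."""
    diff = real_feats.mean(axis=0) - synth_feats.mean(axis=0)
    return float(np.sum(diff ** 2))

# Toy usage: extract features from random "patches" and score a synthetic set.
rng = np.random.default_rng(1)
real_patches = rng.normal(size=(32, 27))    # e.g. 3x3x3 patches, flattened
synth_patches = rng.normal(size=(32, 27))
real_f = random_fourier_features(real_patches, rng=rng)
synth_f = random_fourier_features(synth_patches, rng=rng)
loss = distribution_matching_loss(real_f, synth_f)
```

In a condensation loop, one would optimize the synthetic patches to minimize such a loss against features of the original high-resolution images, at both global and local scales as the abstract describes.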
