
Activating More Pixels in Image Super-Resolution Transformer

2022-05-09 · CVPR 2023 · Code Available

Xiangyu Chen, Xintao Wang, Jiantao Zhou, Yu Qiao, Chao Dong


Abstract

Transformer-based methods have shown impressive performance in low-level vision tasks, such as image super-resolution. However, we find that these networks can only utilize a limited spatial range of input information through attribution analysis. This implies that the potential of Transformer is still not fully exploited in existing networks. In order to activate more input pixels for better reconstruction, we propose a novel Hybrid Attention Transformer (HAT). It combines both channel attention and window-based self-attention schemes, thus making use of their complementary advantages of being able to utilize global statistics and strong local fitting capability. Moreover, to better aggregate the cross-window information, we introduce an overlapping cross-attention module to enhance the interaction between neighboring window features. In the training stage, we additionally adopt a same-task pre-training strategy to exploit the potential of the model for further improvement. Extensive experiments show the effectiveness of the proposed modules, and we further scale up the model to demonstrate that the performance of this task can be greatly improved. Our overall method significantly outperforms the state-of-the-art methods by more than 1dB. Codes and models are available at https://github.com/XPixelGroup/HAT.
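The abstract describes combining channel attention (which exploits global statistics) with window-based self-attention (which has strong local fitting capability). A minimal NumPy sketch of that idea follows; it is not the authors' implementation. The window size, the sigmoid gating, and the weighting factor `alpha` are illustrative assumptions, and the paper's overlapping cross-attention module and learned projections are omitted entirely.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def window_self_attention(x, window=4):
    # x: (H, W, C). Partition into non-overlapping windows and let each
    # pixel attend only to pixels inside its own window.
    H, W, C = x.shape
    out = np.zeros_like(x)
    for i in range(0, H, window):
        for j in range(0, W, window):
            win = x[i:i + window, j:j + window].reshape(-1, C)  # (window*window, C)
            attn = softmax(win @ win.T / np.sqrt(C))            # token-token similarity
            out[i:i + window, j:j + window] = (attn @ win).reshape(window, window, C)
    return out

def channel_attention(x):
    # Squeeze-and-excitation-style gate from per-channel global statistics:
    # global average pool -> sigmoid -> rescale each channel.
    gate = 1.0 / (1.0 + np.exp(-x.mean(axis=(0, 1))))  # (C,)
    return x * gate

def hybrid_block(x, alpha=0.01):
    # Schematic HAT-style combination: window self-attention plus a weighted
    # channel-attention branch, wrapped in a residual connection.
    # alpha is an assumed illustrative weight, not a value from the paper.
    return x + window_self_attention(x) + alpha * channel_attention(x)
```

The point of the sketch is the complementarity: the window branch mixes information spatially within a local neighborhood, while the channel branch modulates every pixel using statistics pooled over the whole image.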

Tasks

Image Super-Resolution

Benchmark Results

Dataset                   Model   Metric   Claimed   Verified   Status
BSD100 - 2x upscaling     HAT-L   PSNR     32.74     -          Unverified
BSD100 - 2x upscaling     HAT     PSNR     32.69     -          Unverified
BSD100 - 3x upscaling     HAT     PSNR     29.59     -          Unverified
BSD100 - 3x upscaling     HAT-L   PSNR     29.63     -          Unverified
BSD100 - 4x upscaling     HAT-L   PSNR     28.09     -          Unverified
BSD100 - 4x upscaling     HAT     PSNR     28.05     -          Unverified
Manga109 - 2x upscaling   HAT-L   PSNR     41.01     -          Unverified
Manga109 - 2x upscaling   HAT     PSNR     40.71     -          Unverified
Manga109 - 3x upscaling   HAT     PSNR     35.84     -          Unverified
Manga109 - 3x upscaling   HAT-L   PSNR     36.02     -          Unverified
Manga109 - 4x upscaling   HAT-L   SSIM     0.93      -          Unverified
Manga109 - 4x upscaling   HAT     SSIM     0.93      -          Unverified
Set14 - 2x upscaling      HAT-L   PSNR     35.29     -          Unverified
Set14 - 2x upscaling      HAT     PSNR     35.13     -          Unverified
Set14 - 3x upscaling      HAT     PSNR     31.33     -          Unverified
Set14 - 3x upscaling      HAT-L   PSNR     31.47     -          Unverified
Set14 - 4x upscaling      HAT-L   PSNR     29.47     -          Unverified
Set14 - 4x upscaling      HAT     PSNR     29.38     -          Unverified
Set5 - 2x upscaling       HAT-L   PSNR     38.91     -          Unverified
Set5 - 2x upscaling       HAT     PSNR     38.73     -          Unverified
Set5 - 3x upscaling       HAT     PSNR     35.16     -          Unverified
Set5 - 3x upscaling       HAT-L   PSNR     35.28     -          Unverified
Set5 - 4x upscaling       HAT-L   PSNR     33.3      -          Unverified
Urban100 - 2x upscaling   HAT-L   PSNR     35.09     -          Unverified
Urban100 - 2x upscaling   HAT     PSNR     34.81     -          Unverified
Urban100 - 3x upscaling   HAT-L   PSNR     30.92     -          Unverified
Urban100 - 3x upscaling   HAT     PSNR     30.7      -          Unverified
Urban100 - 4x upscaling   HAT     PSNR     28.37     -          Unverified
Urban100 - 4x upscaling   HAT-L   PSNR     28.6      -          Unverified
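Most of the claimed numbers above are PSNR values in dB, which anyone attempting a reproduction would compute from the ground-truth and reconstructed images. A minimal sketch of the basic definition follows; note that super-resolution papers conventionally evaluate PSNR on the Y channel of YCbCr after cropping a border equal to the scale factor, which this sketch does not do.

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    # Peak signal-to-noise ratio in dB between a ground-truth image `ref`
    # and a reconstruction `test`, both arrays with values in [0, max_val].
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

For example, a reconstruction that is uniformly off by 16 gray levels from an 8-bit ground truth scores about 24 dB; the ~28-41 dB range in the table corresponds to much smaller average errors.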

Reproductions

No reproductions have been submitted yet.