WaveMixSR-V2: Enhancing Super-resolution with Higher Efficiency
Pranav Jeevan, Neeraj Nixon, Amit Sethi
Code: https://github.com/pranavphoenix/WaveMixSR (PyTorch)
Abstract
Recent advancements in single image super-resolution have been predominantly driven by token mixers and transformer architectures. WaveMixSR utilized the WaveMix architecture, employing a two-dimensional discrete wavelet transform for spatial token mixing, to achieve superior performance in super-resolution tasks with remarkable resource efficiency. In this work, we present an enhanced version of the WaveMixSR architecture by (1) replacing the traditional transpose convolution layer with a pixel shuffle operation and (2) implementing a multistage design for higher-resolution tasks (4×). Our experiments demonstrate that our enhanced model -- WaveMixSR-V2 -- outperforms other architectures in multiple super-resolution tasks, achieving state-of-the-art results on the BSD100 dataset, while consuming fewer resources and exhibiting higher parameter efficiency, lower latency, and higher throughput. Our code is available at https://github.com/pranavphoenix/WaveMixSR.
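The two changes named in the abstract can be illustrated in a few lines of PyTorch. The sketch below is a minimal, hypothetical upsampling block assuming a conv-then-pixel-shuffle layout (the standard sub-pixel convolution pattern); the actual WaveMixSR-V2 layer configuration is in the linked repository. The multistage 4× path is shown by chaining two 2× stages, mirroring the multistage design the abstract describes.

```python
import torch
import torch.nn as nn


class PixelShuffleUpsample(nn.Module):
    """Sub-pixel upsampling: a conv expands channels by scale**2, then
    nn.PixelShuffle rearranges those channels into a (scale x scale)
    spatial grid -- replacing a transpose convolution.
    (Illustrative block, not the exact WaveMixSR-V2 layer.)"""

    def __init__(self, in_channels: int, out_channels: int, scale: int = 2):
        super().__init__()
        self.conv = nn.Conv2d(
            in_channels, out_channels * scale ** 2, kernel_size=3, padding=1
        )
        self.shuffle = nn.PixelShuffle(scale)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.shuffle(self.conv(x))


# Multistage 4x upscaling as two chained 2x stages
# (hypothetical composition following the paper's multistage idea).
stage1 = PixelShuffleUpsample(64, 64, scale=2)
stage2 = PixelShuffleUpsample(64, 64, scale=2)

x = torch.randn(1, 64, 32, 32)   # low-resolution feature map
y2 = stage1(x)                   # 2x: spatial size 64 x 64
y4 = stage2(y2)                  # 4x: spatial size 128 x 128
```

Compared with a transpose convolution, pixel shuffle avoids the checkerboard artifacts of overlapping deconvolution kernels and moves the upsampling cost into an ordinary convolution, which is generally cheaper and more parameter-efficient.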
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| BSD100 - 2x upscaling | WaveMixSR-V2 | PSNR (dB) | 33.12 | — | Unverified |
| BSD100 - 4x upscaling | WaveMixSR-V2 | PSNR (dB) | 27.87 | — | Unverified |