Retinexformer: One-stage Retinex-based Transformer for Low-light Image Enhancement
Yuanhao Cai, Hao Bian, Jing Lin, Haoqian Wang, Radu Timofte, Yulun Zhang
- github.com/caiyuanhao1998/retinexformer (official, in paper; PyTorch, ★ 1,420)
- github.com/cmhungsteve/Awesome-Transformer-Attention (PyTorch, ★ 5,024)
- github.com/DmitryRyumin/ICCV-2023-Papers (★ 968)
- github.com/lcybuzz/Low-Level-Vision-Paper-Record (TensorFlow, ★ 547)
- github.com/dawnlh/awesome-low-light-image-enhancement (PyTorch, ★ 31)
Abstract
When enhancing low-light images, many deep learning algorithms are based on the Retinex theory. However, the Retinex model does not consider the corruptions hidden in the dark or introduced by the light-up process. Besides, these methods usually require a tedious multi-stage training pipeline and rely on convolutional neural networks, showing limitations in capturing long-range dependencies. In this paper, we formulate a simple yet principled One-stage Retinex-based Framework (ORF). ORF first estimates the illumination information to light up the low-light image and then restores the corruption to produce the enhanced image. We design an Illumination-Guided Transformer (IGT) that utilizes illumination representations to direct the modeling of non-local interactions of regions with different lighting conditions. By plugging IGT into ORF, we obtain our algorithm, Retinexformer. Comprehensive quantitative and qualitative experiments demonstrate that our Retinexformer significantly outperforms state-of-the-art methods on thirteen benchmarks. The user study and application on low-light object detection also reveal the latent practical values of our method. Code, models, and results are available at https://github.com/caiyuanhao1998/Retinexformer
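The two-step ORF idea from the abstract (estimate illumination to light up the image, then restore corruptions) can be sketched as below. This is a minimal NumPy illustration, not the paper's implementation: the illumination prior (`estimate_light_up_map`, using a channel-max heuristic) and the omitted learned restorer are assumptions for illustration; in Retinexformer both the estimator and the restorer (the IGT) are learned networks.

```python
import numpy as np

def estimate_light_up_map(img, eps=1e-4):
    # Hypothetical hand-crafted prior: the per-pixel max over RGB channels
    # approximates the illumination; the light-up map is its clipped reciprocal.
    illum = img.max(axis=-1, keepdims=True)
    return 1.0 / np.clip(illum, eps, 1.0)

def light_up(img):
    # Stage 1 of an ORF-style pipeline: element-wise brightening.
    # Stage 2 (omitted) would be a learned restorer, e.g. the IGT, that removes
    # the noise and artifacts exposed or amplified by lighting up.
    lu_map = estimate_light_up_map(img)
    return np.clip(img * lu_map, 0.0, 1.0)

low = np.full((4, 4, 3), 0.2)  # a uniformly dark image in [0, 1]
lit = light_up(low)
print(lit.max())  # → 1.0 (the dark image is fully lit up)
```

The one-stage framing means both steps run in a single forward pass; there is no separate decomposition network trained first, which is what removes the multi-stage training pipeline the abstract criticizes.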
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| MIT-Adobe 5k | Retinexformer | PSNR on sRGB | 24.94 | — | Unverified |