From Patches to Pictures (PaQ-2-PiQ): Mapping the Perceptual Space of Picture Quality
Zhenqiang Ying, Haoran Niu, Praful Gupta, Dhruv Mahajan, Deepti Ghadiyaram, Alan Bovik
Code
- github.com/baidut/PaQ-2-PiQ (PyTorch)
- github.com/fastiqa/fastiqa (PyTorch)
Abstract
Blind or no-reference (NR) perceptual picture quality prediction is a difficult, unsolved problem of great consequence to the social and streaming media industries that impacts billions of viewers daily. Unfortunately, popular NR prediction models perform poorly on real-world distorted pictures. To advance progress on this problem, we introduce the largest (by far) subjective picture quality database, containing about 40000 real-world distorted pictures and 120000 patches, on which we collected about 4M human judgments of picture quality. Using these picture and patch quality labels, we built deep region-based architectures that learn to produce state-of-the-art global picture quality predictions as well as useful local picture quality maps. Our innovations include picture quality prediction architectures that produce global-to-local inferences as well as local-to-global inferences (via feedback).
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| MSU NR VQA Database | PaQ-2-PiQ | SROCC | 0.87 | — | Unverified |
| MSU SR-QA Dataset | PaQ-2-PiQ | SROCC | 0.71 | — | Unverified |
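The SROCC numbers above measure how well the model's predicted scores rank-order pictures against human opinion scores. A minimal pure-Python computation (ties ignored for simplicity; real evaluations use average ranks, e.g. `scipy.stats.spearmanr`):

```python
def srocc(pred, mos):
    """Spearman rank-order correlation between predictions and mean
    opinion scores: Pearson correlation computed on the ranks."""
    def ranks(xs):
        order = sorted(range(len(xs)), key=lambda i: xs[i])
        r = [0.0] * len(xs)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r

    rp, rm = ranks(pred), ranks(mos)
    n = len(pred)
    mp, mm = sum(rp) / n, sum(rm) / n
    cov = sum((a - mp) * (b - mm) for a, b in zip(rp, rm))
    sp = sum((a - mp) ** 2 for a in rp) ** 0.5
    sm = sum((b - mm) ** 2 for b in rm) ** 0.5
    return cov / (sp * sm)

# Perfectly monotonic predictions score ~1.0 even if the scales differ.
print(srocc([0.1, 0.4, 0.2, 0.9], [10, 55, 30, 80]))
```

SROCC is invariant to any monotonic rescaling of the predictions, which is why it is the standard metric for comparing quality models trained on different score ranges.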