
The Unreasonable Effectiveness of Deep Features as a Perceptual Metric

2018-01-11 · CVPR 2018 · Code Available

Richard Zhang, Phillip Isola, Alexei A. Efros, Eli Shechtman, Oliver Wang

Abstract

While it is nearly effortless for humans to quickly assess the perceptual similarity between two images, the underlying processes are thought to be quite complex. Despite this, the most widely used perceptual metrics today, such as PSNR and SSIM, are simple, shallow functions, and fail to account for many nuances of human perception. Recently, the deep learning community has found that features of the VGG network trained on ImageNet classification have been remarkably useful as a training loss for image synthesis. But how perceptual are these so-called "perceptual losses"? What elements are critical for their success? To answer these questions, we introduce a new dataset of human perceptual similarity judgments. We systematically evaluate deep features across different architectures and tasks and compare them with classic metrics. We find that deep features outperform all previous metrics by large margins on our dataset. More surprisingly, this result is not restricted to ImageNet-trained VGG features, but holds across different deep architectures and levels of supervision (supervised, self-supervised, or even unsupervised). Our results suggest that perceptual similarity is an emergent property shared across deep visual representations.
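For readers who want to try the metric directly, the authors released it as the `lpips` package on PyPI; a minimal sketch of scoring an image pair follows (the [-1, 1] input range and tensor layout match the package's documented convention, and the random images are placeholders, not benchmark data).

```python
# Minimal sketch: scoring perceptual similarity with the lpips package
# (pip install lpips). Inputs are RGB tensors of shape (N, 3, H, W),
# scaled to [-1, 1] per the package convention; the images here are
# random placeholders.
import torch
import lpips

# 'alex' is the lightweight default; 'vgg' matches the features commonly
# used as a "perceptual loss" in image synthesis.
loss_fn = lpips.LPIPS(net='alex')

img0 = torch.rand(1, 3, 64, 64) * 2 - 1
img1 = torch.rand(1, 3, 64, 64) * 2 - 1

with torch.no_grad():
    d = loss_fn(img0, img1)  # lower distance = more perceptually similar
print(d.item())
```

The distance is differentiable, which is why the same function can serve both as an evaluation metric and as a training loss for image synthesis, as the abstract notes.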

Tasks

Benchmark Results

Dataset               Model          Metric   Claimed   Verified   Status
MSU FR VQA Database   LPIPS          SRCC     0.75      —          Unverified
MSU SR-QA Dataset     LPIPS (Alex)   SROCC    0.54      —          Unverified
MSU SR-QA Dataset     LPIPS (VGG)    SROCC    0.53      —          Unverified
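SRCC/SROCC in the table is the Spearman rank-order correlation between the metric's outputs and human opinion scores; a verification run would recompute it roughly as sketched below (the arrays are illustrative placeholders, not values from either MSU dataset).

```python
# Sketch of the SROCC check behind the table above: rank-correlate LPIPS
# distances against human mean opinion scores (MOS). The arrays are
# illustrative placeholders, not actual MSU benchmark data.
from scipy.stats import spearmanr

lpips_distances = [0.12, 0.34, 0.08, 0.51, 0.27]  # metric output per pair
human_mos       = [4.1,  2.9,  4.5,  1.8,  3.2 ]  # higher = judged better

# LPIPS is a distance (lower = more similar) while MOS is a quality score,
# so a good metric yields a strongly negative Spearman correlation;
# benchmark tables typically report its magnitude.
rho, _ = spearmanr(lpips_distances, human_mos)
print(abs(rho))
```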

Reproductions