SOTAVerified

Intriguing properties of generative classifiers

2023-09-28 · Code Available

Priyank Jaini, Kevin Clark, Robert Geirhos


Abstract

What is the best paradigm to recognize objects -- discriminative inference (fast but potentially prone to shortcut learning) or using a generative model (slow but potentially more robust)? We build on recent advances in generative modeling that turn text-to-image models into classifiers. This allows us to study their behavior and to compare them against discriminative models and human psychophysical data. We report four intriguing emergent properties of generative classifiers: they show a record-breaking human-like shape bias (99% for Imagen), near human-level out-of-distribution accuracy, state-of-the-art alignment with human classification errors, and they understand certain perceptual illusions. Our results indicate that while the current dominant paradigm for modeling human object recognition is discriminative inference, zero-shot generative models approximate human object recognition data surprisingly well.
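The core mechanism behind the paper's classifiers is scoring an image under each class-conditional text prompt with the generative model and picking the best-scoring class. Below is a minimal, hedged sketch of that idea for a diffusion model: the class whose prompt yields the lowest expected denoising loss wins. The function name `denoise_loss_fn`, the prompt strings, and the sampling details are illustrative stand-ins, not the paper's implementation.

```python
import numpy as np

def generative_classify(denoise_loss_fn, image, class_prompts, n_samples=8, seed=0):
    """Classify `image` by comparing class-conditional generative scores.

    For each candidate prompt, average the diffusion denoising loss over
    random timesteps and noise draws; lower expected loss means the model
    assigns the image higher likelihood under that class. Returns the
    index of the winning prompt.

    `denoise_loss_fn(image, prompt, t, noise)` is a placeholder for one
    denoising-loss evaluation of a pretrained text-to-image model.
    """
    rng = np.random.default_rng(seed)
    scores = []
    for prompt in class_prompts:
        losses = []
        for _ in range(n_samples):
            t = rng.uniform(0.0, 1.0)                 # random diffusion timestep
            noise = rng.standard_normal(image.shape)  # random noise draw
            losses.append(denoise_loss_fn(image, prompt, t, noise))
        scores.append(float(np.mean(losses)))
    return int(np.argmin(scores))  # class with lowest expected denoising loss
```

In practice the expensive part is that every class requires its own batch of denoising evaluations, which is why the abstract calls generative inference slow relative to a single discriminative forward pass.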

Benchmark Results

Dataset     Model             Metric      Claimed  Verified  Status
shape bias  Imagen            shape bias  98.7               Unverified
shape bias  Stable Diffusion  shape bias  92.7               Unverified
shape bias  Parti             shape bias  91.7               Unverified
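The "shape bias" metric above is standardly computed on cue-conflict images, where shape and texture point to different classes: among the model's answers that match either cue, it is the fraction that matches the shape class. A small sketch of that computation (variable names are illustrative):

```python
def shape_bias(predictions, shape_labels, texture_labels):
    """Fraction of shape decisions among cue-matching decisions.

    On cue-conflict images the shape class and texture class differ;
    predictions matching neither cue are ignored. Returns a value in
    [0, 1], or NaN if no prediction matched either cue.
    """
    shape_hits = sum(p == s for p, s in zip(predictions, shape_labels))
    texture_hits = sum(p == t for p, t in zip(predictions, texture_labels))
    total = shape_hits + texture_hits
    return shape_hits / total if total else float("nan")
```

Under this definition, a value near 99 (as claimed for Imagen) means the model almost always sides with shape over texture when the two cues conflict, which is the human-like behavior the abstract highlights.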

Reproductions