VLMDiff: Leveraging Vision-Language Models for Multi-Class Anomaly Detection with Diffusion

2025-11-11

Samet Hicsonmez, Abd El Rahman Shabayek, Djamila Aouada


Abstract

Detecting visual anomalies in diverse, multi-class real-world images is a significant challenge. We introduce VLMDiff, a novel unsupervised multi-class visual anomaly detection framework that integrates a Latent Diffusion Model (LDM) with a Vision-Language Model (VLM) for enhanced anomaly localization and detection. Specifically, a pre-trained VLM prompted with a simple instruction extracts detailed image descriptions, which serve as additional conditioning for LDM training. Current diffusion-based methods rely on synthetic noise generation, which limits their generalization and requires per-class model training, hindering scalability. VLMDiff, in contrast, leverages VLMs to obtain captions of normal images without manual annotations or additional training. These descriptions condition the diffusion model, enabling it to learn a robust feature representation of normal images for multi-class anomaly detection. Our method achieves competitive performance, improving the pixel-level Per-Region-Overlap (PRO) metric by up to 25 points on the Real-IAD dataset and 8 points on the COCO-AD dataset, outperforming state-of-the-art diffusion-based approaches. Code is available at https://github.com/giddyyupp/VLMDiff.
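As a rough illustration of the captioning-as-conditioning idea described above, the sketch below captions a normal training image with an off-the-shelf VLM and encodes the caption into the cross-attention features a latent diffusion UNet would attend to during training. The specific models (`Salesforce/blip-image-captioning-base`, `openai/clip-vit-large-patch14`), the prompt text, and the file name `normal_sample.png` are illustrative assumptions, not the paper's actual configuration; see the repository above for the authors' implementation.

```python
# Minimal sketch (assumptions: BLIP as the VLM, CLIP as the LDM text encoder;
# the paper's actual models and prompt may differ -- see the linked repo).
import torch
from PIL import Image
from transformers import (
    BlipProcessor,
    BlipForConditionalGeneration,
    CLIPTokenizer,
    CLIPTextModel,
)

device = "cuda" if torch.cuda.is_available() else "cpu"

# 1) Caption a normal training image with a pre-trained VLM and a simple prompt.
vlm_processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
vlm = BlipForConditionalGeneration.from_pretrained(
    "Salesforce/blip-image-captioning-base"
).to(device)

image = Image.open("normal_sample.png").convert("RGB")  # hypothetical file name
inputs = vlm_processor(image, "a photo of", return_tensors="pt").to(device)
with torch.no_grad():
    out = vlm.generate(**inputs, max_new_tokens=30)
caption = vlm_processor.decode(out[0], skip_special_tokens=True)

# 2) Encode the caption into the text-conditioning sequence that an LDM UNet
#    consumes via cross-attention.
tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14").to(device)

tokens = tokenizer(
    caption,
    padding="max_length",
    max_length=tokenizer.model_max_length,
    truncation=True,
    return_tensors="pt",
).to(device)
with torch.no_grad():
    cond = text_encoder(**tokens).last_hidden_state  # shape (1, 77, 768)

print(caption)
print(cond.shape)
```

Under this reading, a single caption-conditioned model can be trained across classes because the caption carries the class-specific context; the anomaly scoring itself is the paper's contribution and is not reproduced by this sketch.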
