
Explainable Visual Anomaly Detection via Concept Bottleneck Models

2026-03-13

Arianna Stropeni, Valentina Zaccaria, Francesco Borsatti, Davide Dalle Pezze, Manuel Barusco, Gian Antonio Susto


Abstract

In recent years, Visual Anomaly Detection (VAD) has gained significant attention due to its ability to identify defects using only normal images during training. Many VAD models work without supervision but are still able to provide visual explanations by highlighting the anomalous regions within an image. However, although these visual explanations can be helpful, they lack a direct and semantically meaningful interpretation for users. To address this limitation, we propose extending Concept Bottleneck Models (CBMs) to the VAD setting. By learning meaningful concepts, the network can provide human-interpretable descriptions of anomalies, offering a novel and more insightful way to explain them. Our main contributions are threefold: (i) we introduce a concept-based framework for anomaly explanation by extending CBMs to the VAD setting for the first time; (ii) we evaluate multiple supervision regimes, ranging from fully supervised to synthetic-only anomaly settings, analyzing the trade-off between performance and labeling effort; (iii) we propose a dual-branch architecture that combines a CBM branch for concept-level explanations with a visual branch for pixel-level anomaly localization, bridging semantic and spatial interpretability. When evaluated across three well-established VAD benchmarks, our approach, Concept-Aware Visual Anomaly Detection (CONVAD), achieves performance comparable to classic VAD methods, while providing richer, concept-driven explanations that enhance interpretability and trust in VAD systems.
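The dual-branch design described in contribution (iii) can be sketched as follows. This is a minimal illustrative mock-up, not the authors' implementation: the backbone, the concept vocabulary, the linear concept weights, and the deviation-based anomaly map are all hypothetical stand-ins chosen only to show how one shared feature extractor can feed both a concept-level (semantic) head and a pixel-level (spatial) head.

```python
# Illustrative sketch of a dual-branch, CONVAD-style forward pass.
# All names and numbers are hypothetical assumptions for exposition.
import math
import random

random.seed(0)

CONCEPTS = ["scratch", "dent", "discoloration"]  # example concept vocabulary

def backbone(image):
    """Stand-in feature extractor: returns an HxW grid of feature scalars."""
    return [[pixel * 0.5 for pixel in row] for row in image]

def concept_branch(features):
    """Concept bottleneck head: pool features, score each concept (sigmoid)."""
    flat = [v for row in features for v in row]
    pooled = sum(flat) / len(flat)
    weights = [1.0, -0.5, 0.25]  # one toy linear weight per concept
    return {c: 1 / (1 + math.exp(-w * pooled))
            for c, w in zip(CONCEPTS, weights)}

def visual_branch(features):
    """Visual head: per-pixel anomaly map (here, deviation from the mean)."""
    flat = [v for row in features for v in row]
    mu = sum(flat) / len(flat)
    return [[abs(v - mu) for v in row] for row in features]

def convad_forward(image):
    feats = backbone(image)
    concepts = concept_branch(feats)    # concept-level (semantic) explanation
    anomaly_map = visual_branch(feats)  # pixel-level (spatial) localization
    # Image-level anomaly score: the strongest per-pixel response.
    score = max(v for row in anomaly_map for v in row)
    return concepts, anomaly_map, score

image = [[random.random() for _ in range(4)] for _ in range(4)]
concepts, amap, score = convad_forward(image)
print(sorted(concepts))  # human-readable concept names for the explanation
```

The point of the sketch is the split after the shared backbone: the concept head answers "what kind of anomaly is this?" in human terms, while the visual head answers "where is it?", so the two explanations complement rather than replace each other.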
