
ABounD: Adversarial Boundary-Driven Few-Shot Learning for Multi-Class Anomaly Detection

2026-03-14

Runzhi Deng, Yundi Hu, Xinshuang Zhang, Zhao Wang, Xixi Liu, Wang-Zhou Dai, Caifeng Shan, Fang Zhao


Abstract

Few-shot multi-class industrial anomaly detection identifies diverse defects across multiple categories using a single unified model and limited normal samples. Although vision-language models offer strong generalization, modeling multiple distinct category manifolds concurrently without actual anomalous data causes feature space collapse and cross-class interference. Consequently, existing methods often fail to balance scalability and precision, leading to either isolated single-class retraining or excessively loose decision margins. To address this limitation, we present a one-for-all learning framework called ABounD that unites semantic concept anchoring with geometric boundary optimization. This method employs two lightweight mechanisms to resolve multi-class ambiguity. First, the Dynamic Concept Fusion module generates class-adaptive semantic anchors via query-aware hierarchical calibration, disentangling overlapping category concepts. Second, using these anchors, the Adversarial Boundary Forging module constructs a tight, class-tailored decision margin by synthesizing adversarial boundary-level fence features to prevent cross-class boundary blurring. Optimized in a single stage, ABounD removes the requirement for isolated per-category retraining in few-shot settings. Experiments on seven industrial benchmarks show that the proposed method achieves state-of-the-art detection and localization performance for multi-class few-shot anomaly detection while maintaining low computational costs during training and inference.
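The two mechanisms described above can be illustrated with a minimal sketch. This is not the authors' implementation: the fusion weight `alpha`, the use of a query-feature mean for calibration, and the radial perturbation used to forge fence features are all illustrative assumptions standing in for the paper's Dynamic Concept Fusion and Adversarial Boundary Forging modules.

```python
import numpy as np

def fuse_anchor(text_concept, query_feats, alpha=0.5):
    """Toy stand-in for Dynamic Concept Fusion: calibrate a generic
    text-derived concept vector toward the query image's normal features,
    yielding a class-adaptive semantic anchor (assumed formulation)."""
    q = query_feats.mean(axis=0)                      # query-aware statistic
    anchor = alpha * text_concept + (1 - alpha) * q   # hierarchical blend
    return anchor / np.linalg.norm(anchor)            # unit-normalize

def forge_fence(normal_feats, anchor, margin=0.2):
    """Toy stand-in for Adversarial Boundary Forging: displace each normal
    feature radially away from its class anchor so it lands just outside a
    target margin, producing boundary-level 'fence' features that a score
    head can then be trained to reject (assumed formulation)."""
    f = normal_feats / np.linalg.norm(normal_feats, axis=1, keepdims=True)
    d = f - anchor                                    # direction off-anchor
    d = d / np.linalg.norm(d, axis=1, keepdims=True)
    return anchor + (margin + 1e-3) * d               # sit just past margin

rng = np.random.default_rng(0)
text_concept = rng.normal(size=64)
query_feats = rng.normal(size=(8, 64))
anchor = fuse_anchor(text_concept, query_feats)
fence = forge_fence(query_feats, anchor, margin=0.2)
print(np.linalg.norm(fence - anchor, axis=1))         # all ≈ 0.201
```

Synthesizing fence features per class anchor, rather than sharing one global margin, is what lets a single model keep class-tailored boundaries tight without real anomalous data.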
