Visual Words Meet BM25: Sparse Auto-Encoder Visual Word Scoring for Image Retrieval

2026-03-06

Donghoon Han, Eunhwan Park, Seunghyeon Seo


Abstract

Dense image retrieval is accurate but offers limited interpretability and attribution, and it can be compute-intensive at scale. We present BM25-V, which applies Okapi BM25 scoring to sparse visual-word activations from a Sparse Auto-Encoder (SAE) on Vision Transformer patch features. Across a large gallery, visual-word document frequencies are highly imbalanced and follow a Zipfian-like distribution, making BM25's inverse document frequency (IDF) weighting well suited for suppressing ubiquitous, low-information words and emphasizing rare, discriminative ones. BM25-V retrieves high-recall candidates via sparse inverted-index operations and serves as an efficient first-stage retriever for dense reranking. Across seven benchmarks, BM25-V achieves a Recall@200 of 0.993, enabling a two-stage pipeline that reranks only K=200 candidates per query and recovers near-dense accuracy within 0.2% on average. An SAE trained once on ImageNet-1K transfers zero-shot to seven fine-grained benchmarks without fine-tuning, and BM25-V retrieval decisions are attributable to specific visual words with quantified IDF contributions.
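As a rough sketch of the scoring the abstract describes, the snippet below applies standard Okapi BM25 over an inverted index, treating each gallery image as a "document" of visual-word ids (e.g. SAE activation indices). All names, parameters, and the toy data are illustrative assumptions, not taken from the paper; the IDF term is the usual BM25 formulation that down-weights ubiquitous words and boosts rare, discriminative ones.

```python
import math
from collections import Counter, defaultdict

def bm25_scores(query_words, gallery, k1=1.2, b=0.75):
    """Score gallery images against a query's visual words with Okapi BM25.

    gallery: list of lists of visual-word ids (one list per image).
    query_words: list of visual-word ids from the query image.
    Returns {image_index: score}; images sharing no words score 0 (absent).
    """
    N = len(gallery)
    doc_lens = [len(doc) for doc in gallery]
    avgdl = sum(doc_lens) / N

    # Inverted index: visual word -> {image_index: term frequency}
    index = defaultdict(dict)
    for i, doc in enumerate(gallery):
        for w, tf in Counter(doc).items():
            index[w][i] = tf

    scores = defaultdict(float)
    for w in set(query_words):
        postings = index.get(w, {})
        df = len(postings)  # document frequency of this visual word
        if df == 0:
            continue
        # BM25 IDF: near zero for ubiquitous words, large for rare ones
        idf = math.log((N - df + 0.5) / (df + 0.5) + 1.0)
        for i, tf in postings.items():
            denom = tf + k1 * (1 - b + b * doc_lens[i] / avgdl)
            scores[i] += idf * tf * (k1 + 1) / denom
    return dict(scores)

# Toy gallery: only image 2 contains the rare visual word 7
gallery = [[1, 1, 2], [1, 2, 3], [1, 7, 7, 2]]
query = [7, 1]
ranked = sorted(bm25_scores(query, gallery).items(), key=lambda kv: -kv[1])
```

Per-image scores decompose into per-word IDF contributions, which is what makes a retrieval decision attributable to specific visual words; in a two-stage pipeline the top-K of `ranked` would then be passed to a dense reranker.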