Towards Lossless Token Pruning in Late-Interaction Retrieval Models

2025-04-17

Yuxuan Zong, Benjamin Piwowarski


Abstract

Late-interaction neural IR models like ColBERT offer a competitive effectiveness-efficiency trade-off across many benchmarks. However, they require a huge memory space to store the contextual representations of all document tokens. Some works have proposed heuristic or statistical techniques to prune tokens from each document; this, however, does not guarantee that the removed tokens have no impact on the retrieval score. Our work uses a principled approach to define how to prune tokens without affecting the score between a document and a query. We introduce three regularization losses that induce solutions with high pruning ratios, as well as two pruning strategies. We study them experimentally (in- and out-of-domain), showing that we can preserve ColBERT's performance while using only 30% of the tokens.
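To make the setting concrete, here is a minimal sketch of ColBERT-style late-interaction (MaxSim) scoring, together with a toy pruning rule. The pruning rule shown (keeping only the document tokens that win a per-query-token maximum) is purely illustrative and is not the paper's method; it merely demonstrates why removing tokens that never attain the maximum leaves the score unchanged for a given query:

```python
import numpy as np

def maxsim_score(query_emb, doc_emb):
    # Late-interaction (ColBERT-style) score: for each query token,
    # take the max similarity over document tokens, then sum over query tokens.
    sims = query_emb @ doc_emb.T          # (n_query, n_doc) similarity matrix
    return sims.max(axis=1).sum()

rng = np.random.default_rng(0)
q = rng.normal(size=(4, 8))               # 4 query token embeddings
d = rng.normal(size=(10, 8))              # 10 document token embeddings

full = maxsim_score(q, d)

# Toy pruning (illustrative only): keep just the document tokens that
# achieve the maximum for at least one query token of THIS query.
winners = np.unique((q @ d.T).argmax(axis=1))
pruned = maxsim_score(q, d[winners])

# The score is unchanged, since every per-query-token maximum survives.
print(np.isclose(full, pruned))
```

The hard part, which the paper addresses, is that at indexing time the query is unknown, so one must characterize tokens that cannot win the maximum for *any* admissible query; the regularization losses then push document representations toward solutions where many tokens are provably prunable.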
