Through the Stealth Lens: Rethinking Attacks and Defenses in RAG
Sarthak Choudhary, Nils Palumbo, Ashish Hooda, Krishnamurthy Dj Dvijotham, Somesh Jha
Code: github.com/sarthak-choudhary/stealthy_attacks_against_rag (official, PyTorch)
Abstract
Retrieval-augmented generation (RAG) systems are vulnerable to attacks that inject poisoned passages into the retrieved set, even at low corruption rates. We show that existing attacks are not designed to be stealthy, allowing reliable detection and mitigation. We formalize stealth using a distinguishability-based security game. If a few poisoned passages are designed to control the response, they must differentiate themselves from benign ones, inherently compromising stealth. This motivates attackers to rigorously analyze intermediate signals involved in generation, such as attention patterns or next-token probability distributions, to avoid easily detectable traces of manipulation. Leveraging attention patterns, we propose a passage-level score, the Normalized Passage Attention Score, used by our Attention-Variance Filter algorithm to identify and filter potentially poisoned passages. This method mitigates existing attacks, improving accuracy by up to 20% over baseline defenses. To probe the limits of attention-based defenses, we craft stealthier adaptive attacks that obscure such traces, achieving up to 35% attack success rate, and highlight the challenges of improving stealth.
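The abstract only names the Normalized Passage Attention Score and the Attention-Variance Filter; their exact definitions are in the paper. The following is a minimal sketch of the general idea under stated assumptions: we assume a 1-D vector of attention mass over context tokens (already averaged over heads, layers, and generated tokens), known token spans per retrieved passage, and a simple z-score outlier rule as the variance-based filter. The function names and threshold are illustrative, not the paper's implementation.

```python
import numpy as np

def normalized_passage_attention(attn, passage_spans):
    """Aggregate token-level attention into one normalized score per passage.

    attn: 1-D array of attention mass per context token position (assumed
          pre-averaged over heads, layers, and generated tokens).
    passage_spans: list of (start, end) token index ranges, one per passage.
    Returns a distribution over passages (scores sum to 1).
    """
    # Mean attention per token within each passage, so passage length
    # does not dominate the score.
    scores = np.array([attn[s:e].sum() / max(e - s, 1) for s, e in passage_spans])
    return scores / scores.sum()

def attention_variance_filter(scores, z=2.0):
    """Hypothetical variance-based filter: drop passages whose attention
    score deviates from the mean by more than z standard deviations.

    Returns indices of passages to keep.
    """
    mu, sigma = scores.mean(), scores.std()
    if sigma == 0:  # all passages attended equally; nothing to filter
        return list(range(len(scores)))
    return [i for i, s in enumerate(scores) if abs(s - mu) <= z * sigma]
```

For example, if one of three equally long passages absorbs 80% of the attention mass, a tight threshold (z=1) flags it as a potential poison candidate while keeping the other two. The underlying intuition matches the abstract: a poisoned passage that must control the response tends to attract disproportionate attention, leaving a detectable trace.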