SOTAVerified

Claudini: Autoresearch Discovers State-of-the-Art Adversarial Attack Algorithms for LLMs

2026-03-25 · Code Available

Alexander Panfilov, Peter Romov, Igor Shilov, Yves-Alexandre de Montjoye, Jonas Geiping, Maksym Andriushchenko


Abstract

LLM agents like Claude Code can not only write code but can also be used for autonomous AI research and engineering [rank2026posttrainbench; novikov2025alphaevolve]. We show that an autoresearch-style pipeline [karpathy2026autoresearch] powered by Claude Code discovers novel white-box adversarial attack algorithms that significantly outperform all 30+ existing methods in jailbreaking and prompt-injection evaluations. Starting from existing attack implementations such as GCG [zou2023universal], the agent iterates to produce new algorithms achieving up to 40% attack success rate (ASR) on CBRN queries against GPT-OSS-Safeguard-20B, compared to 10% for existing algorithms (teaser figure, left). The discovered algorithms generalize: attacks optimized on surrogate models transfer directly to held-out models, achieving 100% ASR against Meta-SecAlign-70B [chen2025secalign] versus 56% for the best baseline (teaser figure, middle). Extending the findings of [carlini2025autoadvexbench], our results are an early demonstration that incremental safety and security research can be automated with LLM agents. White-box adversarial red-teaming is particularly well suited to this: existing methods provide strong starting points, and the optimization objective yields dense, quantitative feedback. We release all discovered attacks, baseline implementations, and evaluation code at https://github.com/romovpa/claudini.
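The "dense, quantitative feedback" the abstract refers to is the core of GCG-style white-box attacks: a scalar loss over an adversarial suffix that the optimizer can query at every step. A minimal sketch of that loop, with a hypothetical stand-in `loss` function in place of the real target-sequence negative log-likelihood (the actual attacks operate on model logits, not strings):

```python
import random

def loss(prompt: str) -> float:
    """Hypothetical stand-in for the white-box objective.
    In a real attack this would be the NLL of a target completion
    under the victim model; here it just measures distance of the
    suffix tail from a fixed target string."""
    target = "!!!!"
    return float(sum(a != b for a, b in zip(prompt[-4:], target)))

def greedy_suffix_search(base: str, suffix: str,
                         steps: int = 50, seed: int = 0) -> str:
    """Hill-climb over single-character substitutions in the suffix,
    keeping any candidate that lowers the loss. This mirrors the
    GCG pattern of iterating on a suffix against a dense objective."""
    rng = random.Random(seed)
    alphabet = "!?x"
    best = suffix
    best_loss = loss(base + best)
    for _ in range(steps):
        i = rng.randrange(len(best))
        cand = best[:i] + rng.choice(alphabet) + best[i + 1:]
        cand_loss = loss(base + cand)
        if cand_loss < best_loss:  # accept only strict improvements
            best, best_loss = cand, cand_loss
    return best
```

By construction the loop never accepts a worse candidate, so the final loss is at most the starting loss; GCG proper replaces the random substitutions with gradient-guided token swaps, but the feedback structure is the same.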
