SOTAVerified

Red Teaming

Papers

Showing 1–10 of 251 papers

Title | Status | Hype
garak: A Framework for Security Probing Large Language Models | Code | 9
PyRIT: A Framework for Security Risk Identification and Red Teaming in Generative AI System | Code | 7
Seamless: Multilingual Expressive and Streaming Speech Translation | Code | 6
HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal | Code | 4
AgentPoison: Red-teaming LLM Agents via Poisoning Memory or Knowledge Bases | Code | 3
AutoDAN-Turbo: A Lifelong Agent for Strategy Self-Exploration to Jailbreak LLMs | Code | 3
Red Teaming Language Models to Reduce Harms: Methods, Scaling Behaviors, and Lessons Learned | Code | 3
Curiosity-driven Red-teaming for Large Language Models | Code | 2
ALERT: A Comprehensive Benchmark for Assessing Large Language Models' Safety through Red Teaming | Code | 2
Fine-tuning Aligned Language Models Compromises Safety, Even When Users Do Not Intend To! | Code | 2
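Several of the entries above are runnable tools rather than pure papers. As a minimal sketch, the snippet below launches a garak probe run from Python via its command-line interface; the flags mirror garak's documented CLI, but the target model and probe module chosen here are illustrative assumptions, not recommendations — consult the garak documentation for the probes available in your installed version.

```python
import subprocess

# Minimal sketch of a garak probe run (flags per garak's documented CLI;
# the model name and probe module are assumptions for illustration).
# Assumes garak is installed (`pip install garak`) and any provider
# credentials (e.g. an OpenAI API key) are already set in the environment.
subprocess.run(
    [
        "python", "-m", "garak",
        "--model_type", "openai",          # generator family to probe
        "--model_name", "gpt-3.5-turbo",   # target model (assumption)
        "--probes", "encoding",            # probe module to run (assumption)
    ],
    check=True,  # raise if garak exits with a non-zero status
)
```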

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | SUDO | Attack Success Rate | 41 | – | Unverified
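Attack Success Rate is conventionally the fraction of red-team attempts judged to elicit the targeted behavior, usually reported as a percentage (so a claimed 41 presumably means 41%). A minimal sketch of that computation, with the function name and boolean per-attempt judgments as assumptions for illustration:

```python
def attack_success_rate(attempt_succeeded: list[bool]) -> float:
    """Fraction of red-team attempts judged successful, in [0.0, 1.0].

    Under the usual percentage convention, a claimed value of 41
    corresponds to a returned value of 0.41.
    """
    if not attempt_succeeded:
        return 0.0
    return sum(attempt_succeeded) / len(attempt_succeeded)
```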