SOTAVerified

Red Teaming

Papers

Showing 1–50 of 251 papers

Title | Status | Hype
garak: A Framework for Security Probing Large Language Models | Code | 9
PyRIT: A Framework for Security Risk Identification and Red Teaming in Generative AI System | Code | 7
Seamless: Multilingual Expressive and Streaming Speech Translation | Code | 6
HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal | Code | 4
AutoDAN-Turbo: A Lifelong Agent for Strategy Self-Exploration to Jailbreak LLMs | Code | 3
AgentPoison: Red-teaming LLM Agents via Poisoning Memory or Knowledge Bases | Code | 3
Red Teaming Language Models to Reduce Harms: Methods, Scaling Behaviors, and Lessons Learned | Code | 3
Virus: Harmful Fine-tuning Attack for Large Language Models Bypassing Guardrail Moderation | Code | 2
LLM Defenses Are Not Robust to Multi-Turn Human Jailbreaks Yet | Code | 2
Tamper-Resistant Safeguards for Open-Weight LLMs | Code | 2
Reliable and Efficient Concept Erasure of Text-to-Image Diffusion Models | Code | 2
WildTeaming at Scale: From In-the-Wild Jailbreaks to (Adversarially) Safer Language Models | Code | 2
Jailbreak Vision Language Models via Bi-Modal Adversarial Prompt | Code | 2
Improved Techniques for Optimization-Based Jailbreaking on Large Language Models | Code | 2
AdvPrompter: Fast Adaptive Adversarial Prompting for LLMs | Code | 2
ALERT: A Comprehensive Benchmark for Assessing Large Language Models' Safety through Red Teaming | Code | 2
Against The Achilles' Heel: A Survey on Red Teaming for Generative Models | Code | 2
Curiosity-driven Red-teaming for Large Language Models | Code | 2
Agent Smith: A Single Image Can Jailbreak One Million Multimodal LLM Agents Exponentially Fast | Code | 2
Fine-tuning Aligned Language Models Compromises Safety, Even When Users Do Not Intend To! | Code | 2
GPTFUZZER: Red Teaming Large Language Models with Auto-Generated Jailbreak Prompts | Code | 2
GPT-4 Is Too Smart To Be Safe: Stealthy Chat with LLMs via Cipher | Code | 2
RedTeamCUA: Realistic Adversarial Testing of Computer-Use Agents in Hybrid Web-OS Environments | Code | 1
MTSA: Multi-turn Safety Alignment for LLMs through Multi-round Red-teaming | Code | 1
OET: Optimization-based prompt injection Evaluation Toolkit | Code | 1
RainbowPlus: Enhancing Adversarial Prompt Generation via Evolutionary Quality-Diversity Search | Code | 1
sudo rm -rf agentic_security | Code | 1
Trajectory Balance with Asynchrony: Decoupling Exploration and Learning for Fast, Scalable LLM Post-Training | Code | 1
UDora: A Unified Red Teaming Framework against LLM Agents by Dynamically Hijacking Their Own Reasoning | Code | 1
Understanding and Enhancing the Transferability of Jailbreaking Attacks | Code | 1
Siren: A Learning-Based Multi-Turn Attack Framework for Simulating Real-World Human Jailbreak Behaviors | Code | 1
Gandalf the Red: Adaptive Security for LLMs | Code | 1
PrivAgent: Agentic-based Red-teaming for LLM Privacy Leakage | Code | 1
GASP: Efficient Black-Box Generation of Adversarial Suffixes for Jailbreaking LLMs | Code | 1
Refusal-Trained LLMs Are Easily Jailbroken As Browser Agents | Code | 1
RED QUEEN: Safeguarding Large Language Models against Concealed Multi-Turn Jailbreaking | Code | 1
Holistic Automated Red Teaming for Large Language Models through Top-Down Test Case Generation and Multi-turn Interaction | Code | 1
Ferret: Faster and Effective Automated Red Teaming with Reward-Based Scoring Technique | Code | 1
SEAS: Self-Evolving Adversarial Safety Optimization for Large Language Models | Code | 1
Latent Adversarial Training Improves Robustness to Persistent Harmful Behaviors in LLMs | Code | 1
Operationalizing a Threat Model for Red-Teaming Large Language Models (LLMs) | Code | 1
CoSafe: Evaluating Large Language Model Safety in Multi-Turn Dialogue Coreference | Code | 1
Jailbreaking as a Reward Misspecification Problem | Code | 1
Dialogue Action Tokens: Steering Language Models in Goal-Directed Dialogue with a Multi-Turn Planner | Code | 1
"Not Aligned" is Not "Malicious": Being Careful about Hallucinations of Large Language Models' Jailbreak | Code | 1
MLLMGuard: A Multi-dimensional Safety Evaluation Suite for Multimodal Large Language Models | Code | 1
Unelicitable Backdoors in Language Models via Cryptographic Transformer Circuits | Code | 1
DiveR-CT: Diversity-enhanced Red Teaming Large Language Model Assistants with Relaxing Constraints | Code | 1
Learning diverse attacks on large language models for robust red-teaming and safety tuning | Code | 1
ART: Automatic Red-teaming for Text-to-Image Models to Protect Benign Users | Code | 1
Page 1 of 6

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | SUDO | Attack Success Rate | 41 | – | Unverified