SOTAVerified

Red Teaming

Papers

Showing 101–150 of 251 papers

Title | Status | Hype
Recent advancements in LLM Red-Teaming: Techniques, Defenses, and Ethical Considerations | – | 0
AutoDAN-Turbo: A Lifelong Agent for Strategy Self-Exploration to Jailbreak LLMs | Code | 3
SteerDiff: Steering towards Safe Text-to-Image Diffusion Models | – | 0
Automated Red Teaming with GOAT: the Generative Offensive Agent Tester | – | 0
PyRIT: A Framework for Security Risk Identification and Red Teaming in Generative AI System | Code | 7
Overriding Safety protections of Open-source Models | Code | 0
RED QUEEN: Safeguarding Large Language Models against Concealed Multi-Turn Jailbreaking | Code | 1
Holistic Automated Red Teaming for Large Language Models through Top-Down Test Case Generation and Multi-turn Interaction | Code | 1
Attack Atlas: A Practitioner's Perspective on Challenges and Pitfalls in Red Teaming GenAI | – | 0
Jailbreaking Large Language Models with Symbolic Mathematics | – | 0
What Is Wrong with My Model? Identifying Systematic Problems with Semantic Data Slicing | Code | 0
Games for AI Control: Models of Safety Evaluations of AI Deployment Protocols | – | 0
Exploring Straightforward Conversational Red-Teaming | – | 0
Conversational Complexity for Assessing Risk in Large Language Models | – | 0
Testing and Evaluation of Large Language Models: Correctness, Non-Toxicity, and Fairness | – | 0
LLM Defenses Are Not Robust to Multi-Turn Human Jailbreaks Yet | Code | 2
Advancing Adversarial Suffix Transfer Learning on Aligned Large Language Models | Code | 0
Atoxia: Red-teaming Large Language Models with Target Toxic Answers | – | 0
Ferret: Faster and Effective Automated Red Teaming with Reward-Based Scoring Technique | Code | 1
DiffZOO: A Purely Query-Based Black-Box Attack for Red-teaming Text-to-Image Generative Model via Zeroth Order Optimization | – | 0
SAGE-RT: Synthetic Alignment data Generation for Safety Evaluation and Red Teaming | – | 0
Kov: Transferable and Naturalistic Black-Box LLM Attacks using Markov Decision Processes and Tree Search | Code | 0
h4rm3l: A language for Composable Jailbreak Attack Synthesis | – | 0
SEAS: Self-Evolving Adversarial Safety Optimization for Large Language Models | Code | 1
Tamper-Resistant Safeguards for Open-Weight LLMs | Code | 2
Can Large Language Models Automatically Jailbreak GPT-4V? | – | 0
RedAgent: Red Teaming Large Language Models with Context-aware Autonomous Language Agent | – | 0
Latent Adversarial Training Improves Robustness to Persistent Harmful Behaviors in LLMs | Code | 1
Breaking the Global North Stereotype: A Global South-centric Benchmark Dataset for Auditing and Mitigating Biases in Facial Recognition Systems | – | 0
Arondight: Red Teaming Large Vision Language Models with Auto-generated Multi-modal Jailbreak Prompts | – | 0
Operationalizing a Threat Model for Red-Teaming Large Language Models (LLMs) | Code | 1
Phi-3 Safety Post-Training: Aligning Language Models with a "Break-Fix" Cycle | – | 0
Direct Unlearning Optimization for Robust and Safe Text-to-Image Models | – | 0
AgentPoison: Red-teaming LLM Agents via Poisoning Memory or Knowledge Bases | Code | 3
Reliable and Efficient Concept Erasure of Text-to-Image Diffusion Models | Code | 2
ASTPrompter: Weakly Supervised Automated Language Model Red-Teaming to Identify Low-Perplexity Toxic Prompts | Code | 0
The Human Factor in AI Red Teaming: Perspectives from Social and Collaborative Computing | – | 0
Automated Progressive Red Teaming | Code | 0
SeqAR: Jailbreak LLMs with Sequential Auto-Generated Characters | Code | 0
Purple-teaming LLMs with Adversarial Defender Training | – | 0
WildTeaming at Scale: From In-the-Wild Jailbreaks to (Adversarially) Safer Language Models | Code | 2
The Multilingual Alignment Prism: Aligning Global and Local Preferences to Reduce Harm | – | 0
CoSafe: Evaluating Large Language Model Safety in Multi-Turn Dialogue Coreference | Code | 1
Leveraging Reinforcement Learning in Red Teaming for Advanced Ransomware Attack Simulations | – | 0
Steering Without Side Effects: Improving Post-Deployment Control of Language Models | Code | 0
Adversaries Can Misuse Combinations of Safe Models | – | 0
Jailbreaking as a Reward Misspecification Problem | Code | 1
Finding Safety Neurons in Large Language Models | – | 0
Code-Switching Red-Teaming: LLM Evaluation for Safety and Multilingual Understanding | Code | 0
Dialogue Action Tokens: Steering Language Models in Goal-Directed Dialogue with a Multi-Turn Planner | Code | 1
Page 3 of 6

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | SUDO | Attack Success Rate | 41 | – | Unverified