SOTAVerified

Red Teaming

Papers

Showing 101-150 of 251 papers

Title | Status | Hype
Offensive Security for AI Systems: Concepts, Practices, and Applications | - | 0
AgentVigil: Generic Black-Box Red-teaming for Indirect Prompt Injection against LLM Agents | - | 0
Safety by Measurement: A Systematic Literature Review of AI Safety Evaluation Methods | - | 0
DMRL: Data- and Model-aware Reward Learning for Data Extraction | - | 0
Red Teaming the Mind of the Machine: A Systematic Evaluation of Prompt Injection and Jailbreak Vulnerabilities in LLMs | - | 0
Red Teaming Large Language Models for Healthcare | - | 0
When Testing AI Tests Us: Safeguarding Mental Health on the Digital Frontlines | - | 0
SAGE: A Generic Framework for LLM Safety Evaluation | Code | 0
Understanding and Mitigating Risks of Generative AI in Financial Services | - | 0
RAG LLMs are Not Safer: A Safety Analysis of Retrieval-Augmented Generation for Large Language Models | - | 0
ELAB: Extensive LLM Alignment Benchmark in Persian Language | - | 0
X-Teaming: Multi-Turn Jailbreaks and Defenses with Adaptive Multi-Agents | - | 0
The Structural Safety Generalization Problem | Code | 0
Multi-lingual Multi-turn Automated Red Teaming for LLMs | - | 0
Strategize Globally, Adapt Locally: A Multi-Turn Red Teaming Agent with Dual-Level Learning | - | 0
Red Teaming with Artificial Intelligence-Driven Cyberattacks: A Scoping Review | - | 0
AutoRedTeamer: Autonomous Red Teaming with Lifelong Attack Integration | - | 0
MMDT: Decoding the Trustworthiness and Safety of Multimodal Foundation Models | - | 0
Making Every Step Effective: Jailbreaking Large Vision-Language Models Through Hierarchical KV Equalization | - | 0
A Framework for Evaluating Emerging Cyberattack Capabilities of AI | - | 0
Red Teaming Contemporary AI Models: Insights from Spanish and Basque Perspectives | - | 0
JBFuzz: Jailbreaking LLMs Efficiently and Effectively Using Fuzzing | - | 0
Reinforced Diffuser for Red Teaming Large Vision-Language Models | - | 0
MAD-MAX: Modular And Diverse Malicious Attack MiXtures for Automated LLM Red Teaming | - | 0
Know Thy Judge: On the Robustness Meta-Evaluation of LLM Safety Judges | - | 0
LLM-Safety Evaluations Lack Robustness | - | 0
Building Safe GenAI Applications: An End-to-End Overview of Red Teaming for Large Language Models | - | 0
Be a Multitude to Itself: A Prompt Evolution Framework for Red Teaming | - | 0
Fast Proxies for LLM Robustness Evaluation | - | 0
A Frontier AI Risk Management Framework: Bridging the Gap Between Current AI Practices and Established Risk Management | - | 0
Predictive Red Teaming: Breaking Policies Without Breaking Robots | - | 0
KDA: A Knowledge-Distilled Attacker for Generating Diverse Prompts to Jailbreak LLMs | - | 0
Constitutional Classifiers: Defending against Universal Jailbreaks across Thousands of Hours of Red Teaming | - | 0
RICoTA: Red-teaming of In-the-wild Conversation with Test Attempts | Code | 0
Playing Devil's Advocate: Unmasking Toxicity and Vulnerabilities in Large Vision-Language Models | - | 0
Text-Diffusion Red-Teaming of Large Language Models: Unveiling Harmful Behaviors with Proximity Constraints | - | 0
Lessons From Red Teaming 100 Generative AI Products | - | 0
Jailbreaking Multimodal Large Language Models via Shuffle Inconsistency | - | 0
Auto-RT: Automatic Jailbreak Strategy Exploration for Red-Teaming Large Language Models | - | 0
Diverse and Effective Red Teaming with Auto-generated Rewards and Multi-step Reinforcement Learning | - | 0
OpenAI o1 System Card | - | 0
POEX: Understanding and Mitigating Policy Executable Jailbreak Attacks against Embodied AI | - | 0
AI red-teaming is a sociotechnical challenge: on values, labor, and harms | - | 0
Look Before You Leap: Enhancing Attention and Vigilance Regarding Harmful Content with GuidelineLLM | Code | 0
Embodied Red Teaming for Auditing Robotic Foundation Models | - | 0
In-Context Experience Replay Facilitates Safety Red-Teaming of Text-to-Image Diffusion Models | - | 0
LLMStinger: Jailbreaking LLMs using RL fine-tuned LLMs | - | 0
Desert Camels and Oil Sheikhs: Arab-Centric Red Teaming of Frontier LLMs | - | 0
Audio Is the Achilles' Heel: Red Teaming Audio Large Multimodal Models | Code | 0
An Auditing Test To Detect Behavioral Shift in Language Models | Code | 0
Page 3 of 6

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | SUDO | Attack Success Rate | 41 | - | Unverified