SOTAVerified

Red Teaming

Papers

Showing 51–100 of 251 papers

Title | Status | Hype
Trajectory Balance with Asynchrony: Decoupling Exploration and Learning for Fast, Scalable LLM Post-Training | Code | 1
AutoRedTeamer: Autonomous Red Teaming with Lifelong Attack Integration | – | 0
MMDT: Decoding the Trustworthiness and Safety of Multimodal Foundation Models | – | 0
A Framework for Evaluating Emerging Cyberattack Capabilities of AI | – | 0
Making Every Step Effective: Jailbreaking Large Vision-Language Models Through Hierarchical KV Equalization | – | 0
Red Teaming Contemporary AI Models: Insights from Spanish and Basque Perspectives | – | 0
JBFuzz: Jailbreaking LLMs Efficiently and Effectively Using Fuzzing | – | 0
MAD-MAX: Modular And Diverse Malicious Attack MiXtures for Automated LLM Red Teaming | – | 0
Reinforced Diffuser for Red Teaming Large Vision-Language Models | – | 0
Know Thy Judge: On the Robustness Meta-Evaluation of LLM Safety Judges | – | 0
LLM-Safety Evaluations Lack Robustness | – | 0
Building Safe GenAI Applications: An End-to-End Overview of Red Teaming for Large Language Models | – | 0
UDora: A Unified Red Teaming Framework against LLM Agents by Dynamically Hijacking Their Own Reasoning | Code | 1
Be a Multitude to Itself: A Prompt Evolution Framework for Red Teaming | – | 0
Fast Proxies for LLM Robustness Evaluation | – | 0
A Frontier AI Risk Management Framework: Bridging the Gap Between Current AI Practices and Established Risk Management | – | 0
Predictive Red Teaming: Breaking Policies Without Breaking Robots | – | 0
KDA: A Knowledge-Distilled Attacker for Generating Diverse Prompts to Jailbreak LLMs | – | 0
Understanding and Enhancing the Transferability of Jailbreaking Attacks | Code | 1
Constitutional Classifiers: Defending against Universal Jailbreaks across Thousands of Hours of Red Teaming | – | 0
RICoTA: Red-teaming of In-the-wild Conversation with Test Attempts | Code | 0
Virus: Harmful Fine-tuning Attack for Large Language Models Bypassing Guardrail Moderation | Code | 2
Siren: A Learning-Based Multi-Turn Attack Framework for Simulating Real-World Human Jailbreak Behaviors | Code | 1
Playing Devil's Advocate: Unmasking Toxicity and Vulnerabilities in Large Vision-Language Models | – | 0
Text-Diffusion Red-Teaming of Large Language Models: Unveiling Harmful Behaviors with Proximity Constraints | – | 0
Gandalf the Red: Adaptive Security for LLMs | Code | 1
Lessons From Red Teaming 100 Generative AI Products | – | 0
Jailbreaking Multimodal Large Language Models via Shuffle Inconsistency | – | 0
Auto-RT: Automatic Jailbreak Strategy Exploration for Red-Teaming Large Language Models | – | 0
Diverse and Effective Red Teaming with Auto-generated Rewards and Multi-step Reinforcement Learning | – | 0
OpenAI o1 System Card | – | 0
POEX: Understanding and Mitigating Policy Executable Jailbreak Attacks against Embodied AI | – | 0
AI red-teaming is a sociotechnical challenge: on values, labor, and harms | – | 0
Look Before You Leap: Enhancing Attention and Vigilance Regarding Harmful Content with GuidelineLLM | Code | 0
PrivAgent: Agentic-based Red-teaming for LLM Privacy Leakage | Code | 1
Embodied Red Teaming for Auditing Robotic Foundation Models | – | 0
In-Context Experience Replay Facilitates Safety Red-Teaming of Text-to-Image Diffusion Models | – | 0
GASP: Efficient Black-Box Generation of Adversarial Suffixes for Jailbreaking LLMs | Code | 1
LLMStinger: Jailbreaking LLMs using RL fine-tuned LLMs | – | 0
Audio Is the Achilles' Heel: Red Teaming Audio Large Multimodal Models | Code | 0
Desert Camels and Oil Sheikhs: Arab-Centric Red Teaming of Frontier LLMs | – | 0
An Auditing Test To Detect Behavioral Shift in Language Models | Code | 0
AdvAgent: Controllable Blackbox Red-teaming on Web Agents | – | 0
LLM-Assisted Red Teaming of Diffusion Models through "Failures Are Fated, But Can Be Faded" | – | 0
Insights and Current Gaps in Open-Source LLM Vulnerability Scanners: A Comparative Analysis | – | 0
SMILES-Prompting: A Novel Approach to LLM Jailbreak Attacks in Chemical Synthesis | Code | 0
BiasJailbreak: Analyzing Ethical Biases and Jailbreak Vulnerabilities in Large Language Models | Code | 0
A Formal Framework for Assessing and Mitigating Emergent Security Risks in Generative AI Models: Bridging Theory and Dynamic Risk Mitigation | – | 0
VLFeedback: A Large-Scale AI Feedback Dataset for Large Vision-Language Models Alignment | – | 0
Refusal-Trained LLMs Are Easily Jailbroken As Browser Agents | Code | 1
Page 2 of 6

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | SUDO | Attack Success Rate | 41 | – | Unverified