| Trajectory Balance with Asynchrony: Decoupling Exploration and Learning for Fast, Scalable LLM Post-Training | Mar 24, 2025 | Diversity, Large Language Model | Code Available | 1 |
| AutoRedTeamer: Autonomous Red Teaming with Lifelong Attack Integration | Mar 20, 2025 | Red Teaming | Unverified | 0 |
| MMDT: Decoding the Trustworthiness and Safety of Multimodal Foundation Models | Mar 19, 2025 | Adversarial Robustness, Autonomous Driving | Unverified | 0 |
| A Framework for Evaluating Emerging Cyberattack Capabilities of AI | Mar 14, 2025 | Red Teaming | Unverified | 0 |
| Making Every Step Effective: Jailbreaking Large Vision-Language Models Through Hierarchical KV Equalization | Mar 14, 2025 | Red Teaming | Unverified | 0 |
| Red Teaming Contemporary AI Models: Insights from Spanish and Basque Perspectives | Mar 13, 2025 | Red Teaming | Unverified | 0 |
| JBFuzz: Jailbreaking LLMs Efficiently and Effectively Using Fuzzing | Mar 12, 2025 | Red Teaming, Safety Alignment | Unverified | 0 |
| MAD-MAX: Modular And Diverse Malicious Attack MiXtures for Automated LLM Red Teaming | Mar 8, 2025 | Red Teaming | Unverified | 0 |
| Reinforced Diffuser for Red Teaming Large Vision-Language Models | Mar 8, 2025 | Large Language Model, Red Teaming | Unverified | 0 |
| Know Thy Judge: On the Robustness Meta-Evaluation of LLM Safety Judges | Mar 6, 2025 | Benchmarking, Language Modeling | Unverified | 0 |
| LLM-Safety Evaluations Lack Robustness | Mar 4, 2025 | Red Teaming, Response Generation | Unverified | 0 |
| Building Safe GenAI Applications: An End-to-End Overview of Red Teaming for Large Language Models | Mar 3, 2025 | Red Teaming, Survey | Unverified | 0 |
| UDora: A Unified Red Teaming Framework against LLM Agents by Dynamically Hijacking Their Own Reasoning | Feb 28, 2025 | Large Language Model, Red Teaming | Code Available | 1 |
| Be a Multitude to Itself: A Prompt Evolution Framework for Red Teaming | Feb 22, 2025 | Diversity, In-Context Learning | Unverified | 0 |
| Fast Proxies for LLM Robustness Evaluation | Feb 14, 2025 | Red Teaming | Unverified | 0 |
| A Frontier AI Risk Management Framework: Bridging the Gap Between Current AI Practices and Established Risk Management | Feb 10, 2025 | Management, Red Teaming | Unverified | 0 |
| Predictive Red Teaming: Breaking Policies Without Breaking Robots | Feb 10, 2025 | Imitation Learning, Red Teaming | Unverified | 0 |
| KDA: A Knowledge-Distilled Attacker for Generating Diverse Prompts to Jailbreak LLMs | Feb 5, 2025 | Diversity, Prompt Engineering | Unverified | 0 |
| Understanding and Enhancing the Transferability of Jailbreaking Attacks | Feb 5, 2025 | Intent Recognition, Red Teaming | Code Available | 1 |
| Constitutional Classifiers: Defending against Universal Jailbreaks across Thousands of Hours of Red Teaming | Jan 31, 2025 | Red Teaming | Unverified | 0 |
| RICoTA: Red-teaming of In-the-wild Conversation with Test Attempts | Jan 29, 2025 | Chatbot, Red Teaming | Code Available | 0 |
| Virus: Harmful Fine-tuning Attack for Large Language Models Bypassing Guardrail Moderation | Jan 29, 2025 | Red Teaming, Safety Alignment | Code Available | 2 |
| Siren: A Learning-Based Multi-Turn Attack Framework for Simulating Real-World Human Jailbreak Behaviors | Jan 24, 2025 | Red Teaming | Code Available | 1 |
| Playing Devil's Advocate: Unmasking Toxicity and Vulnerabilities in Large Vision-Language Models | Jan 14, 2025 | Red Teaming | Unverified | 0 |
| Text-Diffusion Red-Teaming of Large Language Models: Unveiling Harmful Behaviors with Proximity Constraints | Jan 14, 2025 | Large Language Model, Red Teaming | Unverified | 0 |
| Gandalf the Red: Adaptive Security for LLMs | Jan 14, 2025 | Blocking, Language Modeling | Code Available | 1 |
| Lessons From Red Teaming 100 Generative AI Products | Jan 13, 2025 | Benchmarking, Red Teaming | Unverified | 0 |
| Jailbreaking Multimodal Large Language Models via Shuffle Inconsistency | Jan 9, 2025 | Red Teaming | Unverified | 0 |
| Auto-RT: Automatic Jailbreak Strategy Exploration for Red-Teaming Large Language Models | Jan 3, 2025 | Red Teaming | Unverified | 0 |
| Diverse and Effective Red Teaming with Auto-generated Rewards and Multi-step Reinforcement Learning | Dec 24, 2024 | Diversity, Large Language Model | Unverified | 0 |
| OpenAI o1 System Card | Dec 21, 2024 | Management, Red Teaming | Unverified | 0 |
| POEX: Understanding and Mitigating Policy Executable Jailbreak Attacks against Embodied AI | Dec 21, 2024 | LLM Jailbreak, Red Teaming | Unverified | 0 |
| AI red-teaming is a sociotechnical challenge: on values, labor, and harms | Dec 12, 2024 | Red Teaming | Unverified | 0 |
| Look Before You Leap: Enhancing Attention and Vigilance Regarding Harmful Content with GuidelineLLM | Dec 10, 2024 | Red Teaming | Code Available | 0 |
| PrivAgent: Agentic-based Red-teaming for LLM Privacy Leakage | Dec 7, 2024 | Red Teaming, Safety Alignment | Code Available | 1 |
| Embodied Red Teaming for Auditing Robotic Foundation Models | Nov 27, 2024 | Red Teaming | Unverified | 0 |
| In-Context Experience Replay Facilitates Safety Red-Teaming of Text-to-Image Diffusion Models | Nov 25, 2024 | Red Teaming, Semantic Similarity | Unverified | 0 |
| GASP: Efficient Black-Box Generation of Adversarial Suffixes for Jailbreaking LLMs | Nov 21, 2024 | Bayesian Optimization, Red Teaming | Code Available | 1 |
| LLMStinger: Jailbreaking LLMs using RL fine-tuned LLMs | Nov 13, 2024 | Prompt Engineering, Red Teaming | Unverified | 0 |
| Audio Is the Achilles' Heel: Red Teaming Audio Large Multimodal Models | Oct 31, 2024 | Red Teaming, Safety Alignment | Code Available | 0 |
| Desert Camels and Oil Sheikhs: Arab-Centric Red Teaming of Frontier LLMs | Oct 31, 2024 | Red Teaming | Unverified | 0 |
| An Auditing Test To Detect Behavioral Shift in Language Models | Oct 25, 2024 | Benchmarking, Change Detection | Code Available | 0 |
| AdvAgent: Controllable Blackbox Red-teaming on Web Agents | Oct 22, 2024 | Decision Making, Red Teaming | Unverified | 0 |
| LLM-Assisted Red Teaming of Diffusion Models through "Failures Are Fated, But Can Be Faded" | Oct 22, 2024 | Deep Reinforcement Learning, Red Teaming | Unverified | 0 |
| Insights and Current Gaps in Open-Source LLM Vulnerability Scanners: A Comparative Analysis | Oct 21, 2024 | Red Teaming | Unverified | 0 |
| SMILES-Prompting: A Novel Approach to LLM Jailbreak Attacks in Chemical Synthesis | Oct 21, 2024 | LLM Jailbreak, Red Teaming | Code Available | 0 |
| BiasJailbreak: Analyzing Ethical Biases and Jailbreak Vulnerabilities in Large Language Models | Oct 17, 2024 | Red Teaming, Safety Alignment | Code Available | 0 |
| A Formal Framework for Assessing and Mitigating Emergent Security Risks in Generative AI Models: Bridging Theory and Dynamic Risk Mitigation | Oct 15, 2024 | Anomaly Detection, Red Teaming | Unverified | 0 |
| VLFeedback: A Large-Scale AI Feedback Dataset for Large Vision-Language Models Alignment | Oct 12, 2024 | Diversity, Hallucination | Unverified | 0 |
| Refusal-Trained LLMs Are Easily Jailbroken As Browser Agents | Oct 11, 2024 | Chatbot, Red Teaming | Code Available | 1 |